HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of Computing by Joe Duffy

Confreaks · Youtube · 7 HN points · 12 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Confreaks's video "RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of Computing by Joe Duffy".
Youtube Summary
Closing Keynote: Safe Systems Software and the Future of Computing by Joe Duffy

Someday in the future, all important systems software on the planet will be written in a safe programming language. The questions are, when, and how do we get there?

In this talk, I will describe my experiences at Microsoft building a new operating system written entirely in a Rust-like safe systems language. I will also talk about my subsequent efforts taking those experiences and applying them to the heart of Windows, and the associated technical and cultural challenges.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Regarding Midori, besides the blog posts, Joe Duffy did two talks about the subject,

"RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of Computing by Joe Duffy"

https://www.youtube.com/watch?v=CuD7SCqHB7k

"Safe Systems Programming in C# and .NET"

https://www.infoq.com/presentations/csharp-systems-programmi...

In one of them, he mentions that even with Midori proving its value to the Windows team, they were quite dismissive of it.

It appears to have also been yet another victim of the usual DevDiv vs WinDev politics.

The usual politics: .NET vs C++ at Microsoft.

Here Joe Duffy mentions towards the end that even with Midori running in front of them, the Windows team was sceptical of it.

https://www.youtube.com/watch?v=CuD7SCqHB7k

Since .NET's introduction, Microsoft seems to lack the kind of culture that Apple and Google have towards steering their platforms into safer languages (e.g. how constrained the NDK happens to be, or first-class bindings to all OS APIs in Swift).

It appears that every attempt to do so ends up being sabotaged in some way to ensure C++'s reign at Microsoft and in Windows subsystems.

Note that Windows is the only desktop/mobile OS where the GUI stack is still fully C++ based, and they even make a point of it.

https://microsoft.github.io/microsoft-ui-xaml

> WinUI is powered by a highly optimized C++ core that delivers blistering performance, long battery life, and responsive interactivity that professional developers demand. Its lower system utilization allows it to run on a wider range of hardware, ensuring your sophisticated workloads run with ease.

hulitu
As far as I know, .NET binaries cannot run from a network share, which is a big limitation.
pjmlp
When there are issues and one owns the runtime, there is a way to fix them, given the willingness.
teh_klev
Not actually correct. You needed to fiddle about with code access security using the caspol tool, and then they'd run just fine.

.NET 4 disables CAS by default, though there can still be a bit of faff to get an exe to launch from a file share, but it is doable.

.NET core abandoned CAS (https://docs.microsoft.com/en-us/dotnet/core/compatibility/c...)

tcbawo
Perhaps due to a pervasive desire to maintain backwards compatibility, bugs and all?
sterlind
nah, they had Drawbridge running on Midori, which allowed nearly perfect app compat. It really was a case of internal politics as far as I know.

though another way of looking at it is that rewriting the entirety of Windows in Midori would have been a monumental feat, and by the time Satya became CEO it was clear Windows wasn't the future of the company, so that kind of investment didn't make sense.

pacaro
Maybe, but my recollection from the time (as related to me by Bryan Willman) was that a group of base team architects did an analysis of the .NET runtime and created a list of technical concerns that they felt precluded its use in any critical component of the OS.

Of course, all of these concerns could presumably have been addressed or mitigated, but by the time the analysis was released it was too late.

pjmlp
That is exactly my view: instead of addressing the issues, like Apple and Google have been doing for the last decade, the decision was to double down on C++ and COM.
pacaro
Yes. And to a certain extent a **-swinging contest between DevDiv and the base team
gedy
My previous company had a C++ portion/team and wow were they defensive about changing anything :-)
threefour
We know how well that worked for Symbian.
pjmlp
Also politics.

Back in the day Nokia did road shows to gather employees feedback before going public.

One point that I and others made regarding Maemo was the missing radio link.

Naturally that was a no-go, as it would eat into Symbian's turf.

I happened to be in Espoo shortly after the burning platforms memo, and it did not land well, especially since the Qt and PIPS effort was finally gathering some support.

pjmlp
That doesn't justify why Managed DirectX, XNA, Silverlight (on WP7) and .NET Native got the axe.

Even the Longhorn failure, which resulted in the "everything COM" approach that then evolved into WinRT (basically COM + IInspectable + .NET metadata + sandboxing), could have worked out if everyone had actually worked together.

I don't believe that, had there actually been willingness from the Windows/C++ crowd, they couldn't have helped push the .NET runtime towards improvements similar to .NET Native.

Or, to put it another way, something like the efforts made by Apple and Google improving Objective-C/Swift and ART, respectively.

In fact, the reasoning behind bringing Midori learnings into .NET Core (thus making C# more D-like) probably has more to do with C++/CLI being Windows-only, and with the managed-language competition outside Windows, than anything else.

zamalek
> could have worked out if everyone actually worked together

Apparently, Sinofsky suffers dreadfully from "not invented here" syndrome. He simply does not trust anything his team has not built, and Midori is the tip of the iceberg. There are some instances where his attitude worked, but they were few and far between.

You'll notice that there is a very distinct "firewall" between core architecture and "other teams" on the projects he managed, even today (e.g. .NET Office extensions are just COM).

pjmlp
Indeed, that is quite clear between the lines in MSJ, Channel 9, blogs, PDC, BUILD, among others, ever since Visual Studio .NET came out.
To fully get where I am coming from, you have to go back to when .NET was released.

.NET was supposed to be the great reunification of the VB, C++ and COM runtimes; then a Java touch also got into the mix and .NET happened (it was initially known as Ext-VOS).

https://docs.microsoft.com/en-gb/archive/blogs/dsyme/more-c-...

Hence the CLR is just like WASM + GC, if you prefer a modern comparison.

If you go back into web archives to when Visual Studio .NET was released, it was going to be .NET everywhere, across the whole stack.

However, a big management mistake happened: .NET was part of the DevTools business unit, while C++ was kept under WinDev. Up until Satya started to change the culture, it was pretty much WinDev vs DevTools.

So Managed DirectX comes and eventually gets killed; XNA and Silverlight take over Windows Phone 7, then get killed by WinRT and DirectXTK; and so on.

Going back to the original statement, if you Google for why Longhorn did not work out, you will find plenty of .NET blaming.

https://hackernoon.com/what-really-happened-with-vista-4ca7f...

Yet Android, ChromeOS and Midori are examples of what happens when everyone actually works in the same direction to bring an OS into production.

Joe Duffy makes some remarks in his two talks hinting at why fighting the Windows culture was a losing battle:

"Systems Programming in C# " - https://www.infoq.com/presentations/csharp-systems-programmi...

"Safe Systems Software and the Future of Computing" - https://www.youtube.com/watch?v=CuD7SCqHB7k

Note that for some time the Asian Bing nodes were actually running on top of Midori as a production test.

A big decision in Vista was to replicate the .NET design using COM instead (hello WinDev), hence why all major modern Windows APIs are now COM based.

Windows 8 doubled down on that by introducing WinRT, with AOT-compiled .NET and C++/CX using COM as the future Windows runtime. This was a point of friction, as .NET Native isn't 100% compatible with regular .NET, and many C++ devs disliked the C++/CX extensions (later C++/WinRT replaced C++/CX, but that is another story).

So, to sort out all the adoption chaos, Project Reunion was born, which is basically about merging the COM improvements brought by WinRT, plus the app sandbox, into Win32, and forgetting the split ever happened.

Even Reunion has had a couple of hiccups: it started as XAML Islands, then it eventually became clear that alone wouldn't do it, thus Project Reunion.

https://blogs.windows.com/windowsdeveloper/2020/05/19/develo...

And now, a year later, it has been renamed the Windows App SDK.

https://blogs.windows.com/windowsdeveloper/2021/06/24/what-w...

Note that many System C# features now live in C# 7 and later versions, and were also the basis of the C++ Core Guidelines.

Also note, as an example of the internal competition, the plethora of GUIs being done now: Forms, WPF, WinUI, MAUI, Blazor, React Native for Windows.

Maybe if all divisions had worked together more on Longhorn, the project would actually have happened, and Vista wouldn't have been needed, nor the strong emphasis on COM that it started.

mastax
Thanks for the context. It's very frustrating as a .NET developer that infighting set back .NET GUI development by 10 years. There's still no supported way to use DirectX from .NET. All the new GUI tech is moving in the right direction but is unfinished to the point that still only WPF and WinForms can meet my requirements. I really wanted to ditch WPF since the DirectX 11 -> DirectX 9 (WPF) interop is so hacky.
pjmlp
Unfortunately we are better off with community efforts; the DirectX team is really deep into the C++ mindset and nothing else, no wonder it belongs to the WinDev side.

https://github.com/microsoft/WindowsAppSDK/issues/14#issueco...

Joe spoke on this topic at RustConf a few years back https://www.youtube.com/watch?v=CuD7SCqHB7k
See also "Safe Systems Software and the Future of Computing" by Joe Duffy, the closing keynote at RustConf 2017:

https://www.youtube.com/watch?v=CuD7SCqHB7k

Midori and Rust have several striking similarities, and Microsoft's recent uptick in interest in Rust (and Rust-like languages) bodes well for improved software quality.

oaiey
The language team of that time also participated in the C# 7.2/7.3 performance wave and the technology that makes .NET Core perform so well nowadays.

Rust moves system programming up. At the same time, C# moves application development down. Interesting.

Animats
"Three safeties: memory, type, concurrency, for all code."

Right. I've been saying that for, sadly, decades now, ever since my proof of correctness days. I sometimes word this as the three big questions: "how big is it", "who owns it", and "who locks it". C is notorious for totally failing to address any of those, which is the cause of many of the troubles in computing.

Then, for way too long, we had the "but we can't afford memory safety" era, or "Python/Java/Javascript/whatever are too slow". It took way too long to get past that. Ada was supposed to address safety with speed, but it never caught on.

Rust looked promising at first. Huge step in the right direction with the borrow checker. But Rust was captured by the template and functional people, and seems to have become too complicated to replace C and C++ for the average programmer. We have yet to see the simplicity and stability of Go combined with the safety of Rust.

Duffy makes the point that slices are an important construct for safety. Most things expressed with pointer arithmetic in C are expressible as slices. I once proposed slices for C [1] but the political problems outweigh the technical ones. I'd like to see a C/C++ to Rust translator smart enough to convert pointer arithmetic to safe slice syntax. I've seen one that just generates C pointers in Rust syntax, which is not that useful.
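Duffy's slice point can be sketched in Rust (a minimal illustration of the idea, not code from the talk): the (pointer, length) pairs that C code passes around become slices whose bounds travel with them, so every access is checkable.

```rust
// What C expresses as a pointer plus a length, a slice carries as one value,
// and the compiler can check (or elide) every access against its bounds.
fn sum_slice(data: &[i32]) -> i32 {
    let mut total = 0;
    for &x in data {
        total += x;
    }
    total
}

// Pointer arithmetic like (ptr + offset, len - offset) becomes a subslice.
fn sum_tail(data: &[i32], start: usize) -> i32 {
    sum_slice(&data[start..])
}

fn main() {
    let buf = [1, 2, 3, 4, 5];
    assert_eq!(sum_slice(&buf), 15);
    assert_eq!(sum_tail(&buf, 2), 12);
}
```

The translation challenge mentioned above is exactly this: recognizing that a C pointer is only ever used as a base-plus-offset into one allocation, so it can become a subslice rather than a raw pointer.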

The Midori people seem to have gone through the effort of understanding why "unsafe" code was being used, and tried to formalize that area to make it safe again. When I looked at Rust libraries a few years ago, I saw "unsafe" too often, and not just in code that interfaces with other languages.

Duffy writes, in connection with casting "For example, we did have certain types that were just buckets of bits. But these were just PODs." (Plain Old Data, not some Apple product.) I referred to these as "fully mapped types" - that is, any bit pattern is valid for the type. True for integers and raw characters. Not true for enums, etc. One way to look at casts is to treat them as constructors to be optimized. The constructor takes in an array of bytes and outputs a typed object. If the representation of both is identical, the constructor can be optimized into a byte copy, and maybe even into just returning a reference, if the output is immutable or the constructor consumes the input. So that's a way to look at sound casting.
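A hedged Rust sketch of the "fully mapped type" distinction (the names here are illustrative, not from Duffy's post): for a type where every bit pattern is valid, the bytes-to-value constructor is just a byte copy; for a type where it isn't, the constructor must validate and can fail.

```rust
// A "fully mapped" type: every 4-byte bit pattern is a valid u32,
// so the "constructor" from bytes optimizes down to a byte copy.
fn u32_from_bytes(b: [u8; 4]) -> u32 {
    u32::from_ne_bytes(b)
}

// An enum is NOT fully mapped: only 0 and 1 are valid here,
// so the constructor must check and reject everything else.
#[derive(Debug, PartialEq)]
enum Flag {
    Off = 0,
    On = 1,
}

fn flag_from_byte(b: u8) -> Option<Flag> {
    match b {
        0 => Some(Flag::Off),
        1 => Some(Flag::On),
        _ => None, // arbitrary bit patterns are rejected, not transmuted
    }
}

fn main() {
    assert_eq!(u32_from_bytes([0xFF; 4]), u32::MAX);
    assert_eq!(flag_from_byte(1), Some(Flag::On));
    assert_eq!(flag_from_byte(7), None);
}
```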

Once you have that, you can separate allocation from construction and treat a constructor as something that takes in an array of bytes, consumes it, and outputs a typed object. Constructors are now isolated from the memory allocator.

For arrays, you need some way to talk about partially initialized arrays within the language. Then you can build arrays which grow as safe code.

That takes care of some of the common cases for unsafe code. It looks like Duffy's group got that far and went further. I need to read the papers.

[1] http://animats.com/papers/languages/safearraysforc43.pdf

Cladode

   partially initialized arrays
I agree that partially initialized data structures are important for low-level, performance-oriented programming. But it is not clear how to do this within a Rust-style, type-based approach to memory safety. Naturally, one can always wrap the problematic code that deals with partially initialised data in an unsafe block, but you can already do that in Rust. By Rice's theorem we know that we cannot have a feasible type system that always lets us deal with partially initialised data in a safe way, but could there be a compromise, covering most of the standard uses of partially initialised data (e.g. what you find in a standard algorithms textbook like CLRS, or a standard library like C++'s)? I don't see how right now, because the order of initialisation can be quite complex: too complex to fit neatly into well-bracketed lifetimes a la Rust.

Have you got any ideas how to do better?

Animats
Sure, having dealt with this formally in 1981 with machine proofs.[1]

In the Pascal-F verifier, we treated array cells as if they had a "defined()" predicate. Clearly, if you have a flag indicating whether an array cell was defined, you could handle partially initialized arrays.

Then you prove that such a flag is unnecessary.

We had a predicate

    DEFINED(a,i,j)
which means the entries of the array a from i to j are initialized. Theorems about DEFINED are:

    j<i => DEFINED(a,i,j) // empty case is defined 

    DEFINED(a,i,j) AND DEFINED(a,j+1,k) => DEFINED(a,i,k) // extension

    DEFINED(a[i]) => DEFINED(a,i,i) // single element defined
 
    DEFINED(a,i,j) AND k >= i AND k <= j => DEFINED(a[k]) // elt is defined
Given this, you can do inductive proofs of definedness as you use more slots in an array that started out un-initialized.

To do proofs like this, you have to be able to express how much of the array is defined from variables in the program. Then you have to prove that for each change, the "defined" property is preserved. Not that hard when you're just growing an array. You can construct data structures where this is hard, such as some kinds of hash tables, but if you can't prove them safe, they probably have a bug.
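As an illustration only (my sketch, not the Pascal-F tooling): in Rust the same invariant shows up as a length field acting as the program-level witness of DEFINED(a, 0, len-1). The unsafe blocks below are sound precisely because pushes only ever extend the initialized prefix, matching the extension theorem above.

```rust
use std::mem::MaybeUninit;

// A growable array over uninitialized storage. `len` is the witness
// that elements 0..len are initialized, i.e. DEFINED(a, 0, len-1).
struct PrefixArray<T, const N: usize> {
    buf: [MaybeUninit<T>; N],
    len: usize,
}

impl<T, const N: usize> PrefixArray<T, N> {
    fn new() -> Self {
        // An array of MaybeUninit<T> may itself be left uninitialized.
        Self { buf: unsafe { MaybeUninit::uninit().assume_init() }, len: 0 }
    }

    // DEFINED(a,0,len-1) AND DEFINED(a[len]) => DEFINED(a,0,len):
    // writing slot `len` and then incrementing extends the defined prefix.
    fn push(&mut self, value: T) {
        assert!(self.len < N, "capacity exceeded");
        self.buf[self.len].write(value);
        self.len += 1;
    }

    // Sound only because `len` never exceeds the initialized prefix.
    fn as_slice(&self) -> &[T] {
        unsafe { std::slice::from_raw_parts(self.buf.as_ptr().cast(), self.len) }
    }
}

fn main() {
    let mut a: PrefixArray<i32, 4> = PrefixArray::new();
    a.push(10);
    a.push(20);
    assert_eq!(a.as_slice(), &[10, 20]);
}
```

A verifier in the style described above would discharge the two unsafe obligations mechanically instead of leaving them as trusted comments.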

This was new 40 years ago, but many others have been down this road since. Rice's theorem isn't a problem. That just says that you can construct some undecidable programs, not that all programs are undecidable. If your program is undecidable or the computation required to decide it is enormous, it's broken. Microsoft puts a time limit on their static driver verifier - if they can't prove safety after some number of minutes, your driver needs revision. Really, if you're writing kernel code and you're anywhere near undecidability, you're doing it wrong.

[1] http://www.animats.com/papers/verifier/verifiermanual.pdf

Cladode
Thanks.

We were talking at cross-purposes. I was thinking about partially initialized arrays in the context of a Rust-like type-checker. I don't think the analysis you ran in 1981 in a prover is possible in 2020 in a type-checker, at least without putting most of the bits dealing with partial initialization into unsafe blocks.

It would be great if Rust-style type-based lifetime analysis could cope with partial initialization.

Oct 02, 2018 · pjmlp on A History of .NET Runtimes
In the RustConf 2017 keynote about Midori, Joe Duffy mentions at a certain moment that even with Midori running in front of them, the WinDev guys were still not open to the idea of such a system being possible.

"Safe Systems Software and the Future of Computing"

https://www.youtube.com/watch?v=CuD7SCqHB7k

Sorry, I'm a bit too lazy to track down the exact moment.

First of all, there is no such language as C/C++.

C++, although tainted by copy-paste compatibility with C89, does offer the language features for security-conscious developers to make use of, and C++17 is quite a pleasure to use.

C, on the other hand, could be nuked for all I care; it has been clear since 1979 that security and C would never go together.

Now regarding your assertions.

z/OS was initially written in a mix of Assembly and PL/I. C and C++ came later into the picture as the system gained a POSIX personality.

The way to write libraries exposed to all languages available on z/OS is via the z/OS Language Environment.

Chapter 9.12 of z/OS Basics, https://www.redbooks.ibm.com/redbooks/pdfs/sg246366.pdf

.NET has hardly any pure C code; it is rather C++.

Even so, the C# and VB.NET compilers have been bootstrapped thanks to Roslyn, and C++ is now gone from the compiler side. F# was bootstrapped from the early days.

Since Roslyn came into production, the .NET team started planning moving parts of the runtime from C++ to C#.

"So, in my view, the primary trend is moving our existing C++ codebase to C#. It makes us so much more efficient and enables a broader set of .NET developers to reason about the base platform more easily and also contribute."

https://www.infoq.com/articles/virtual-panel-dotnet-future

Also the first version of .NET's GC was actually prototyped in Common Lisp,

https://blogs.msdn.microsoft.com/patrick_dussud/2006/11/21/h...

Unity introduced HPC# at GDC 2018 and they are now in the process of migrating C++ engine code into HPC#.

Both the requirements of HPC# as a C# subset for performance-critical code and the experience with Singularity and Midori drove the design of the C# 7.x features for low-level, GC-free data structure management.

"Evolving Unity"

https://www.youtube.com/watch?v=aFFLEiDr3T0&list=PLX2vGYjWbI...

"Safe Systems Programming in C# and .NET"

https://www.infoq.com/presentations/csharp-systems-programmi...

"Safe Systems Software and the Future of Computing"

https://www.youtube.com/watch?v=CuD7SCqHB7k

Dalvik was written in C++ and has been dead since Android 5.0.

ART was written in a mix of Java and C++ between Android 5 and 6, as an AOT compiler running at installation time.

It was rebooted for Android 7, where it is now a mix of an interpreter written in highly optimized Assembly, with JIT/AOT compilers written in a mix of Java and C++, making use of PGO data.

On Android Things, userspace device drivers are written in Java.

https://developer.android.com/things/sdk/drivers/

Sun back in the day toyed with the idea of having Java on the Solaris kernel,

https://www.researchgate.net/publication/220938922_Writing_S...

And on SunSPOT devices, http://www.sunspotdev.org/

Fiji VM and PTC Perc Ultra can AOT-compile Java to native code and run it bare metal.

https://www.ptc.com/en/products/developer-tools/perc

http://fiji-systems.com/

Oracle is also in the process of following JikesRVM's example and, with the help of Graal, bootstrapping Java, thus reducing the dependency on C++, via Project Metropolis.

https://www.youtube.com/watch?v=OMk5KoUIOy4

Graal, which incidentally has better

LLVM is written in C++. Yes it does expose a C API, but it also has bindings for other languages.

At WWDC 2017, Apple announced that launchd and the Dock were rewritten in Swift. I expect other OS components to be announced at this year's WWDC.

The Windows kernel was written in C. Since Windows 8, C++ is officially supported in the kernel, and given the company's stance on the usefulness of C, they have been migrating the code to compile as C++.

"We do not plan to support ISO C features that are not part of either C90 or ISO C++"

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an...

Now Visual C++ has been updated up to C11 library compatibility, as per the ISO C++17 compliance requirements; that's all.

"We have converted most of the CRT sources to compile as C++, enabling us to replace many ugly C idioms with simpler and more advanced C++ constructs"

https://blogs.msdn.microsoft.com/vcblog/2014/06/10/the-great...

https://www.reddit.com/r/cpp/comments/4oruo1/windows_10_code...

Fuchsia's TCP/IP stack, WLAN services, disk management, package manager, update service are written in Go.

https://groups.google.com/forum/#!msg/golang-dev/2xuYHcP0Fdc...

https://fuchsia.googlesource.com/garnet/+/master/go/src

Genode OS, ARM Mbed OS and Arduino Wiring are written in C++.

As I mentioned, change requires replacing generations one person at a time.

First let's turn C into the COBOL of systems programming; then we worry about C++ afterwards.

nineteen999
Thank you for the long list of examples. I think some of them are a little fringe and fall outside the scope of my argument (eg. Sun literally "toying" with Java device drivers, or Microsoft compiling their C code with a C++ compiler), and you haven't refuted my point that the kernels of the most popular systems are still written in C for the most part.

Listing endless reams of discontinued research systems (eg. Singularity and Midori) isn't reinforcing your point.

Redox (Rust) and Zircon (C++) are more what I'm alluding to. However, Redox AFAIK doesn't even have USB drivers yet, and Fuchsia is even less useful in its current state. These systems have to be available, and I venture, usable, in order to displace e.g. Linux, which is already both of those things.

I'm hopeful we can see more progress in the next few years on some of these. It's nice to see some serious attempts in this space, however with the current pervasiveness of and dependencies on C at so many layers of these systems, I suspect it is really going to take much longer than hoped.

pjmlp
Singularity and Midori were killed for political reasons; you just need to see Joe Duffy's stories about how it all went down.

However, .NET AOT compilation on Windows 8 was taken from Singularity's Bartok compiler, while Midori influenced async/await, the TPL, improved references in C# 7.x, and .NET Native on UWP.

I guess you missed my "As I mentioned, change requires replacing generations one person at a time.".

So yeah, it is going to take a while until all those devs and managers who are religiously against safe systems programming are gone, replaced by newer generations with more open minds.

The only way to convince Luddites is to wait for the change of generations, which unfortunately also means one doesn't get to see the change oneself.

Mar 11, 2018 · pjmlp on D on embedded Linux (ARM)
It has been proven multiple times, at Xerox PARC, ETHZ, DEC, Olivetti and Microsoft, that garbage-collected systems programming languages are doable.

However technology adoption always needs to fight against non-believers.

Joe Duffy has stated in his RustConf 2017 presentation [0] that even with Midori running in front of their eyes, with some use cases where System C# was shoulder-to-shoulder with C++ [1], the Windows dev team was not getting it.

Nowadays he is a happy Go coder instead of trying to convince them otherwise.

D has a big problem with their GC implementation, which provides good arguments for the anti-GC crowd in systems programming.

However, they have been improving it quite a bit in the latest releases, as well as making the runtime library more @nogc friendly.

As for reaching mainstream, I share your thoughts. It is not only Rust and Go. There is also Swift, C++17 and the ongoing improvements to make Java and C# more machine friendly (memory management and AOT compilation).

Still, even if just a small group gets to use it, it is already a victory.

Most languages never achieve it.

[0] - https://www.youtube.com/watch?v=CuD7SCqHB7k [1] - http://joeduffyblog.com/2015/12/19/safe-native-code/

HumanDrivenDev
> It has been proven multiple times, at Xerox PARC, ETHZ, DEC Olivetti, Microsoft that garbage collection systems programming languages are doable.

It's my understanding that D's garbage collection isn't considered very performant. I know it used to receive a lot of criticism.

pjmlp
Which I happen to address in my comment...
I don't know why, but I'm actually less excited about Fuchsia than the old Singularity/Midori. As Joe Duffy said [0], when they worked on Midori they also wrote UIs, editors, browsers and many more applications in months, so it's not surprising Google engineers also managed that.

[0]: https://www.youtube.com/watch?v=CuD7SCqHB7k

For me it looks like the classic Not Invented Here syndrome: we already have microkernels, even formally verified ones (seL4), but AFAIK Google didn't want that, in order to "be more flexible".

Using C for the kernel also looks like a poor choice nowadays, unless Google has some magic static analysis (for example, Singularity/Midori had custom languages with ownership tracking akin to Rust's).

Maybe Fuchsia is about bringing the microkernel architecture to the masses using "boring technology"?

markonen
Here's a quick ELI5 style question: what are the implications of Meltdown mitigations on microkernel architectures (vs monolithic designs)?
Promarged
Singularity used Software Isolated Processes. Basically, apps were distributed as .NET bytecode and compiled by Bartok on the target system to native code. That guaranteed that processes never accessed each other's memory, so you wouldn't need regular process isolation.

I haven't thought about the implications to reading kernel memory but it'd be great to see a paper on the subject.

It doesn't matter what Rob Pike says regarding Go and systems programming, because Google, the company that employs him, has decided it makes sense to write system components of Fuchsia in Go.

That is a fact, easily validated in Fuchsia's repository.

According to you, Rob Pike should call management and let them know it is not a good idea to use Go for writing Fuchsia's TCP/IP stack.

You only read Joe's blog years ago, yet you missed the talks he gave.

Two years ago is not that long ago,

http://joeduffyblog.com/2015/11/03/blogging-about-midori/

"My biggest regret is that we didn’t OSS it from the start, where the meritocracy of the Internet could judge its pieces appropriately. As with all big corporations, decisions around the destiny of Midori’s core technology weren’t entirely technology-driven, and sadly, not even entirely business-driven. But therein lies some important lessons too."

http://joeduffyblog.com/2015/12/19/safe-native-code/

"Over the course of 8 years, we were able to significantly narrow the gap between our version of C# and classical C/C++ systems, to the point where basic code quality, in both size of speed dimensions, was seldom the deciding factor when comparing Midori’s performance to existing workloads. In fact, something counter-intuitive happened. The ability to co-design the language, runtime, frameworks, operating system, and the compiler – making tradeoffs in one area to gain advantages in other areas – gave the compiler far more symbolic information than it ever had before about the program’s semantics and, so, I dare say, was able to exceed C and C++ performance in a non-trivial number of situations."

"Safe Systems Programming in C# and .NET"

https://www.infoq.com/presentations/csharp-systems-programmi...

"RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of Computing"

https://www.youtube.com/watch?v=CuD7SCqHB7k

Around minute 30 he starts describing the uphill battle to convince other Microsoft teams to accept Midori's achievements.

BuckRogers
A TCP/IP stack isn't a stringent litmus test for a systems PL. So you can loosen your own standard for what comprises a systems PL, but Go fails more rigorous standards. Every time. Even Rob Pike says you're wrong, and he's certainly a sympathetic character towards Go.

All of this was hashed out years ago upon Go's release, and the Go team took down the "systems language" moniker. That was the end of it. Google knows it's not going to fulfill stringent litmus tests for a systems-language. We'll review the subject again if the folks at Google are brazen enough to ever add it back to the website.

Now Rob doesn't have to call management and let them know it's not for systems programming, Rob & Google already know it's not. He said so himself. You're the only one in the dark on that front at this point.

pjmlp
Unlike yourself, I don't care what Rob says, because at the end of the day what matters is what people on the street are using Go for.

Developers don't have to ask him permission or blessing for whatever they are trying to do with Go.

Google is using Go in system components in Fuchsia, and the new GPU debugger for Android is also written in Go.

You are the one in the dark about using GC-enabled systems programming languages. I have real-life experience using them and have seen that it works.

I lost interest in Go due to its spartan design, but I will certainly encourage anyone trying to use it to build an OS from scratch, regardless of what is written on a web site.

"They did not know it was impossible so they did it"

-- Mark Twain

BuckRogers
The reason Rob's statements matter is because he's the authority on Go. If you had been a primary inventor of a new language, I'm guaranteeing you'd feel entitled to decide what it is and is not. Not have "pjmlp" debate endlessly over something you had already stated was simply not true.

A lot of things work, doesn't mean they meet strict litmus tests (which matter, because definitions matter), nor does it mean it's a good idea.

pjmlp
When a language is set loose on the world, it belongs to its users and whatever they decide to do with it.

Anyone is free to just fork Go and do whatever they feel like with it, regardless of what Rob thinks. That is the beauty of MIT license.

So, given that Oberon is accepted as a systems programming language in computer science, with multiple published papers and books across two decades, heavily used in OS research at a couple of European universities during the 90s, an influence on Go, and with hardware and compilers still being sold for systems programming (although at a small scale):

What are the actual technical features Go is lacking that make it impossible to use for systems programming, given Oberon's influence on its design?

BuckRogers
Your concerns have already been answered, most succinctly by two points.

1) A lot of things work; that doesn't make them good ideas.

2) Go doesn't meet the strictest litmus tests of a systems language, never did, never will. It's a systems PL only if you relax your definition of what one is. Go's own creator says it's not, and if any one person is the authority on this topic, it's him. There are platforms that meet all definitions; those are definitely systems PLs.

pjmlp
If people cared about litmus tests, whatever that means, JavaScript would never have left the browser.

I only care about what Computer Science books and ACM SIGPLAN papers accept as systems programming languages, not random opinions in online forums.

Thankfully both of our opinions are meaningless to Go users.

BuckRogers
It doesn't matter what people like yourself think or attempt with a given technology. This is about technical definitions. Go is not a systems PL by the strictest measures available; that's a fact. The designers of C++, Rust, D, and Go for the most part agree on that. You're the odd man out. Redefining terms or loosening the standards for quality is an intellectually deficient venture. Some may partake in that; I and others refuse to. It's best to leave it at that: it boils down to intellectual honesty versus dishonesty in pushing a viewpoint.
steveklabnik
Rust’s designers aren’t really interested in playing the “what is a systems language” game, honestly. We are even considering abandoning the term entirely when describing Rust, as we’re not sure it’s a particularly useful term.
BuckRogers
A strong litmus test such as the one given by Andrei (creator of D, for those not paying attention) settles the definition well enough. I'll even back off that claim, because some absolutely refuse to accept a litmus test to meet definitions of things, as they're so hellbent on not being excluded. Unfortunately that's how language and the world work; not everyone gets their way.

So we'll just settle on this: once there's a commercially successful operating system built in a GC'd language, I'll eat crow, but until then they're all going to be C/C++/D/Rust or built on similar abstraction layers. Go ahead, wash away Linux, iOS, and Windows in the competitive landscape with something written in Go. Let's see it; that's believing, after all. It'll never happen, because Go is unsuited and inferior for that purpose.

sagichmal
Rob never said Go wasn’t a systems programming language. He said the pedantic rules lawyering that plagued the internet after the initial claim was made — the tedious pablum that you’re continuing to propagate — was all noise and no signal, and made the claim not worth claiming anymore. For fuck’s sake, give it a rest.
BuckRogers
>Rob never said Go wasn’t a systems programming language.

Yes he did. He said it's a cloud-infrastructure language[0] and that he regrets calling it a systems language. That's why he took systems language off the Golang site.

>For fuck’s sake, give it a rest.

Why would we when we're right? Why don't you give it a rest? You've long been proven wrong.

You know, usage of the English language and the terms within matters. Claiming you have a systems language when you don't introduces confusion. You can loosen the terms all you want to but it will never change the fact that GC languages can never meet the strictest measure of a systems language. That's why it matters. No one should be apologizing to you because you don't like the facts.

[0]https://www.youtube.com/watch?v=BBbv1ej0fFo&feature=youtu.be...

We had Joe Duffy talk about it at RustConf this year! https://www.youtube.com/watch?v=CuD7SCqHB7k
Related: "Safe Systems Software and the Future of Computing by Joe Duffy" at RustConf 2017.

https://www.youtube.com/watch?v=CuD7SCqHB7k

I summarized this excellent talk here [1], but one of the main points is that compatibility with existing systems is important for adoption. (They learned that the hard way -- by having their entire project cancelled and almost everything thrown out.) He advocates unit-by-unit rewrites rather than big-bang rewrites, just like Kell does in this conference article.

And compatibility with C in Windows should be easier than it is in the Unix world, because the whole OS is architected around a binary protocol AFAIK -- COM.

My sense is that Rust may not have thought enough about compatibility early in its life. Only later when they ran into adoption problems did they start talking more about compatibility.

Also, it seems Rust competes more with C++ than C, and there seems to be very little attempt to be compatible with C++ (although perhaps that problem is intractable).

Personally I don't think Rust will be a successful C replacement. It will have some adoption, but the Linux kernel will still be running on bajillions of devices 10 years from now, written in C. And in 20 years, something else will come along to replace either C or Linux, but that thing won't involve Rust.

[1] https://www.reddit.com/r/ProgrammingLanguages/comments/6y6gx...

pcwalton
> My sense is that Rust may not have thought enough about compatibility early in its life. Only later when they ran into adoption problems did they start talking more about compatibility.

Of course Rust thought a lot about compatibility with C in its early days. I remember fast FFI was in Graydon's very first presentation about the language in 2010. Almost everything about the language changed, but that focus did not.

> Also, it seems Rust competes more with C++ than C, and there seems to be very little attempt to be compatible with C++ (although perhaps that problem is intractable.)

Rust has gone pretty far in wanting to be compatible with C++, with the C++ stuff added to bindgen for Stylo. We've gone further than most other languages. It's not fair to say there's been "very little attempt": we literally couldn't have shipped Stylo to Nightly Firefox without doing the work to bridge C++ and Rust.

From your other post, it seems that one of your main complaints is that Cargo exists instead of having Rust use Makefiles. All I can say is that the reaction to Cargo from Rust programmers is overwhelmingly, almost universally positive, and abandoning Cargo in favor of Makefiles would instantly result in a fork of the language that would take Rust's entire userbase. Not solving builds and package management is not a realistic option for a language in 2017.

chubot
Well, just saying it has fast FFI doesn't tell me much. Being able to wrap something like sin() was possible in Python 1.0, but most applications need more help than that. There have been 5+ popular systems since then trying to make the experience better... and it's still barely solved.

That said, I admit I'm more on the pessimistic side. Having touched Go before its open source release in 2009, I didn't think they thought enough about integration either. I think it was worse than Rust, because you couldn't call Go from C or C++ unless the main program was in Go.

Also their build system isn't used inside Google. And they do nontrivial stuff with signals and threads.

But Go does seem to be getting adopted. However, there is an important distinction: everybody is rewriting new versions of Google-style servers in the open source world. But all the stuff at Google is still in C++.

So I think nobody ever rewrites old software. They write new versions of similar things, and then hopefully those new things get adopted. But the old thing will probably be around for a long time too.

And to be fair C didn't replace Fortran or Cobol either -- scientific applications still use Fortran and old banks (apparently?) still use Cobol on mainframes.

Maybe that's the most you can expect. But in that case there still does need to be a "plan" for making existing C code like the Linux kernel and OpenSSL safer. I think my issue is that some people apparently think that plan involves Rust when it doesn't. Maybe the core team has never pushed that idea but some other people seem to be under that illusion.

-----

This is a different argument, but a language only needs to "solve" package management if it always assumes it has main(). I was looking for something more humble that you could stick in a file in an existing C or C++ project, e.g. for writing a safe parser.

Also the 5+ different Python + C/C++ solutions now need a Python + Rust analogue. For a language at the Rust layer, there's this O(m*n) problem or strong network effect to deal with.
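For what it's worth, the "stick it in an existing C or C++ project" workflow is possible today via a static library, though it still pulls Cargo into the build. A minimal sketch, using a hypothetical "safeparser" crate (all names here are made up for illustration):

```toml
# Cargo.toml for a hypothetical "safeparser" crate embedded in a C project
[package]
name = "safeparser"
version = "0.1.0"

[lib]
crate-type = ["staticlib"]  # emits libsafeparser.a, linkable from a Makefile
```

The existing Makefile then links target/release/libsafeparser.a like any other archive, and the Rust side exposes C-ABI entry points. It's still more moving parts than dropping a single .c file into the tree, which is the complaint above.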

Actually that was the thing I was thinking while reading this PDF -- a lot of it can be boiled down to "C and C++ have network effects". Particularly C++.

Asking Rust to break the network effect is like asking Apple to break the Windows monopoly with Mac OS X. That didn't happen -- they built a new thing, iOS, and beat Windows with that. So then the question is whether Rust is more like OS X or iOS.

pcwalton
> So I think nobody ever rewrites old software. They write new versions of similar things, and then hopefully those new things get adopted. But the old thing will probably be around for a long time too.

That's very true. The most we can hope for is that Rust and other languages, such as Go and Swift, continue to chip away at the market share of C and C++. It'll be a long process.

I'm not a "rewrite everything in Rust" booster; as much as I would like to, that won't realistically happen. Instead, I see Rust as another player in the "programming language Renaissance" that has been going on since the mid-2000s. C and C++ are losing their dominance and instead are becoming part of a broad ecosystem of languages. And that's great: the fact that we have so many choices in languages now has been a very good thing for productivity and security.

> Actually that was the thing I was thinking while reading this PDF -- a lot of it can be boiled down to "C and C++ have network effects". Particularly C++.

I agree. That's why I think this paper overanalyzes the success of C and C++. They became dominant because of network effects: simple as that.

wahern
I think the article helps to explain why C was able to leverage network effects so well. Neither C nor Unix came out of the gate in a dominating position. Indeed, it's arguably only in the past 20 years that C has clearly dominated. Fortran, Pascal, and a bevy of other languages were at times much more widespread and influential. Even today C isn't the most used language. And yet its influence continues to be outsized.

C isn't just a language, it's an entire ecosystem of toolchains and software that facilitate network effects. "Chance" is far too convenient an explanation. No doubt chance had a significant role, but if C were as useless, unsafe, and devoid of redeeming qualities as many people argue, then I don't see how C could have benefited so strongly from network effects.

pcwalton
C wasn't useless and unsafe at the time it became dominant. It was quite state-of-the-art at the time. We've just learned more about what works well and what doesn't in programming languages since 1978, which is why C is no longer as dominant as it once was.
pjmlp
Hoare did not think like that.

"Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

This was in 1981, and "language designers and users have not learned this lesson." was a jab at C.

Also Xerox and ETHZ were busy using safer systems programming languages.

ESPOL and NEWP, already using UNSAFE blocks, were the state of the art in systems programming in the '60s.

ktRolster
> It was quite state-of-the-art at the time.

Most of the criticisms you hear about C today (type safety, memory safety, no garbage collection) were criticisms that C got when it was first invented.

In fact there are fewer criticisms of C than there used to be: a lot of the early criticism centered around syntax, but the C syntax kind of won, so you don't hear that anymore.

flukus
> All I can say is that the reaction to Cargo from Rust programmers is overwhelmingly, almost universally positive

You can prove anything when you introduce a sampling bias that large.

> Not solving builds and package management is not a realistic option for a language in 2017.

Package management was solved decades ago: my OS manages the packages, and a lean and mean system is the result. The Rust solution results in massive binary sizes for simple command-line tools. This is fine if their goal is to replace Java, but not if they want to replace C.

pcwalton
> You can prove anything when you introduce a sampling bias that large.

I'm confident that programmers don't want to be writing Makefiles. We don't have to take formal surveys to observe the obvious trend away from raw "make" that has been occurring for decades.

Besides, if Rust programmers really had a problem with Cargo, they would tell us. Programmers don't suffer in silence.

> Package management was solved decades ago, my OS manages the packages and a lean and mean system is the result.

I'm glad you like your package manager. Most programmers, including me, don't want to have to deal with it when the goal is simply to put a Rust project together. Besides, we ship desktop software on Windows: we cannot tell our users "sorry, you need to install Ubuntu".

> The Rust solution results in massive binary sizes for simple command-line tools. This is fine if their goal is to replace Java, but not if they want to replace C.

The Rust solution is customizable. You can use dynamic libraries if you like, and earlier prerelease versions of Rust did in fact do that. Dynamic libraries are a single rustc flag away.

The feedback we got was that people preferred the convenience of a single standalone binary to the complexity of dynamic linking managed by the OS.
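For the curious, a sketch of what "a single flag away" looks like in practice (crate setup hypothetical):

```toml
# Cargo.toml: build as a dynamic library instead of the default static rlib
[lib]
crate-type = ["dylib"]    # Rust ABI; tied to the exact rustc that built it
# crate-type = ["cdylib"] # alternative: export a stable C ABI instead
```

Equivalently, `rustc --crate-type dylib lib.rs` on the command line. The caveat is the one raised elsewhere in this thread: without a stable Rust ABI, a dylib is only compatible with the compiler version that produced it.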

flukus
> I'm glad you like your package manager. Most programmers, including me, don't want to have to deal with it when the goal is simply to put a Rust project together.

This is the same attitude that makes Electron so attractive. As a user, I don't care what makes your life easier as a developer; I care that I'm getting a more bloated and less secure result. This is an awful attitude that's crept into software development lately.

> Besides, we ship desktop software on Windows: we cannot tell our users "sorry, you need to install Ubuntu".

So bundle them on Windows; bloated installers are the norm there already. You're probably going to have to include an auto-updater and a lot of other stuff that Windows doesn't provide as well. Not having to deal with that stuff is part of why I use Ubuntu in the first place.

> The Rust solution is customizable. You can use dynamic libraries if you like, and earlier prerelease versions of Rust did in fact do that. Dynamic libraries are a single rustc flag away.

Until there is a stable ABI, that isn't a solution, because you have to distribute those libraries with the app.

burntsushi
> As a user, I don't care what makes your life easier as a developer

You should. The easier my life is, the faster I can fix bugs and put out new releases.

Nomentatus
I've previously suggested here that OSes and OS manufacturers should test and rate apps for tightness, and punish apps that aren't tight by handing them fewer resources - running them noticeably slower.
flukus
I'm dealing with the result of this attitude on my phone right now. The end result is I can't even install your app, because I'm out of space on my phone. I'm out of space because every other app maker favored developer productivity over being conservative with users' resources.

It's a tragedy of the commons.

burntsushi
Seems like you are shifting the goal posts. If I'm building something to run on resource constrained devices, then it makes sense to value use of resources more highly! But otherwise, most of your comments just seem to repeat the same old dynamic vs static linking debate that had been hashed out already for decades. There is no one right answer. Trade offs abound.

People who expect a stable ABI from Rust such that normal Rust libraries can be dynamically linked like you would C libraries would do well to adjust their expectations. It isn't happening any time soon.

flukus
> most of your comments just seem to repeat the same old dynamic vs static linking debate that had been hashed out already for decades. There is no one right answer. Trade offs abound.

Rust doesn't let me make that trade off, it's made the decision for me.

> People who expect a stable ABI from Rust such that normal Rust libraries can be dynamically linked like you would C libraries would do well to adjust their expectations. It isn't happening any time soon.

I think it's the rustaceans that need to adjust their expectations. As long as this holds, Rust won't be a real systems language; it stands a better chance of unseating Java than C.

burntsushi
> Rust doesn't let me make that trade off, it's made the decision for me.

Umm, right, exactly: having a stable ABI is one set of trade offs, and even if that were an option, electing to use it for dynamic linking is another set of trade offs. I feel like I was obviously referring to the former, but if that wasn't clear, it should be now. An obvious negative is exactly what you say: you can't use standard Rust libraries like you would C libraries. That's what I meant by trade offs. But there are plenty on the other side of things as well.

> I think it's the rustaceans that need to adjust their expectations

Sure! We do all the time! I'm just trying to tell you the reality of the situation. The reality is that Rust won't be getting a stable ABI (outside of explicitly exporting a stable C ABI) any time soon. If that means flukus doesn't consider Rust a systems language, then that's exactly what I meant by adjusting your expectations. But don't expect everyone to agree with you.

From personal experience, a lot of folks don't care nearly as much as you do about things like "the binary is using 2.6MB instead of the equivalent C binary which is using only 156KB." Now if you're in a resource constrained environment where that size difference is important, then that's a different story, and you might want to spend more effort to use dynamic linking in Rust, which you can do. You won't get a stable ABI across different versions of rustc, but you can still get the size reduction if that's important to you in a specific use case.

steveklabnik
Many existing Rust users were extremely skeptical when Cargo was announced, many said they'd stick with Makefiles. In the end, they didn't.

> Package management was solved decades ago

If it was, there wouldn't be new package managers popping up all the time; it's a non-trivial problem. They're not created for no reason.

> The Rust solution results in massive binary sizes for simple command-line tools.

This isn't exactly true, or rather, you're comparing two different things. https://lifthrasiir.github.io/rustlog/why-is-a-rust-executab... has the details.
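To give a flavor of the knobs involved, a hedged sketch of the usual release-profile settings that shrink Rust binaries (exact savings vary by program):

```toml
# Cargo.toml: common size-reduction settings for release builds
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization drops unused code across crates
codegen-units = 1   # better whole-program optimization, slower compiles
panic = "abort"     # drop the unwinding machinery
```

Running `strip` on the resulting binary then removes debug symbols, which the linked post identifies as a large chunk of the default executable size.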

pcwalton
That's a great point: we effectively tried the "just use Makefiles" solution already. It failed.
flukus
> If it was, there wouldn't be new package managers popping up all the time; it's a non-trivial problem. They're not created for no reason.

Notice how all those package managers are for platforms or create platforms in their own right. Rust is meant to be a systems language; that means its platform is the OS, and it doesn't get to be a world unto itself like Java.

> This isn't exactly true, or rather, you're comparing two different things. https://lifthrasiir.github.io/rustlog/why-is-a-rust-executab.... has the details.

So if you jump through a million hoops and limit yourself to C libraries, you can produce small executables. At that point it's more complicated than just writing the app in C in the first place.

I'm not interested in what it can technically do, though; I'm interested in what is practically happening. In practice most Rust programmers seem to be writing apps for the Cargo platform. In practice Rust developers are producing huge executables. In practice Rust has no stable ABI, so all Rust libraries get statically compiled. In practice this is incompatible with the LGPL. In practice a security vulnerability means every app using the library has to be recompiled to be secure.

kibwen
> Rust is meant to be a systems language; that means its platform is the OS, and it doesn't get to be a world unto itself like Java.

Rust is meant to be a cross-platform systems language, and sadly there does not exist a cross-platform package manager. Until one exists (and I'm not holding my breath here), every language which intends to be cross-platform will continue inventing its own package management.

wahern
Following the logic of the article, Rust has made the exact same mistake every other language has made, which is to conceptualize compatibility with the C ecosystem as merely an issue of FFI. Rust is hardly the first language to focus on easy FFI from day 1, but according to the article that's not nearly sufficient. And like most other modern so-called systems languages, Rust hasn't gotten around to committing to a stable, exportable ABI. In fact, I think much like Go the general sentiment is that this is largely undesirable, as stable ABIs can cripple evolution of the implementation, especially for languages that rely on sophisticated type systems.
chubot
Yes, that is what I was referring to. Calling sin() is not enough. It's messy but C programs need more than that.

And I was also referring to the similar issue in Go where calling C -> Go and Go -> C isn't symmetrical. Not sure if that's true for Rust or not.

pcwalton
> It's messy but C programs need more than that.

Of course they do. That's why Rust has a sophisticated tool, bindgen, which is used in production right now in Nightly Firefox (among other places) to export complex C++ interfaces in both directions across the language boundary.

> And I was also referring to the similar issue in Go where calling C -> Go and Go -> C isn't symmetrical. Not sure if that's true for Rust or not.

It's not. You just write "#[no_mangle] extern" on your Rust function and C can easily call it, with a stable ABI.

In order to meaningfully criticize Rust's FFI, you need to be aware of how it works.
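To make the two directions concrete, a minimal sketch (function names illustrative; the C declaration shown in the comment is an assumption about the caller's side):

```rust
// Direction 1: Rust calling C. Declare the foreign function and call it
// in an unsafe block; `sqrt` comes from the C math library that std links.
extern "C" {
    fn sqrt(x: f64) -> f64;
}

// Direction 2: C calling Rust. `#[no_mangle]` keeps the symbol name intact
// and `extern "C"` gives it the C calling convention, so a C caller could
// declare it as: uint32_t add_u32(uint32_t a, uint32_t b);
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

fn main() {
    assert_eq!(unsafe { sqrt(9.0) }, 3.0);
    assert_eq!(add_u32(2, 3), 5);
}
```

Note there is no asymmetry of the kind described for Go above: the exported function is an ordinary Rust function that C can call directly, whether or not the main program is in Rust.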

pcwalton
> And like most other modern so-called systems language, Rust hasn't gotten around to committing to a stable, exportable ABI.

That's not true. The C ABI is stable and exportable, and you can opt into it on a per-function basis. We do that for integration with existing projects all the time.

Again: all of you are talking as though integrating Rust into a large C++ project is some far-fetched theoretical idea, and as though we made obvious mistakes that make this goal impossible. In fact, we're shipping an integrated Rust-C++ project today: stable Firefox, used by millions of users.

wahern
I'm not arguing that it's too difficult to integrate Rust with C or C++ projects. I'm simply trying to get at the distinctions that the article is making, which are rather subtle.

One aspect of Rust that fits well, IMO, with the characteristics the article argues are underappreciated is its emphasis on POD--objects as compact, flat bytes. That puts Rust much closer to achieving what C does best (again, according to the article), which is first-class syntactic constructs over memory--namely, pointers. But it falls short in the sense that to _export_ Rust objects (rather than import alien objects into Rust) you have to do so explicitly. And presumably the author would argue that Rust is significantly undervaluing the benefit of a stable ABI that would allow other applications to import Rust objects without an explicit language-level construct (i.e. explicitly annotating APIs with no_mangle).

Obviously when you're building a large application, cathedral style, the requirement to explicitly annotate is not only less burdensome, but quite useful (for many reasons). But in a larger, heterarchical ecosystem of software, that's actually quite limiting. Our first instinct is to argue that permitting such unintended peeking behind the curtain is dangerous and unnecessary, but the article speaks directly to that.

Imagine a Rust with a stable ABI that was exported via Sun's CTF format. CTF is like DWARF but much simpler (and thus little incentive to strip it), and it's being integrated into both OpenBSD and (I think) FreeBSD to facilitate improved dynamic linking infrastructure. Rust could even, theoretically, continue randomizing member fields. And this data could be consumed by any language's toolchain, not simply Rust's toolchain. That sort of language-agnostic, holistic approach to interoperability is largely what I think the article is getting at.

pcwalton
I'd be all for a standard language agnostic ABI. I'm not on the language design team anymore, but I suspect you wouldn't have any trouble convincing them to get on board with such a thing either. The ones you'd need to convince would be the C++ folks, I suspect :)
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.