HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Zig: A programming language designed for robustness, optimality, and clarity –  Andrew Kelley

Recurse Center · Youtube · 356 HN points · 13 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Recurse Center's video "Zig: A programming language designed for robustness, optimality, and clarity –  Andrew Kelley".
Youtube Summary
Zig – a programming language designed for robustness, optimality, and clarity – Andrew Kelley, March 20th, 2018

Localhost is a series of monthly technical talks in NYC, open to the public, and given by members of the Recurse Center community. Go to https://www.recurse.com/localhost for more info.

Abstract: Zig is an LLVM frontend, taking advantage of libclang to automatically import .h files (including macros and inline functions). Zig uses its own linker (LLD) combined with lazily building compiler-rt to provide out-of-the-box cross-compiling for all supported targets. Zig is intended to replace C. It provides high level features such as generics, compile time function execution, and partial evaluation, yet exposes low level LLVM IR features such as aliases. The Zig project believes that software can and should be perfect. Let the Zig Zen guide your spirit down the most optimal path.

Bio: Andrew is an open source programmer, interested in electronic music production and video game development. In the Fall 2013 batch he worked on a music player server and a 3D spaceship flight simulator. Andrew is a backend engineer at OkCupid and works on open source software on nights and weekends.

About RC: The Recurse Center runs educational programming retreats in New York City. The retreats are free, self-directed, project based, and for anyone who wants to get dramatically better at programming. Learn more at https://www.recurse.com.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Zig does however go a long way towards memory safety, to be fair, and if you wanted to, you could also argue similarly that Rust does not guarantee 100% OOM safety whereas Zig does: https://www.youtube.com/watch?v=Z4oYSByyRak

I would also say that for "very large long lived projects", memory safety is not actually the most important issue, but rather correctness, followed by (in no particular order) safety, performance, explicitness, readability and exhaustive fine-grained error handling, the latter being found in few languages besides Zig.

alserio
Isn't memory safety a subset (and I would argue a necessary requisite) of correctness?
jorangreef
Good question.

Assuming we both mean "memory safety" as in a guarantee given by the language (e.g. Rust), then no, logically speaking, it can't be a requisite for correctness, and it's not even a subset of correctness.

Here's why:

If you can write a correct program in a language which does not guarantee memory safety (which we certainly could, for example, simply by not allocating memory at all, or not using pointers etc, or by using runtime checks e.g. to ensure there are no double frees or out of bounds reads/writes), then memory safety is neither a subset of, nor a requisite for correctness.

Memory safety is a double-edged sword. It can make correctness easier to achieve. But that also depends on how the language implements memory safety. If this is done at the expense of a steeper learning curve, then that could in itself be an argument that the language is less likely to lead towards correctness, as opposed to, say, an almost memory-safe language that implements 80% of this guarantee while optimizing for readability, and with a weekend learning curve.

Historically, the lack of memory safety has obviously been the cause of too many CVEs. But even CVEs in themselves are more a measure of security than correctness. I would say that exhaustive fine-grained error handling checked by the compiler is probably right up there for writing correct programs.

alserio
Thank you for your answer. You can certainly write a memory safe program in a language that doesn't guarantee memory safety. However, I still maintain that correctness implies memory safety, at least as a characteristic of the program if not of the language. If your language doesn't help there, you have to expend more time and effort to achieve that result, and accept a higher risk of screwing it up. But I see why you argue that readability correlates with correctness. It's also true that most memory safety issues are really hard to spot, and that may be even more true for an 80%-safe language. I really like when the computer does work for me, since as a human I'm way more sloppy.
jorangreef
Yes, I fully agree that a correct program can't contain memory bugs, and that we want the compiler to help us.

For a systems programming language though, I think Zig hits the sweet spot, and not only with regards to memory safety.

Correctness, in this realm, is as much memory safety as:

* error safety (making sure that your program correctly handles all system call errors that could possibly occur, without forgetting any, and there are many! The Zig compiler can actually inspect and check this for you, something not many languages do), see https://www.eecg.utoronto.ca/~yuan/papers/failure_analysis_o... for how critical error handling is in distributed systems,

* OOM safety (making sure your program can actually handle resource allocation failures without crashing),

* explicitness (clear control flow with a minimum of abstractions to make it easy to reason about the code with no hidden surprises, no weird undefined behavior),

* and especially as much runtime safety as you can possibly get from the language when you need to write unsafe code (which you will still need to do when writing systems code, even if your language offers memory safety guarantees, see https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru...). Here, Zig helps you not only at compile time, but also at runtime (and with varying degrees of granularity as you see fit), something not all systems languages will do.

On all these axes, Zig is at least an order of magnitude more likely to lead to a correct program than C, while optimizing for more readable code (even for someone coming from TypeScript) and thus code review, also essential for improving the odds that your code is correct with respect to its requirements.
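Zig's error sets themselves aren't shown in the thread, but the "compiler inspects and checks your error handling" idea has a rough analogue in Rust's exhaustive `match`, which can serve as a sketch (the error type and function names here are hypothetical illustrations, not anything from Zig or the talk):

```rust
// Sketch (not Zig): an exhaustive `match` over an error enum gives a
// similar compiler-checked guarantee that every error case is handled.

#[derive(Debug, PartialEq)]
enum ReadError {
    NotFound,
    PermissionDenied,
    Interrupted,
}

fn describe(err: ReadError) -> &'static str {
    // Omitting any variant below is a compile error, so adding a new
    // error case forces every call site like this one to be updated.
    match err {
        ReadError::NotFound => "file not found",
        ReadError::PermissionDenied => "permission denied",
        ReadError::Interrupted => "interrupted, retry",
    }
}

fn main() {
    assert_eq!(describe(ReadError::NotFound), "file not found");
    println!("{}", describe(ReadError::Interrupted));
}
```

The point being sketched: the set of possible failures is part of the type, so forgetting a case is a compile-time error rather than a runtime surprise.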

alserio
That's really quite cool! Thank you again for your analysis and links
> So I assume you could make a standard compliant compiler that does nothing but immediately overflow the stack.

Interesting thought. I imagine just about every language spec (with the possible exception of assembly languages) states something like "If memory is exhausted, a handler is executed" or "If memory is exhausted, the behaviour is undefined", and it's always going to be up to the compiler/interpreter to make reasonable use of the available memory on the specific target platform.

Would a C compiler be non-compliant if it generated code that used 100x the memory used by code from a typical C compiler? How about 1,000,000,000x, so that its generated programs always failed immediately?

Java is an interesting case for this, as it famously doesn't require that a garbage collector be included at all (unlike .Net, which does). In Java, not only are you permitted to have a conservative GC, you're permitted to have no GC whatsoever (something OpenJDK now offers [0]). Apparently [1] Java requires that the collector (if it exists) run before the JVM throws OutOfMemoryError.

I imagine the formal methods folks must have done some thinking on this topic, as their whole field is about doing better than just "go ahead and test it out". Could a standards-compliant C compiler used in a safety-critical domain generate code that, only very occasionally, uses a billion times the memory it typically uses?

Somewhat related: one of the motivations behind the Zig language seems to have been a frustration with clumsy handling of memory-exhaustion. [2][3]

[0] https://openjdk.java.net/jeps/318

[1] https://www.kdgregory.com/index.php?page=java.outOfMemory

[2] https://youtu.be/Z4oYSByyRak?t=300

[3] https://news.ycombinator.com/item?id=18422631
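As a hedged illustration of the alternative to "clumsy handling of memory-exhaustion" that the Zig links [2][3] discuss: Rust's standard library (since 1.57) exposes `Vec::try_reserve`, which surfaces allocation failure as an ordinary error value instead of aborting. This is not Zig's mechanism, just a sketch of the same idea in another language (`grow_checked` is a made-up helper):

```rust
// Sketch: handling allocation failure as a recoverable error.
// `try_reserve` returns Err instead of aborting when the request
// cannot be satisfied (including capacity overflow).

fn grow_checked(v: &mut Vec<u8>, additional: usize) -> Result<(), String> {
    v.try_reserve(additional)
        .map_err(|e| format!("allocation of {} bytes failed: {:?}", additional, e))?;
    Ok(())
}

fn main() {
    let mut v: Vec<u8> = Vec::new();
    // A modest request should succeed...
    assert!(grow_checked(&mut v, 1024).is_ok());
    // ...while an absurd one fails cleanly instead of crashing.
    assert!(grow_checked(&mut v, usize::MAX).is_err());
}
```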

eru
> Interesting thought. I imagine just about every language spec (with the possible exception of assembly languages) states something like "If memory is exhausted, a handler is executed" or "If memory is exhausted, the behaviour is undefined", and it's always going to be up to the compiler/interpreter to make reasonable use of the available memory on the specific target platform.

I would have expected some words to that effect, but extensive browsing and grepping in the C++ spec did not reveal such language. (They do not mention the call stack at all, which is a fair enough decision.)

I was first looking into this, because I had hoped the standard would specify a way to check for whether the next call would blow up the stack before you actually engaged in the call.

> Would a C compiler be non-compliant if it generated code that used 100x the memory used by code from a typical C compiler? How about 1,000,000,000x, so that its generated programs always failed immediately?

You could ask the same for speed of execution. I think it would be standards compliant, but not very useful. Standard compliance ain't the only thing people look for in a compiler.

In practice, you could make a C++ compiler that does the standard compliant thing for virtually all of people's programs by just emitting the nethack executable regardless of input. After all, virtually all real world C++ programs have undefined behaviour somewhere, and undefined behaviour is allowed to 'travel backwards in time'. (I.e., undefined behaviour anywhere makes the whole execution undefined, not just the part after it is encountered.)

Hey, you could even just start nethack instead of producing any files with your compiler at all. Some undefined behaviour goes all the way back to compile time, e.g. not closing your quotes, I think.

> I imagine the formal methods folks must have done some thinking on this topic, as their whole field is about doing better than just go ahead and test it out. Could a standards-compliant C compiler used in a safety-critical domain generate code that, only very occasionally, uses a billion times the memory it typically uses?

I think for safety-critical code, you solve this conundrum by relying on extra guarantees that your compiler implementation makes, not just on the standard.

This is the introduction to Zig from the language author that I watched when it was first posted here a while ago:

https://www.youtube.com/watch?v=Z4oYSByyRak

(good watch)

lsh
and some previous hackernews discussions:

https://duckduckgo.com/?q=site%3Anews.ycombinator.com+zig+la...

AndyKelley
Here is a newer talk (April 2019) which is also by me and also an introduction to Zig:

https://www.youtube.com/watch?v=Gv2I7qTux7g

A lot has changed since then, and a lot has changed since that Localhost talk. To catch up:

* (the older talk happened here)

* 0.3.0 release notes - https://ziglang.org/download/0.3.0/release-notes.html

* 0.4.0 release notes - https://ziglang.org/download/0.4.0/release-notes.html

* (the newer talk happened here)

* 0.5.0 release notes - https://ziglang.org/download/0.5.0/release-notes.html

Apr 09, 2019 · lsh on Zig 0.4.0 Released
The Zig language has a great premise; it's worth reading the above link or watching a bit of this video https://youtu.be/Z4oYSByyRak?t=150 just to question our assumptions that certain language features need to exist at all.
Feb 11, 2019 · cellularmitosis on Zig Is Great
(and one continuous paragraph harms readability)

edit: here's a great talk on zig from the recurse center: https://www.youtube.com/watch?v=Z4oYSByyRak

Nov 10, 2018 · 218 points, 135 comments · submitted by luu
visualstudio
I'm watching Zig and Jai closely. We need a better C, and C++ isn't it. Good luck!
muthdra
I'm betting on Jai.
kungtotte
Really? You're betting on the one language out of all the ones mentioned that you can't actually use yet?
coldtea
Never underestimate the power of hype and marketing over rational assessment.
muthdra
There's no marketing. There's only Jonathan Blow.
kungtotte
Yeah, I guess. The only charitable thing I can think to say is that maybe they've tried all the others and found them lacking somehow (what language is perfect, after all?) and so they put their hopes on the one as yet untested.

It could, theoretically, be perfect. Since it's all theoretical at this point I mean.

skrebbel
I hadn't heard of Jai yet, but wow! That's a lot of hype (and even tool support) for something that doesn't work yet.

At first glance, Zig vs Jai reminds me a lot of the Linux vs GNU Hurd thing (or a lot of other "worse is better" examples). A couple of geniuses locking themselves up telling the world "Just wait! It'll be awesome!" seldom produces something that lasts.

gameswithgo
>seldom produces something that lasts.

of course, but sometimes it does.

skrebbel
True, but we're casting bets here :-)
electrograv
I honestly think Zig has the potential to be the C/C++ replacement. I haven’t checked out Jai yet but will now that you mention, thanks!

This is somewhat subjective of course, but from what I’ve seen, Zig has just the right set of features to modernize systems programming, without making the language too complex or difficult to write (which arguably Rust’s “borrow checker” system does), and (like Rust) gets rid of some huge legacy language design mistakes most people agree on today (e.g. nullable-by-default pointer types, or no way to know at compile time or at a glance what range of exceptions a function may throw).

And of course, the “automatic” interoperability with C is an essential part of any C/C++ replacement contender.

Jach
There are a lot of contenders. My own unsorted list is Nim, Rust, C++20xx, D, Objective C, Pony, Zig, Crystal, Red, and maybe a form of Lisp (why not Common Lisp). Even more unlikely a maybe and only for the C/C++ trenches of embedded systems, some form of Forth. If Jai ever ships I might consider adding it (at least as a contender to the C/C++ trenches of games and game engines), but it's absurd to think it will have any impact when it can't even be used by anyone other than jblow yet. Even if it ships, I would bet its highest anywhere-realistic impact (which is still damn high) would be to become the PHP of game programming. The mythical C/C++ replacement that everyone will choose when they previously would have chosen C or C++, causing C/C++ to die like COBOL? Much less likely.
nickpsecurity
"maybe a form of Lisp (why not Common Lisp)."

Yes, a Lisp-like language could do it. Why not Common Lisp? Because PreScheme and ZL were both better if we're aiming at C's niche:

https://en.wikipedia.org/wiki/Scheme_48

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.40...

http://www.schemeworkshop.org/2011/papers/Atkinson2011.pdf

http://zl-lang.org/

I was looking for a way to do metaprogramming, verified programming, LISP-like programming, and low-level programming. Past few years of searching led me to a lot of verification stuff obviously, PreScheme for low-level Lisp, ZL for Scheme/C, and Red/System for a REBOL-like answer. I was eyeballing Nim, too, since the masses usually hate LISP's syntax. Then, I thought about a front-end that let one go with various syntaxes, esp. inspired by famous languages, with the same underlying semantics or just easy integration.

On a practical note, I noticed that strong integration with the dominant language, with little to no performance hit, is extremely important. Clojure building on Java in the enterprise space is an example. It reuses its ecosystem. For the systems space, I started recommending using C's data types and calling conventions where possible in the new language, so calling it would cost nothing. Then, maybe an option to extract to C for its compilers. So, whatever languages are created above need to integrate with C really well.

electrograv
Many of those are ruled out as modern successors (in my mind, at least), when they continue to make “the billion dollar mistake” (to use its inventor’s own words[1]) of null references.

Rust, Zig, Kotlin, Swift, and many other modern languages can express the same concept of a null reference, but in a fundamentally superior way. In modern languages like these, the compiler will statically guarantee the impossibility of null dereference exceptions, without negatively impacting performance or code style!

But it goes beyond just static checking. It makes coding easier, too: You will never have to wonder whether a function returning a reference might return null on a common failure, vs throw an exception. You'll never have to wonder if an object reference parameter is optional or not, because this will be explicit in the data type accepted/returned. You'll never have to wonder if this variable of type T in fact contains a valid T value, or actually is just "null", because the possible range of values will be encoded in the type system: If it could be null, you'll know it and so will the compiler. Not only is this better for safety (the compiler won't let you do the wrong thing), it's self-documenting.

It blows my mind that any modern language design would willingly decide that nullable object references are still a good idea (or perhaps it's out of ignorance), when there are truly zero-cost solutions to this, in both runtime performance and ease of writing code, as you can see for example from Zig or Kotlin.

[1] https://www.infoq.com/presentations/Null-References-The-Bill...

gameswithgo
> the compiler will statically guarantee the impossibility of null dereference exceptions,

almost every language that gets rid of nulls with something like the Option type will let you still bypass it and get a null reference exception. Rust lets you unwrap, F# lets you bypass it. You could at least enforce a lint that doesn't allow the bypasses in projects where that is desired though.
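As a concrete sketch of the bypass the parent describes, here is hypothetical Rust (the `first_word` helper is made up for illustration): the compiler forces a decision about the `None` case, but `.unwrap()` opts back into a runtime crash, a panic rather than a null dereference.

```rust
// Sketch: Option removes implicit null, but .unwrap() reintroduces
// a runtime failure mode (a panic) if you explicitly opt into it.

fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    // The safe path: the compiler makes you handle both cases.
    match first_word("") {
        Some(w) => println!("first word: {}", w),
        None => println!("no words"),
    }

    // The bypass: uncommenting this would panic at runtime on None,
    // roughly the moral equivalent of a null dereference.
    // first_word("").unwrap();

    assert_eq!(first_word("hello world"), Some("hello"));
    assert_eq!(first_word(""), None);
}
```

This is also where the lint suggestion above fits: a project can forbid `unwrap` mechanically while keeping the safe paths untouched.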

imtringued
Perfect is the enemy of good. By reducing the possibility of null dereference exceptions from 100% to 10% you have reduced the cognitive burden by 90%. Removing the bypass would result in a 100% reduction in cognitive burden, only 10% more than the second best solution. However handling null cases correctly isn't free either. Especially when you know that a value cannot be "null" under certain conditions which those 10% fall under. In those cases handling the error "correctly" is actually an additional cognitive burden that can ruin the meager 10% gain you have obtained by choosing the perfect solution.
electrograv
> However handling null cases correctly isn't free either. Especially when you know that a value cannot be "null" under certain conditions which those 10% fall under.

While I agree there are rare cases where .unwrap() is the right thing to do, I actually disagree here that it’s anywhere close to 10%: If you want to write a function that accepts only non-null values in Rust, you simply write it as such! In fact, this is the default, and no cognitive burden is necessary: non-nullable T is written simply as “T”. If you have an Option<T> and want to convert it into a T in Rust, you simply use “if let” or “match” control flow statements.

I actually think using .unwrap() in Rust anywhere but in test code or top-level error handling is almost always a mistake, with perhaps 0.001% of exceptions to this rule. I write code that never uses it, except in those cases mentioned; while I've run into situations where I felt at first that .unwrap() was appropriate, I took a step back to think of the bigger picture and have so far always found safer solutions that yield a better overall design.

The cognitive burden from Rust comes not from this, but almost entirely from the borrow checker (a completely different topic), and in some cases from arguably inferior "ergonomics" vs how Zig or Kotlin handle optionals.

For example, in some null-safe languages, you can write:

  if (myObject) { myObject.method(); }
And the compiler will understand this is safe. Whereas, in Rust, you must write:

  if let Some(x) = myObject { x.method(); }
This is not even to mention that Rust has no built-in shorthand for Option<T> (some languages write "T?" for example), but I understand why they chose not to build this into the language; rather, Option<T> in Rust is actually a component of the standard library! In a way, that's actually quite cool and certainly is by design; however, it doesn't change the fact that it's slightly more verbose.

IMO it’s not a huge deal, but certainly Rust could benefit from some syntax sugar here at least. Either way, both examples here are safe and statically checked by the compiler.
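The two one-liners above can be rendered as a compilable Rust sketch; `Widget` and `call_if_present` are placeholder names standing in for `myObject` and its `method()`:

```rust
// Sketch: the `if let` pattern from the comment, with a placeholder
// type standing in for `myObject.method()`.

struct Widget;

impl Widget {
    fn method(&self) -> &'static str {
        "ok"
    }
}

fn call_if_present(my_object: &Option<Widget>) -> Option<&'static str> {
    // Rust's spelling of `if (myObject) { myObject.method(); }`:
    if let Some(x) = my_object {
        Some(x.method())
    } else {
        None
    }
    // Equivalently: my_object.as_ref().map(|x| x.method())
}

fn main() {
    assert_eq!(call_if_present(&Some(Widget)), Some("ok"));
    assert_eq!(call_if_present(&None), None);
}
```

Both forms are statically checked; calling `.method()` directly on the `Option` without the pattern match simply does not compile.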

majewsky
> but certainly Rust could benefit from some syntax sugar here at least

It's a tough balance. Rust could benefit from more sugaring, but on the other hand, Rust already has quite a lot of syntax at this point.

gameswithgo
Yeah, I think unwrap is best used when experimenting/prototyping, but it can be very, very useful there. Imagine trying to get started using Vulkan or OpenGL without it. Big mess. But in production code you might want to lint it as a strong warning or error.
electrograv
Yes, but there’s a big difference between the default member access operator crashing conditionally based on null-ness — vs — the same operator guaranteeing deterministic success (thanks to static type checks), with the option to circumvent those safe defaults if the programmer really wants to (in which case they usually must be very explicit about using this discouraged, unsafe behavior).

It may seem to be just semantics, but it's really quite important that the default (and most concise) way in these languages to read optional values is to check if they're null/None first in an if statement, after which you can call "object.method()" all you like. It's important that you can't just forget this check; it's essential to using the content of the optional, unless you explicitly type something like ".unwrap()" — in which case there's almost no chance the programmer won't know and think about the possibility of a crash. Take this in contrast to the chance of a crash literally every time you type "->" or "." in C++, for example.

otabdeveloper2
> Many of those are ruled out as modern successors (in my mind, at least), when they continue to make “the billion dollar mistake” (to use its inventor’s own words[1]) of null references.

Well, you're in luck then! You don't even need a 'modern' successor: C++ (even the ancient versions) disallows null references.

Mesopropithecus
What you mean is that C++ doesn't have a way to (easily) let you check whether a given reference is null or not. `int* a = NULL; int& b = *a;` compiles and runs just fine.
ethan_g
No, the gp is correct: references in C++ can't be null. Your code invoked undefined behavior before you did anything with a reference, namely `*a`, which is a null pointer dereference.
mdpopescu
The "null problem" is that a static language does a run-time check instead of a compile-time check. By the time the undefined behavior is invoked, compilation ended.
coldtea
>Your code invoked undefined behavior before you did anything with a reference

Since nobody stopped you, the problem is still there.

the_why_of_y
> namely *a which is a null pointer dereference.

Which is a textbook example of the null reference problem.

Edit: There may be some terminological confusion here: when programming language folks talk about "references", they include in that definition what C/C++ call "pointers". See for example the Wikipedia article, which gives as the C++ example not C++ references, but C++ pointers.

https://en.wikipedia.org/wiki/Reference_(computer_science)

masklinn
> compiles and runs just fine.

For fairly low values of those. Creating a null reference is UB, your program is not legal at all.

jessaustin
Sure, we're not supposed to do that. Sometimes it happens anyway, and the C++ compiler isn't much help in that case.
coldtea
If the compiler still accepts it, then that it belongs to the "UB" class of code is not much comfort.

The whole point is to NOT have it be accepted.

masklinn
> Well, you're in luck then! You don't even need a 'modern' successor, C++ (even the ancient versions) disallow null references.

That's useful, until you realise that all its smart pointers are semantically nullable (they can all be empty with the same result as a null raw pointer) and then nothing's actually fixed.

Jach
Null isn't that bad -- or rather, the concept of a missing value. Certain languages handle null better than others, but even then, it seems like the more costly mistake has been the accumulation of made-up data to satisfy non-null requirements.[0] More costly for non-programmers who have to deal with the programmers' lazy insistence that not knowing a value for some data in their system is forbidden, anyway.

In any case I think the modern fashion of trying to eliminate null from PLs won't matter much in the effort to replace C, whereas something like a mandatory GC is an instant no-go (though Java at least was very successful at sparing the world a lot of C++). OTOH a language that makes more kinds of formal verification possible (beyond just type theory proofs) might one day replace C and have null-analysis as a subfeature too.

[0] http://john.freml.in/billion-dollar-mistake

gameswithgo
It doesn't seem like you are familiar with how option types get rid of null. You don't have to make up data to satisfy things not being null. You set them to None, and the language either forces or (usually) encourages you to always check whether the option is None or Some.
Jach
I use Option in Java quite a bit because I'm real sick of NPEs and cascading null checks in all-or-nothing flows. I would have preferred Java starting with something like Kotlin's approach where T is T, not T|nil. You and the sibling might be missing the point of the post I linked, I think. It can be convenient to have formal assistance via e.g. the type checker that a function taking a non-null String returns a non-null Person with a non-null FirstName and LastName. But in the zeal to be rid of null to make programmers' lives a bit easier, when faced with a name that doesn't compose into 2 parts, someone has to decide what to do about that and who needs to care down the line. You can make up data ("FNU" as in the blog), set a convention (empty string), throw an exception, or declare either the whole Person structure Optional or at least certain fields. If you use a dynamic late-binding language you may have other options. Whatever you do, it ought to be consistent or robustly handled where the references interact with your DB, your processing programs, and your data displays. Finally, when these references escape your system, as lots of real world data does, they necessarily escape any static criteria you once had on them, thus it's important to consider that those third party systems have to live with your choice. Null is a convenient choice, not something to be vilified so casually.
electrograv
I think the author of that blog post fundamentally misunderstands the point: The damage of nullable pointers is not that they are nullable, but that compilers allow you to write code everywhere that assumes they’re not null (in fact, this is the only possible way to code, when the language cannot express the notion of a non-nullable reference!)

For example, most older languages with "the billion dollar mistake" have no complaint whatsoever when you write "object.method();" where it's unknown at this scope whether "object" is null or not.

The fact that such code compiles is the billion dollar mistake; not the fact that the pointer is nullable.

I don’t care if you want to write nullable references everywhere, or whatever else you prefer or your application demands. That’s fine, so long as:

1. Non-nullable reference types must exist.

2. Nullable references types must exist as statically distinct from #1.

3. The compiler must not let you write code that assumes a nullable reference is not null, unless you check via a control flow statement first.

Now to take a step back, the principle behind this certainly applies beyond just nullability (if that was the point you were trying to make): Generally, dynamic, untyped invalidation states are dangerous/bad, while statically typed invalidation states are ideal. And yes, this does include bad states internal to a non-null reference, just as much as to a null reference.

Sum types are the key to being able to statically declare what range of values a function may return (or accept), and ensure at compile time that these different cases are all accounted for. If you aren’t aware of how elegantly sum types solve this, you should look into it — and I suspect it will be quickly clear why nullable references are useless, outdated, and harmful.

But at the very least, we’ve solved the pain of null dereference — and virtually without compromise. So, it’s irresponsible or ignorant IMO to create a new language that doesn’t include this solution in its core.
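The sum-type point above can be sketched with a hypothetical example (the `Lookup` enum and `report` function are invented for illustration): every possible outcome is part of the function's signature, and `match` must cover all of them.

```rust
// Sketch: a sum type makes every possible outcome part of the
// function's signature, and `match` must account for each one.

enum Lookup {
    Found(u32),
    Missing,
    Ambiguous(usize), // number of candidate matches
}

fn lookup(key: &str) -> Lookup {
    match key {
        "a" => Lookup::Found(1),
        "x" => Lookup::Ambiguous(2),
        _ => Lookup::Missing,
    }
}

fn report(key: &str) -> String {
    // Dropping any arm here is a compile error: the caller cannot
    // silently ignore an outcome the way a null check can be forgotten.
    match lookup(key) {
        Lookup::Found(id) => format!("found #{}", id),
        Lookup::Missing => "missing".to_string(),
        Lookup::Ambiguous(n) => format!("{} candidates", n),
    }
}

fn main() {
    assert_eq!(report("a"), "found #1");
    assert_eq!(report("q"), "missing");
    assert_eq!(report("x"), "2 candidates");
}
```

Nullable references collapse `Missing` and `Ambiguous` into a single untyped "no value" state; the enum keeps them distinct and checkable.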

loeg
Re: performance claims, I wonder if the C sha256 implementation would have been more competitive with -march=native -mtune=native?
Avamander
Why isn't `-march=native -mtune=native` enabled by-default for every piece of software compiled unless explicitly specified otherwise?
dman
When you are shipping binaries, you usually do not have control over the CPUs your users are using. Setting march/mtune to native breaks if the developer's machine is noticeably newer than the user's machine. Also, letting the build machine's configuration decide which level of SIMD to target is too flaky, so most software specifies the target explicitly.
ndesaulniers
Because your cpu has a base ISA, plus extensions. The compiler doesn't know if you're going to only run this binary on your machine, or distribute it to others that may share your base ISA but possibly not your extensions, so it takes the conservative approach and doesn't use them unless signaled to do so via those compiler flags. Also, I think -march implies -mtune.
AndyKelley
In Zig the default target is native, which turns on all the applicable CPU features. If you want to target something other than the native machine you can use --target-os --target-arch parameters. I haven't exposed extra CPU options for cross compiling yet.
loeg
That explains the difference in the SHA256 code case. Case closed ;-).
saagarjha
Do you consider the default architecture targeted by GCC (e.g. some old Intel) to be cross compiling? That is, can I make a binary that supports most x86 processors, rather than just those with the particular extensions my machine supports, with the current Zig compiler?
AndyKelley
Yes. You can pass the native OS and native arch as the cross compilation target, and produce a binary that runs on your machine and others that don't have the same extra CPU features.
loeg
You'd have to ask the compiler communities that. As an observer, I notice that C compilers are extremely conservative about changing defaults in new versions. And I could imagine some benefits of that approach, and downsides. But I am not involved in e.g. gcc or clang development.
black-tea
It is if you compile everything yourself on every system you use. Every Gentoo user does it (or at least, the equivalent of it using explicit flags in case they use distcc). But most people don't do that. They use software compiled by other people on very different machines.
jsnell
Compiling with -march=native makes for brittle binaries, since the binary might contain instructions that are not available on slightly older CPUs or on virtual machines. So it's only suitable for software that's performance sensitive but won't be distributed to other machines (or at least won't be distributed outside of a strictly controlled environment). It wouldn't make for a good default.
tom_mellior
I think it would. I posted some observations on this the last time this video was discussed: https://news.ycombinator.com/item?id=17187140

My hypothesis is that Zig's compiler passes the equivalent of -march=native to the backend, which is why it should also be given to the C compiler to give a fair comparison (and a speedup of 30% or so).

AndyKelley
I think you are probably right. I will do a follow-up post addressing the performance claims in this talk. I think I owe it to the community.
loeg
Much appreciated. Given Zig's design and use of LLVM, for code translated "line by line" (same algorithm) I don't see any reason for Zig's performance not to be substantially identical to C (compiled with Clang).

It wouldn't surprise me if there are reasons Zig can do better or makes it easier to use better algorithms, etc, but you wouldn't expect that to show up in something as computationally straightforward as a cryptographic primitive.

By the way, as a C practitioner, I'm a fan of what you're doing with Zig; keep it up! C++, Go, and Rust don't appeal to me for all of the reasons mentioned in your talk.

mschwaig
The definition of 'perfect software' used in the talk is 'it gives you the correct output for every input in the input domain'. To that end no funny business like hidden allocations or hidden control flow should happen behind your back, because an out of memory error or some exception your code does not deal with explicitly would not be a correct output according to that definition. Of course you do not need that level of control for most projects.
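One way to see the "no hidden allocations" point in C terms (a minimal sketch; the function name is illustrative, not from the talk): when the allocation is explicit at the call site, out-of-memory becomes a defined, checkable output of the function instead of something that happens behind your back.

```c
#include <stdlib.h>
#include <string.h>

/* Allocation is explicit at the call site, and failure is a defined
 * output: out-of-memory becomes a return value the caller must
 * handle, not hidden behavior. Returns 0 on success, -1 on failure. */
int dup_buffer(const char *src, size_t len, char **out) {
    char *p = malloc(len);
    if (p == NULL) {
        *out = NULL;
        return -1; /* explicit out-of-memory result */
    }
    memcpy(p, src, len);
    *out = p;
    return 0;
}
```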
electrograv
While I agree with the content of your post literally, I think we often underestimate the importance of software reliability and performance, and end up giving it less attention than it deserves.

I understand the place for rapid prototyping etc., and that not every software application deals with life-and-death situations. But even for those that don't, I think our industry suffers a bit here. For example, to this day the Windows 10 start menu sometimes (randomly) refuses to open when I click on it, even multiple times. You could argue that this isn't a huge deal, because within 30 seconds it usually "fixes itself" (or something like that), but it doesn't shake the overall feeling that we're tolerating way more shoddy software in 2018 than we should.

Or in terms of performance: I know not every application needs bare-to-the-metal speed, but something feels wrong with the world when my “supercomputer” (compared to a 1990s PC, for example) literally lags when I’m typing or scrolling in some apps, when a 1990s era PC could respond to essentially the same content interaction with almost zero latency.

Some decades ago, we had far clunkier programming languages and far slower hardware, and yet somehow got better tangible, functional results in some cases. So I'm very much in favor of anything that moves us towards higher-quality software, and Zig (and Rust, and others) are all exciting examples of that.

Scarbutt
Really though, what's up with the windows 10 start menu...
bjourne
It is entirely possible that even if Windows 10 was perfect software according to the "it gives you the correct output for every input in the input domain" definition, it would still not open the start menu every time you clicked on the icon.
dmpk2k
But less likely.

It's clear what his point is.

electrograv
Of course we can redefine bugs as features, but to present that as an argument is a red herring: I can assure you, the Windows 10 start menu is not intended to fail, or to delay opening, when you click it or press the system button.
bjourne
Why is that a red herring? Isn't it relevant that even if the software is perfect (according to the definition) it might not do what the user wants?
euyyn
In over 6 years working in this industry, in projects with anywhere from dozens of users to millions of users, not a single bug I have encountered was caused by a "hidden allocation" causing the process to go OOM. Not one instance.

One of the two examples he cited in his introduction, the Android one, wasn't even caused by an unhandled OOM error. Android by design will kill unused processes if they're occupying memory that the foreground process needs.

If we want software without bugs, removing hidden allocations in languages is far down in the priority list.

mschwaig
Yes for most userland applications removing hidden allocations is not a concern, which is why for most general purpose programming languages this is way down the priority list.

Most languages make allocations behind your back so that your code can focus on the logic you actually care about, since you couldn't do anything about running out of memory anyway.

However there are projects where that level of control matters. For those projects C is currently still the default choice, even though it was designed more than 40 years ago. Some choices made back then might be huge liabilities for code we are writing now, because we still need that kind of language. A modern alternative to C could provide a huge value to all of us, mostly through more correct, secure and/or performant software.

theparanoid
The vast majority of bugs are due to corner cases that the programmer didn't think about. Quickcheck and related techniques expose many more bugs than eliminating OOM bugs does.
ajennings
I've wanted a language like this. Java's checked exceptions with some way to offload the bookkeeping to the compiler.

What about other run-time exceptions, like divide by zero? Are they checked?

What about Hoare's billion-dollar mistake (null pointer exceptions)? Does Zig have non-nullable references?

AndyKelley
Zig has a bunch of runtime safety checking. It applies to divide by zero as well as integer overflow, using the wrong union field, and many more. The runtime safety checks are enabled in Debug and ReleaseSafe mode and disabled in ReleaseFast and ReleaseSmall mode. (Actually they are used as assertions to the optimizer so that it can rely on extra stuff being undefined behavior.)

Pointers cannot be null. However, you can have optional pointers, which are guaranteed to use the 0x0 value as null and have the same size as normal pointers.
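A rough C analog of that optional-pointer layout (illustrative only): NULL plays the role of the null optional, so the optional costs no space over a plain pointer. The difference is that C won't force the check before a dereference, while Zig's type system does.

```c
#include <stddef.h>

/* Returns a pointer to the first positive element, or NULL if there
 * is none. NULL encodes the "no value" case, so this "optional"
 * is the same size as an ordinary pointer. */
const int *first_positive(const int *xs, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (xs[i] > 0)
            return &xs[i];
    }
    return NULL;
}
```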

Tempest1981
Definitely nice to have tools (compiler warnings, static analyzers) that can keep you from hurting yourself.

Of course many projects don’t use them, for various reasons.

I wonder: is turning on full compiler warnings, then fixing them - is it making my software better, or just satisfying some type of neuroticism?

trustmath
It's not neurosis if it has a useful purpose.
porphyrogene
Is it useful if it makes no difference to the user?
muxator
Maybe it will make a big difference once the software is way into its useful life, and it needs to be maintained and evolved.
clarry
There are stupid warnings about nothing to fix, at least in gcc. It's not making the software better.

One warning I recently disabled on $workproject is -Wtrigraphs.

berti
Trigraphs can readily be formed unintentionally, so having a warning when they change the meaning of the program seems valuable. There is a good reason they were removed entirely in C++17.
clarry
They are as good as removed from C.

No compiler I care about has had them enabled by default in more than two decades.

gbuk2013
From the video:

> Documentation is about 80% done

And yet there is no standard library documentation. :(

https://github.com/ziglang/zig/issues/965

throwaway487548
Many very bright people in the major sects of ML and Scheme tried to achieve perfection, and they have concluded, many times, that perfection implies a mostly-functional strongly (and, perhaps, even statically typed but with optional annotations only, like it is in Haskell) language, possibly with uniform pattern-matching, annotated laziness, and high-order channels, and select and receive in the language itself.

Such a language could be visualized as strict-by-default Haskell (with type-classes, uniform pattern-matching, minimalist syntax - everything, except monads) plus ideas from Erlang, Go and Ocaml.

Perfection and imperativeness, it seems, do not match.

vortico
Also, perfection and practicality do not match either, since imperative languages are the most practical for many applications. Pushing software toward perfection gives diminishing returns, and after some threshold, a company will have negative profit due to expensive development costs.
throwaway487548
ML is very practical. The only reasons it did not become popular (or was not chosen as, say, the basis for Java) are social rather than technical, and actually are insults to intelligence.
avip
It is actually a nice, informative video, but the title is as dumb as dumbness itself. Software should not be perfect. It should be useful. In places where perfection increases usefulness (such as an autopilot), go ahead, make it perfect. In most cases, striving for "perfection" is a profound misallocation of resources.
dj-wonk
Upvoted since this is a useful comment and worth mentioning.

I'd expect some downvotes are based on negative reactions to this part of the comment: "the title is as dumb as dumbness itself." (Dear avip: if your comment said "the title is off-base" you would have made your point just as effectively and without the downvotes.)

avip
Thanks, I really appreciate that (the message, not the upvote).
dj-wonk
I'll put it this way: "perfection" is too overloaded of a word to be particularly useful in this context.

I prefer to say it this way: I want software to adhere to a contract. That implies that we want people that use the software to understand that contract. To be more precise, I'd say that:

(1) a good contract defines the scope of correct behavior.

(2) a contract may (or may not) give some bounds (or constraints) about what happens outside of the scope of correct behavior

vortico
Yes, it gives the impression that the author thinks memory allocation errors are the only type of bug. Obviously there are thousands more, so it's kind of odd.
dj-wonk
Well said. There are many kinds of behavior that may be considered "out of specification" or not adhering to a contract. Here are just four:

* https://en.wikipedia.org/wiki/Undefined_behavior

* https://en.wikipedia.org/wiki/Timing_attack

* https://en.wikipedia.org/wiki/Thread_safety

* https://en.wikipedia.org/wiki/Privilege_escalation

Some of these probably don't come to everyone's minds right away. Please share your favorites.

The behavior(s) that a particular language guarantees is a design question. Once those guarantees are specified, we can objectively evaluate a particular language in terms of how well it does according to its own standards.

_cs2017_
Agreed.

However, I interpret the message from the video as "Let's make it really easy to achieve perfection".

In other words if we keep improving development tools and technologies, we may eventually be able to achieve perfection in each individual project nearly for free.

Whether this is realistic or not, I do not know.

tomp
So... the perfect programming language has error-prone manual memory management and rampant undefined behaviour (well, at least it can build the code to crash instead of "nasal demons")? Yeah, right.
c3534l
No one said the language was perfect. The language is to help you write perfect code, which was defined at the start as code that does not produce errors on any valid input.
tomp
Isn’t that by definition? After all, if the code produces an error, then obviously the inputs weren’t valid...
justicezyx
Oh come on, we already know that logical systems cannot even prove their own consistency, as Gödel showed, and now someone claims software should be perfect?!

Edit: the original title does not mention perfect.

TaylorAlexander
Yeah. Original title is “Zig: A programming language designed for robustness, optimality, and clarity – Andrew Kelley” and “Software should be perfect” is much more sensational.
luu
If you look at the video, you'll see that "Software Should Be Perfect" is the title slide of the talk (you don't even have to click play, it's there at the start). And then the first words out of the speaker's mouth (other than a sound check) are "I'm going to try to convince all you that software should be perfect".

The "original title" you're referring to is what the person who uploaded the talk to youtube titled the talk, not what the speaker titled the talk.

porphyrogene
Questioning if someone read the article or, in this case, watched the video is against the Hacker News guidelines. If you feel a commenter is not fully engaged with the topic you should decline to continue the thread.
notacoward
Meta-discussion is also against the guidelines. Being a rules lawyer is bad enough; don't be a rules hypocrite.

P.S. Yes, I know I'm also engaged in meta-discussion now. I can afford it.

tom_mellior
The "original title" is also what this video was posted as five months ago and discussed here: https://news.ycombinator.com/item?id=17184407
Ericson2314
Um, Gödel's incompleteness theorem is not a valid reason not to pursue formally verified software.
audunw
Come on, a title like that isn’t meant to be taken too seriously.

And you’re reaching a bit far. You can write a perfect function that adds two 32 bit integers. There is a subset of code that can be written perfectly. Especially if you don’t need Turing completeness to write it.

Zig just tries its best to make it easy to write as much low-level code as possible in a perfect way.

jcelerier
> You can write a perfect function that adds two 32 bit integers.

how ? the problem of adding two 32 bits integers is itself imperfect since you may at some point have big integers to sum, so any solution is inherently flawed, too

mathgladiator
Simple, operate in the appropriate mathematical ring. Just because there is overflow doesn't mean it failed.
jfim
> the problem of adding two 32 bits integers is itself imperfect since you may at some point have big integers to sum, so any solution is inherently flawed, too

Smalltalk solves that problem by promoting the result to arbitrary precision arithmetic. It does the same for integer division. For example, 5 / 2 returns a Fraction, not an Integer.

int_19h
The problem can be perfect depending on how it's stated. If you spell out the contract right, then it can be perfect. Some examples:

Given two 32-bit signed integers, produce a 32-bit signed integer that is the sum of the inputs; if the sum doesn't fit into 32 bits, wrap modulo 2^32.

Given two 32-bit signed integers, produce a 64-bit signed integer that is a sum of the inputs.

Given two 32-bit signed integers, produce either a 32-bit signed integer that is a sum of the inputs if it fits into 32 bits, or an error code indicating that it did not fit.

The problem with languages like C is that they let you be imprecise and get away with it. If you just run with "add two signed 32-bit integers", and implement it as "x + y" in C, it will compile, and it will run, but the result is undefined behavior if it overflows (note: it doesn't even mean that you get the wrong value - it can literally do anything at all). In Zig, if you write it as "x + y", it will panic on overflow, which is at least well-defined and fails fast; and the language provides you with tools to implement any of the three properly defined scenarios above in straightforward ways (i.e. you can add with wraparound, or you can add with an overflow check).
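The three contracts can be written out directly. Here is a hand-rolled C sketch (Zig has dedicated operators and builtins for wrapping and checked arithmetic; these are just the plain-C equivalents):

```c
#include <stdint.h>

/* 1. Wrapping add: unsigned arithmetic is defined to wrap mod 2^32.
 * (The conversion back to int32_t is implementation-defined before
 * C23, but is two's complement on all mainstream compilers.) */
int32_t add_wrap(int32_t a, int32_t b) {
    return (int32_t)((uint32_t)a + (uint32_t)b);
}

/* 2. Widening add: a 64-bit result always holds the exact sum. */
int64_t add_wide(int32_t a, int32_t b) {
    return (int64_t)a + (int64_t)b;
}

/* 3. Checked add: report overflow instead of invoking undefined
 * behavior. Returns 0 on success, -1 on overflow. */
int add_checked(int32_t a, int32_t b, int32_t *out) {
    int64_t sum = (int64_t)a + (int64_t)b;
    if (sum < INT32_MIN || sum > INT32_MAX)
        return -1;
    *out = (int32_t)sum;
    return 0;
}
```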

viach
There is a typo in title in word "profit"
revskill
JavaScript (the programming language) and the browser (the runtime environment) are a perfect combination. With JS, you have many choices of implementation. With the browser, a runtime error doesn't crash the user's device.
pshc
Browsers are far from perfect but their sandboxes are getting pretty good. I think you typoed WebAssembly though?
nerdponx
Same is true for the JVM or Python, no?
Ace17
Yeah. BTW it's also true for native apps running in user-mode.
clockbolder
Uhh.. wait, what?
viraptor
> With Browser, runtime error doesn't crash user's device.

There's a large number of JS bugs and sandbox escapes in all browsers. They certainly can crash the user device.

perlgeek
There's a certain irony that the presenter quickly dismisses exception-based error handling, and then the first example handles an error by printing a message and exiting -- exactly what an unhandled exception does.

This is more than a small piece of irony with a somewhat artificial example. Often, there simply isn't very much you can do with an error. In a SaaS world, you might not be able to tell the user what went wrong, in order not to disclose private information. You can log the error, but only if the network is available, and you can only log it to disk if the disk isn't full.

And then there are cases where you could try to explain the error to the user, but the reasons are very complicated and/or require intimate knowledge of the architecture to make sense, and/or require intimate subject knowledge to understand, or even obscure legal reasons.

coldtea
>There's a certain irony that the presenter quickly dismisses exception-based error handling, and then the first example handles an error by printing a message and exiting -- exactly what an unhandled exception does.

That would indeed be ironic if that case was the only thing exception-based error handling entails. That is, if every program just let all exceptions go uncaught. Which is nowhere near why exception handling was invented, or how it's used in practice.

Ace17
> the only problem with exception-based handling is the lack of explicitness. [...]. Writing "exception safe" code is tricky and non-obvious.

Does it still hold if you assume automatic resource (memory/locks/handles/etc.) freeing? (e.g RAII)

leowoo91
I think it is safe to assume Zig fits low-level applications, staying closer to the hardware, so the competition should be with Go instead. SaaS is probably out of that circle, being often web based with less focus on memory utilization.
AndyKelley
I made this argument in the talk: the only problem with exception-based handling is the lack of explicitness. It's too easy to call functions without being aware of the set of possible errors. Many C++ projects disable exceptions entirely. Writing "exception safe" code is tricky and non-obvious. Functions which should be guaranteed to never fail often can throw std::bad_alloc. Try-catch syntax forces incorrect nesting.

Related: only Zig has fast errors with traces. Other languages have no traces, or a high performance cost. Look up error return traces in the docs.
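For contrast, the explicit style can be approximated in C with error enums that every caller threads through by hand (a hypothetical sketch; Zig's error sets and `try` automate exactly this bookkeeping, and the compiler enforces it):

```c
/* A hypothetical "error set", spelled as an enum.
 * Every fallible function returns one of these; 0 means success. */
enum err { OK = 0, ERR_PARSE, ERR_RANGE };

enum err parse_digit(char c, int *out) {
    if (c < '0' || c > '9')
        return ERR_PARSE;
    *out = c - '0';
    return OK;
}

/* Callers must propagate explicitly -- the moral equivalent of
 * Zig's `try`, except nothing in C stops you from ignoring it. */
enum err parse_small_digit(char c, int *out) {
    enum err e = parse_digit(c, out);
    if (e != OK)
        return e;         /* propagate the callee's error */
    if (*out > 4)
        return ERR_RANGE; /* this layer's own error */
    return OK;
}
```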

wvenable
> It's too easy to call functions without being aware of the set of possible errors.

But that's entirely fine! Programmers are far too obsessed with exactly which functions trigger which errors when it absolutely doesn't matter. All you need to know is what errors you can handle and where in the code you can handle them.

If there is a network exception and you can recover and retry at the start of the operation, it literally doesn't matter which of the thousands of functions up the call stack could have triggered it. The only thing you need to know is the top-level network exception and where your network processing code starts. And if you can't handle a network error, it literally doesn't matter which one was triggered. The best you can do is abort and record a really good stack trace for that type of error.

lazulicurio
This [1] article by Raymond Chen is one of my favorite on exceptions.

I admit there is an elegance to exceptions, but when I'm trying to write reliable code, I find reasoning about exceptions significantly increases my cognitive load. Error handling is a place where a little more verbosity is okay because it keeps my focus on the local context rather than having to consider the entire call stack. I basically agree with the TL;DR of the article: "My point is that exceptions are too hard and I'm not smart enough to handle them".

[1] https://blogs.msdn.microsoft.com/oldnewthing/20050114-00/?p=...

erikpukinskis
This kind of thinking has permeated my coding practice.

I used to think "this is hard, I need to learn about more systems until I'm smart enough to do this."

Now I think "this is hard, there's probably something wrong with the architecture. I'll fix it."

wvenable
That's the mistake with exceptions; if you assume every method can throw any kind of exception it actually reduces your cognitive load. Now you just have to worry about what you can handle in the 2 or 3 places in your code you can actually handle exceptions. Instead of infinite number of places an error can occur and the huge number of possible errors.

The idea that error states are perfectly knowable everywhere in the code is a myth. Even if that were possible at one moment, the instant anyone changes code anywhere it will immediately be wrong.

lazulicurio
> Now you just have to worry about what you can handle in the 2 or 3 places in your code you can actually handle exceptions

This is only true if your application has no state or invariants that could possibly be invalidated in the face of exceptions. For instance, what would be your solution to the 'NotifyIcon' example given in the linked article?

> The idea that error states are perfectly knowable everywhere in the code is a myth. Even if that were possible at one moment, the instant anyone changes code anywhere it will immediately be wrong

This applies equally to both error codes and exceptions. If a method N layers down in your call stack changes its behavior, that's a potential breaking change regardless of your choice of error handling.

wvenable
> For instance, what would be your solution to the 'NotifyIcon' example given in the linked article?

The notify icon code is poorly structured to begin with. Simply creating a NotifyIcon object adds it to the UI? That's an awful design. If there was an add to UI step then it would be a non-issue; the half-constructed NotifyIcon object would never get added to the UI. This issue is not magically resolved by having to explicitly handle every error; you can make the same mistake with twice as much code.

> This applies equally to both error codes and exceptions. If a method N layers down in your call stack changes its behavior, that's a potential breaking change regardless of your choice of error handling.

I'm not talking about changing behavior, I'm talking about changing implementation. Behavior is part of the contract. But being able to safely change implementation is the fundamental principle of abstraction and is the basis for polymorphism. If a method today does a calculation using a database but tomorrow is refactored to use a webservice -- as long as the contract/behavior is unchanged -- then the rest of the code shouldn't have to know about it.

IanCal
> All you need to know is what errors you can handle and where in the code you can handle them.

Surely also what errors can occur. Can the network-using library throw disk I/O errors? Permission errors? Maybe it handles all the network errors internally and I don't need to deal with those at all.

wvenable
How do you handle those other errors though? If there is a disk I/O error, you're basically done. Permission errors, same thing. You report, abort, maybe retry.

You don't really need to know in the specific what kinds of errors can occur. If it's possible to recover from an exceptional situation, it's only useful to know if that situation is possible so you can avoid writing code you don't need to. But there wouldn't be any harm in writing an exception handler for an exception that can't happen except for that wasted effort.

IanCal
Well it depends on what I'm doing and why the errors might be thrown right? Is it something I can let the user retry if they know what's happening (e.g. IO error because the output folder doesn't exist)? Whether I retry on network errors can depend on what the error is - if the end service is responding saying my call is invalid for certain reasons there can be a good case to just die immediately rather than slowly backing off trying repeatedly in vain.

The flip side of this is I shouldn't need to worry about exceptions that cannot be thrown. When you say all you need to know is what you can handle and where, that list must be a subset of the possible list of things that can be thrown. There's no point worrying about whether I should be retrying something due to network faults if it never uses the network to begin with.

wvenable
I think of it this way: there are broad categories of exceptions that you can handle, and specific exceptions. But those are significantly fewer than the set of all possible exceptions my code (and the framework code) can trigger. I shouldn't have to worry about every possible exception, just the ones I can handle. Checked exceptions/errors mean you have to deal with the minutiae.

For your example if retrying network, I prefer to simply have a "ShouldRetry" property/interface on the exception itself since the triggering code has the best knowledge of how it should be handled. No need to know every possible network exception and sort them into retry or not retry.

> Is it something I can let the user retry if they know what's happening (e.g. IO error because the output folder doesn't exist)?

My favorite error handling is when you can just put a single handler at the event-loop of a UI based project. On exception you just show them the message and continue running. The stack unwinding code ensures the application maintains correct state and that the operation is unwound. If the user clicked "save" and a failure occurred they get the message and can retry if they want.

platz
> the only problem with exception-based handling is the lack of explicitness - It's too easy to call functions without being aware of the set of possible errors.

i.e. checked-exceptions?

marmaduke
Yep! There was a sort of epiphany for me the first time I wrote Java, coming from Python, when I realized that I could be sure my code handled anything the callee could throw at it.
sqrt17
There are still plenty of exceptions that are unchecked, i.e. subtypes of RuntimeException, which do not have to be declared on the callee. As well as people who think it's a good idea to throw instances of Exception and just tack a "throws Exception" on their methods.
loeg
Right — that's a failing of Java the language; not the concept of checked exceptions. At least one problem with Java's exception model that Zig does not have is that some exceptions are "unchecked."
srtjstjsj
Zig has system-exit, which is strictly less useful than unchecked exceptions, so Zig certainly has the same problem.
Jach
The 'failing' of checked exceptions comes from essentially being forced to couple the type signatures of your methods' successful results with their error results. A more explicit way to do this is with Optional/Either types, and now you don't need the checked exceptions feature nor need to get people to remember to check for a global errno or some other data convention like an empty string / null. There's a lot of boilerplate though, just like with checked exceptions.

I prefer a more decoupled/late-binding approach to error handling; so far Common Lisp does it the best I've seen.[0] The key insight CL people had was that often in the face of an error, you need help from somewhere away from your local context to figure out how to resolve it, but then for many cases, you want to return back to where the error has occurred and take the resolution path with your local context. In other languages that automatically unwind the stack to the exception handler, it's too late.

[0] http://www.nhplace.com/kent/Papers/Condition-Handling-2001.h...
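The Optional/Either idea can be sketched in C as a tagged union, so the error case is part of the function's result type rather than a separate channel (illustrative; a real Either type also has the compiler force the tag check):

```c
/* A result type carrying either a value or an error description. */
struct result {
    int is_ok;
    union {
        int value;        /* the Ok / "Right" case */
        const char *err;  /* the Err / "Left" case */
    } u;
};

struct result safe_div(int a, int b) {
    struct result r;
    if (b == 0) {
        r.is_ok = 0;
        r.u.err = "division by zero";
    } else {
        r.is_ok = 1;
        r.u.value = a / b;
    }
    return r;
}
```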

coldtea
Or just a compiler / IDE warning saying "X, and Y exceptions could be thrown here but are unhandled".

Then it's "checked" but catching is optional.

gameswithgo
Yeah, there are a lot of interesting things you can do if you design languages around an editor. Like complete, recursive type inference, so you don't have to annotate types on function signatures but the editor can display them, which is very useful; or showing what exceptions can be thrown even if the language doesn't make it explicit. This works out great when the editor is available and working on the platform you need.

F# is a language that is a lot like this, and I've recently been unable to get the editor with these features working on Linux, which makes it rather horrible. If you had to remote into a server and use vim, it would be even worse, and so on.

If we can get something like the language server idea working, really well, on all platforms and supported by all editors, then designing languages around certain IDE assumptions would more often be a good idea I guess.

coldtea
That's all very true, and it's sad that we haven't explored this space more.

It's a nice in-between: on one side, something like a Smalltalk image-environment-IDE or a Lisp Machine; on the other, a dumb IDE/editor that starts from source code and has to parse it into an AST on its own...

platz
https://twitter.com/TeaDrivenDev/status/1054177017351024641
int_19h
Once you have checked exceptions, you're basically as verbose as the Error monad anyway. Except that instead of properly encoding it in the function result type, you have a separate mechanism, which is kinda sorta but not quite like a type. In Java, this manifests itself in things such as it being impossible to write a higher-order function that has a signature of "takes some function F, and can throw everything that F can throw, plus some E".
bluejekyll
> Related: Only zig has fast errors with traces. Other languages have no traces or a high performance cost. Look up error return traces in the docs.

Out of curiosity, have you compared this to Rust’s backtrace mechanism? I’d be interested in the perf difference, if it’s available.

dbaupp
Rust's panics and their backtraces are essentially the same as C++ exceptions.
bluejekyll
I wasn’t actually thinking about panic, but the associated optional backtrace in Error types with Failure, for example: https://docs.rs/failure/0.1.3/failure/
dbaupp
Ah, I see.

Looking at the source code https://docs.rs/crate/failure/0.1.3/source/src/backtrace/int..., "failure" uses https://docs.rs/backtrace/0.3.9/backtrace/struct.Backtrace.h... which is the callstack at a single point, not a trace of how an error was propagated.

dbaupp
Link for convenience: https://ziglang.org/documentation/master/#Error-Return-Trace...
tinus_hn
Many languages have the possible exceptions as part of the function signature so you can't call a function without either handling, converting or passing any exceptions you may receive. It's a drag though.
Ace17
> the only problem with exception-based handling is the lack of explicitness. It's too easy to call functions without being aware of the set of possible errors.

That's the whole point of exceptions.

gameswithgo
there are a lot of points in language design that people don't all agree on, or that might be appropriate for one kind of work and not for another.
otabdeveloper2
> It's too easy to call functions without being aware of the set of possible errors.

Yes, and that's a very good thing.

> Many c++ projects disable exceptions entirely.

Yes, and they're objectively wrong.

> Writing "exception safe" code is tricky and non-obvious.

The word 'exception' here is redundant.

tjoff
> The word 'exception' here is redundant.

Exceptions make it much harder.

christophilus
Hey. Thanks for making Zig. I’m glad it’s moving forward.

Don’t let all the negative comments get you down. It happens to every language author. (See some of the nasty comments on Elm, Clojure, Go, Jai, etc.) I, for one, am happy that all of these languages exist, even those that don’t scratch my itches.

electrograv
Completely agree — Zig and its design philosophy are absolutely fantastic for its domain, and I’m very glad it exists.

I like Rust and other modern languages too, but Zig strikes me as just about the best possible contender for a true C/C++ replacement (due to seamless interop, and a very practical and well designed simple language core — as opposed to the complexity of something like Rust, for example).

1. Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? - James Mickens https://www.youtube.com/watch?v=ajGX7odA87k

2. James Mickens on JavaScript - James Mickens https://www.youtube.com/watch?v=D5xh0ZIEUOE

3. Creating containers From Scratch - Liz Rice https://www.youtube.com/watch?v=8fi7uSYlOdc

4. 2013 Isaac Asimov Memorial Debate: The Existence of Nothing - Panelists: J. Richard Gott, Jim Holt, Lawrence Krauss, Charles Seife, Eve Silverstein. Moderator: Neil deGrasse Tyson https://www.youtube.com/watch?v=1OLz6uUuMp8

5. 2016 Isaac Asimov Memorial Debate: Is the Universe a Simulation? - Panelists: David Chalmers, Zohreh Davoudi, James Gates, Lisa Randall, Max Tegmark Moderator: Neil deGrasse Tyson https://www.youtube.com/watch?v=wgSZA3NPpBs

6. Zig: A programming language designed for robustness, optimality, and clarity – Andrew Kelley https://www.youtube.com/watch?v=Z4oYSByyRak

7. Concurrency Is Not Parallelism - Rob Pike https://www.youtube.com/watch?v=cN_DpYBzKso

winkeltripel
> 3. Creating containers From Scratch

I read that as "creating containers IN Scratch (a visual game programming language from MIT)." It wasn't as impressive as my expectations, only because my expectations were so very high.

derangedHorse
James Mickens on JavaScript is hilarious! I've been a huge fan of his ever since I stumbled onto one of his AMAs on Reddit.
themoat
I just watched the first video. That was so good!
hessenwolf
What’s the message?
soobrosa
https://medium.com/@soobrosa/my-humble-james-mickens-shrine-...
Sep 28, 2018 · MaxBarraclough on Zig 0.3.0 Released
I don't know of an 'objective' thorough evaluation of Zig as it stands, but I hope these are of interest:

Previous HackerNews discussion on Zig: https://news.ycombinator.com/item?id=17184407

Also, perhaps not as objective as you'd like but still good resources:

Whirlwind tour of the basic thinking behind Zig: https://andrewkelley.me/post/intro-to-zig.html

A ~50 minute talk on Zig, by its creator: https://www.youtube.com/watch?v=Z4oYSByyRak

> I think there's room for other languages to occupy a similar space but they're need to focus on no-std-lib no-runtime operation (not always the sexiest target).

You're describing the Zig language!

It aims to be as fast as C, and unlike most languages that say this is a goal, it means it. There's no "almost as fast as C if you ignore the garbage-collector and the array-bounds-checking", it's actually as fast as C.

Its author values the minimalism of C, but wants to create a far better language for doing that sort of work. [0]

They're doing surprisingly well. They even managed to make a faster-than-C (faster even than hand-tuned assembly) SHA-2 [1]

[0] https://andrewkelley.me/post/intro-to-zig.html , https://github.com/ziglang/zig/wiki/Why-Zig-When-There-is-Al...

[1] https://ziglang.org/download/0.2.0/release-notes.html , it's also mentioned somewhere in this talk https://youtu.be/Z4oYSByyRak

AnIdiotOnTheNet
It should be noted that 0.2.0 is significantly different from master with regard to pointer syntax, due to the pointer reform [0]. There are a lot of other changes too, but that one's the most noticeable.

[0] https://github.com/ziglang/zig/issues/770

mcguire
Does Zig have a memory management story other than C's? Anything like Rust's borrow checker?
kbenson
> faster even than hand-tuned assembly

I think you and I are working off different definitions of what that means, as my definition doesn't really allow for a faster implementation (unless there's some really spooky stuff going on in the compiler). I suspect you mean faster than a popular hand tuned implementation.

pests
I don't believe it is possible to hand tune every program to beat a compiler.
jeremiep
When you realize the compiler's optimizations only account for about 10% of the total program's performance you find that the other 90% is entirely up to the programmer.

Architecture, data structures, batching operations, memory locality, and a bunch of other metrics are all concepts the compiler can't really help you with whatsoever and they have a much larger impact on performance than the 10% the compiler is actually able to optimize.

The problem is that either programmers don't care, or they can't make the distinction between premature optimizations and architecture planning.
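As a rough illustration of the kind of layout decision a compiler won't make for you, here is a hypothetical array-of-structs vs struct-of-arrays sketch in Rust (all names invented for the example). The compiler will happily vectorize either loop, but it will never restructure your data from one form into the other:

```rust
// Array-of-structs: each particle's fields are interleaved in memory.
#[allow(dead_code)]
struct ParticleAoS { x: f32, y: f32, vx: f32, vy: f32 }

// Struct-of-arrays: positions are contiguous, so a position-only
// pass touches a fraction of the memory the AoS version does.
#[allow(dead_code)]
struct ParticlesSoA { x: Vec<f32>, y: Vec<f32>, vx: Vec<f32>, vy: Vec<f32> }

fn sum_x_aos(ps: &[ParticleAoS]) -> f32 {
    ps.iter().map(|p| p.x).sum()
}

fn sum_x_soa(ps: &ParticlesSoA) -> f32 {
    ps.x.iter().sum()
}

fn main() {
    let aos: Vec<ParticleAoS> = (0..4)
        .map(|i| ParticleAoS { x: i as f32, y: 0.0, vx: 0.0, vy: 0.0 })
        .collect();
    let soa = ParticlesSoA {
        x: (0..4).map(|i| i as f32).collect(),
        y: vec![0.0; 4],
        vx: vec![0.0; 4],
        vy: vec![0.0; 4],
    };
    // Same result either way; the difference shows up in cache traffic.
    assert_eq!(sum_x_aos(&aos), sum_x_soa(&soa));
    println!("sum = {}", sum_x_aos(&aos));
}
```

The choice between these layouts is exactly the "architecture planning" being described: cheap to make up front, expensive to retrofit.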

emodendroket
In a sense you're right, but hand-tuning assembly is kind of an orthogonal problem to determining whether you're using the right algorithms and data structures.
steveklabnik
Where do you get that 90/10 split from? Just curious.
None
None
jeremiep
Talks from Mike Acton and Scott Meyers, specifically "Data-Driven Development" and "CPU Caches and why you should care" respectively.

I forgot exactly where I got that number, but it's been a pretty good metric so far.

In a nutshell: the compiler is great at micro-optimizations and absolutely terrible at macro-optimizations. The former will get you a few percent of perf boosts while the latter usually results in orders of magnitude of performance gains.

It's near impossible to apply macro-optimizations at the end of a project without massive refactors.

steveklabnik
Cool, I'll check those out, thank you!
Narishma
I believe the 10/90 number may be from this talk, though I don't have time to rewatch it to confirm: https://www.youtube.com/watch?v=rX0ItVEVjHc
MaxBarraclough
You're right to emphasise good data-structures and algorithms (also concurrency, parallelism, etc), but compiler optimisation is nothing to sneeze at. '10%' is laughably off-base.

From a quick google: compiler optimisation can accelerate CPU-bound code to over 5x the unoptimised performance. https://www.phoronix.com/scan.php?page=article&item=clang-gc...

jeremiep
They seem to be benchmarking very specific things and not actual applications. These numbers do not hold in the real world.
kbenson
But it should be possible to hand tune to match a compiler, right? And that would mean the compiler was not faster, which is the general claim I was calling into question.
mhh__
How so? Compilers aren't magic - by definition (assuming the current state-machine style of compilers, no AI or anything) you can just manually compile it.

Compilers are clever, but humans can also be really clever: They know how to think - You can change the algorithm entirely.

For practical reasons however (Premature optimisation also), trust your compiler.

MaxBarraclough
More a question of practicality, no?

Programmers are creative, compilers are well-tuned machines. Sometimes it's tough to beat a compiler's code, but I'm not sure that 'impossible' is meaningful.

Monkeys with typewriters could function as a highly parallel superoptimiser, after all.

MaxBarraclough
Obviously Zig's output (courtesy of its LLVM backend) can be expressed and distributed as assembly code, but it's not hand-tuned, it's compiler-generated.

Whether the hand-tuned assembly code they used was the fastest out there, I'm not sure. I'd hope so, or they're being misleading.

I might get time to re-watch the relevant part of the video - it's around the 21-minute mark - https://youtu.be/Z4oYSByyRak?t=1292

isaachier
Andrew explains in the video that the speedup is mostly due to compile-time speed-ups and potentially the use of the rorx instruction. It cannot be faster than the fastest assembly, because the fastest assembly is, by definition, the fastest assembly.
MaxBarraclough
> compile-time speed ups and potentially the use of the rorx instruction

Indeed, but the 'compile-time speedups' are a legitimate point in favour of the compiler. If they didn't occur to the assembly programmer, or struck them as too complex to pull off, then the compiler deserves the point.

Have to say I don't follow why the hand-tuned assembly doesn't use the rorx instruction. It's not mentioned in the assembly file, at least, but I thought that was the point? https://www.nayuki.io/page/fast-sha2-hashes-in-x86-assembly

Also, it's neat to see instruction-selection being so significant. Generally one might expect cache/branch behaviour to be the kicker.

> It cannot be faster than the fastest assembly

Well, 'fastest assembly' is the domain of so-called 'superoptimisers', and has us pushing at the stubborn bounds of computability.

We were talking about hand-written assembly-code, compared to compiler-generated. Odds are that none of the binaries tested were the optimal assembly.

The only interesting question is whether the hand-tuned assembly code they tested, was the fastest available at the time. If not, the whole demonstration is a straw-man, of course.

Also I don't like that the winning Zig program runs for so much longer. A good benchmark should provide ironclad assurances that no candidate is getting an unfair advantage re. loading/'warm-up'.

kbenson
My point is that all the fastest hand-tuned assembly has to do to match the compiled output is to do the same thing. The compiler has no ability to do anything a hand-tuned assembly cannot, by definition of how this all works.

So, saying it's faster than hand-tuned assembly doesn't really make sense, but saying it's faster than current optimal hand-tuned assembly could, but that also probably requires searching for a not-very-well tuned problem, as any optimization the compiler could apply is something that could be applied by hand in the hand-tuned assembly (even if it requires a lot of work to do so).

A general statement that might be made (and would also be very impressive) would be that it compiles to implementations that are on par with the current best hand-tuned assembly versions.

The small edge case where the original statement might be true is where some AI/ML/evolutionary techniques are applied and the assembly is faster in the generated output but we don't know why. That is, the speedups are not because of rules and heuristics, but derived by some complex process which we have little insight into, and thus can't manually replicate (which is what I meant by spooky stuff).

emodendroket
I mean, there's nothing a computer does playing chess that a human couldn't, and yet the world's best players simply can't beat the world's best chess-playing programs.
kbenson
This assumes that the problem of a chess game and providing optimal solutions for an algorithm defined in a higher level language (or pseudocode) are similar enough that the same strengths apply. They may be, but I don't think that's something you can assume without some sort of evidence.

Though, I guess the same could be said of my prior statements. To my knowledge, compilers aren't generally looking through a search space for optimized solutions as many common game AI algorithms do (which would be in line with my statements about AI/ML/etc above). I don't have the relevant citations or experience to state that with authority though.

Looking for more on Zig I found a talk [0] (~1 hour) on the language by Andrew Kelley from just a few months ago.

Edit: I see there are already questions about Rust in this thread. There's an audience question on Rust in the talk [1] and the creator states that Rust is Zig's biggest competitor.

[0] https://www.youtube.com/watch?v=Z4oYSByyRak

[1] https://www.youtube.com/watch?v=Z4oYSByyRak&feature=youtu.be...

throwawayjava
IMO we're about to see a robotics renaissance.

That embedded world needs a simple, performant, low-level language that isn't C. If Zig focuses on providing that for the robotics domain, I could see it becoming popular.

ionforce
What makes you think we're about to see a robotics renaissance?
stevehawk
Michael Bay's documentary Transformers
May 30, 2018 · 136 points, 70 comments · submitted by espeed
Yoric
I have read quickly through the tutorial, and this looks interesting. Objectives are similar to Rust, with a few twists.

A few differences that I can see:

- At first glance, Rust's `enum` looks safer and more powerful than Zig's `union` + `enum`, while Zig's `union` + `enum` appears more interoperable with C.

- Zig's `comptime` is quite intriguing. In particular, types are (compile-time) values and can be introspected.

- Zig's generics are very different from Rust's generics. No idea how to compare them.

- In particular, Zig's `printf` uses `comptime`, while Rust's `print!` is a macro.

- Zig's Nullable types/Result types look bolted on and much weaker than Rust's userland implementation.

- I don't see closures in Zig.

- I don't see traits in Zig.

- I don't see smart pointers in Zig, and more generally, I have no idea how to deallocate memory in Zig.

- Zig's memory management encourages you to check whether your allocations have succeeded, while Rust's out-of-the-box memory management assumes that allocations always succeed - if you wish to handle OOM, you'll need a "let it fail" approach.

- Zig's alias checker seems to be much more lenient than Rust's.

- I don't see anything on concurrency in Zig's documentation.

- Most of the Zig examples I see seem to fall in the "unsafe" domain of Rust by default. For instance, uninitialized memory or pointer casts seem to be ok in Zig (if explicitly mentioned), while they must be labelled as `unsafe` to be used in Rust.

- According to https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru..., Zig performs some checks that Rust does not perform in an `unsafe` block.

- Zig supports varargs, Rust doesn't (yet).

For the moment, I'll keep coding in Rust, but I'll keep an eye on Zig :)
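On the allocation point above: Rust has since stabilized a fallible-allocation escape hatch, `Vec::try_reserve` (it did not exist at the time of this thread). A minimal sketch of what checking an allocation looks like with it:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // Fallible allocation: returns Err instead of aborting on OOM.
    match buf.try_reserve(4096) {
        Ok(()) => println!("reserved ok"),
        Err(e) => println!("allocation failed: {}", e),
    }
    assert!(buf.capacity() >= 4096);
}
```

This is closer in spirit to Zig's convention of returning an error from every allocating call, though in Rust it remains opt-in rather than the default.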

tiehuis
To elaborate on a few of your remarks/questions.

> At first glance, Rust's `enum` looks safer and more powerful than Zig's `union` + `enum`, while Zig's `union` + `enum` appears more interoperable with C.

A `union(TagType)` in Zig is a tagged union and has safety checks on all accesses in debug mode. It is directly comparable to a Rust enum. Any differences are probably more down to the ways you are expected to access them and Rust's stronger pattern matching probably helps some here.
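For comparison, the Rust side of this: an `enum` is a tagged union whose payload is only reachable through the active variant. A minimal sketch loosely mirroring the `Payload` example from the Zig docs:

```rust
// A Rust enum is a tagged union: the tag is implicit, and the
// payload can only be reached by matching on the active variant.
enum Payload {
    Int(i64),
    Float(f64),
    Bool(bool),
}

fn describe(p: &Payload) -> String {
    match p {
        Payload::Int(i) => format!("int {}", i),
        Payload::Float(f) => format!("float {}", f),
        Payload::Bool(b) => format!("bool {}", b),
    }
}

fn main() {
    let mut p = Payload::Int(1234);
    println!("{}", describe(&p));
    // "Activating" another field means assigning a whole new value,
    // much like reassigning the whole union in Zig.
    p = Payload::Float(12.34);
    println!("{}", describe(&p));
}
```

The exhaustive `match` is where Rust's stronger pattern matching shows: forgetting a variant is a compile error rather than a runtime safety check.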

> I don't see traits in Zig.

Nothing of the sort just yet although it is an open question [1]. Currently std uses function pointers a lot for interface-like code, and relies on some minimal boilerplate to be done by the implementor.

See the interface for a memory allocator here [2]. An implementation of an allocator is given here [3] and needs to get the parent pointer (field) in order to provide the implementation.

It isn't too bad once you are familiar with the pattern, but it's also not ideal, and I would like to see this improved.

> I don't see smart pointers in Zig, and more generally, I have no idea how to deallocate memory in Zig.

You would use a memory allocator as mentioned above and use the `create` and `destroy` functions for a single item, or `alloc` and `free` for an array of items. Memory allocation/deallocation doesn't exist at the language level.

> I don't see anything on concurrency in Zig's documentation.

There are coroutines built in to the language [4]. This is fairly recent and there isn't much documentation just yet unfortunately. Preliminary thread support is in the stdlib. I know Andrew wants to write an async web-server example that multiplexes coroutines onto a thread pool.

> Zig supports varargs, Rust doesn't (yet).

It's likely that varargs are instead replaced with tuples, as a comptime tuple (length-variable) conveys the same information. I believe this fixes a few other quirks around varargs (such as not being able to use varargs functions at comptime).

[1] https://github.com/ziglang/zig/issues/130

[2] https://github.com/ziglang/zig/blob/15302e84a45a04cfe94a8842...

[3] https://github.com/ziglang/zig/blob/15302e84a45a04cfe94a8842...

[4] https://github.com/ziglang/zig/blob/15302e84a45a04cfe94a8842...

None
None
Yoric
> A `union(TagType)` in Zig is a tagged union and has safety checks on all accesses in debug mode. It is directly comparable to a Rust enum. Any differences are probably more down to the ways you are expected to access them and Rust's stronger pattern matching probably helps some here.

But if I understand correctly, out-of-the-box, Zig's `union` doesn't get a tag type, right? That's what I meant by Rust's `enum` being safer: you can use it safely in Zig, but you have to actually request safety, because that's not the default behavior.

I probably should have phrased it differently, though.

And the "more powerful" is about the fact that a Rust enum can actually carry data, while it doesn't seem to be the case with Zig.

>> I don't see smart pointers in Zig, and more generally, I have no idea how to deallocate memory in Zig.

> You would use a memory allocator as mentioned above and use the `create` and `destroy` functions for a single item, or `alloc` and `free` for an array of items. Memory allocation/deallocation doesn't exist at the language level.

So, it sounds like deallocations are not checked by default, right?

> I know Andrew wants to write an async web-server example set up multiplexing coroutines onto a thread-pool, as an example.

That would be a nice example, for sure!

None
None
tiehuis
> But if I understand correctly, out-of-the-box, Zig's `union` doesn't get a tag type, right? That's what I meant by Rust's `enum` being safer: you can use it safely in Zig, but you have to actually request safety, because that's not the default behavior.

Sure. I think that's more an effect of the choice of keyword defaults here. A straight union is very uncommon and is typically solely for C interoperability.

> And the "more powerful" is about the fact that a Rust enum can actually carry data, while it doesn't seem to be the case with Zig.

A tagged union can store data as in Rust. See the examples in the documentation [1]. Admittedly Rust's pattern matching is nicer to work with here.

To summarise the concepts:

- `enum` is a straight enumeration with no payload. The backing tag type can be specified (e.g. enum(u2)).

- `union` is an unchecked sum type, similar to a C union without a tag field.

- `union(TagType)` is a sum type with a tag field, analogous to a Rust enum. A `union(enum)` is simply shorthand to infer the underlying TagType.

> So, it sounds like deallocations are not checked by default, right?

If you're referring to whether objects are guaranteed to be deallocated when they go out of scope, then no, this isn't checked. There are a few active issues regarding some improvements to resource management but it probably won't result in any automatic RAII-like functionality. This is a manual step using `defer` right now.

[1] https://ziglang.org/documentation/master/#union

AnIdiotOnTheNet
> - `union` is an unchecked sum type, similar to a c union without a tag field.

My understanding was that Zig unions are tagged, but if you don't explicitly specify the tag type the compiler will choose for you. Indeed, the first example in the documentation suggests normal unions are tagged:

  // A union has only 1 active field at a time.
  const Payload = union {
      Int: i64,
      Float: f64,
      Bool: bool,
  };
  test "simple union" {
      var payload = Payload {.Int = 1234};
      // payload.Float = 12.34; // ERROR! field not active
      assert(payload.Int == 1234);
      // You can activate another field by assigning the entire union.
      payload = Payload {.Float = 12.34};
      assert(payload.Float == 12.34);
  }
Or do straight unions only get a hidden tag for debug/safe builds? My memory is a little fuzzy.
Yoric
> Sure. I think that's more an effect of the choice of keyword defaults here. A straight union is very uncommon and is typically solely for C interoperability.

Fair enough. That's why Rust also has a `union` keyword, which is always `unsafe`.

> A tagged union can store data as in Rust. See the examples in the documentation [1]. Admittedly Rust's pattern matching is nicer to work with here.

Ah, right, it could be any struct instead of being a bool or integer. I missed that.

> If referring to if objects are guaranteed to be deallocated when out of scope then no, this isn't checked.

I was wondering about that and double-deallocations.

> There are a few active issues regarding some improvements to resource management but it probably won't result in any automatic RAII-like functionality.

Out of curiosity, what kind of improvements?

mastax
Rust does support varargs in extern functions (FWIW): https://play.rust-lang.org/?gist=92fbdf9bdc95c09d16e30c03ffa...
Yoric
Ah, right.
wgjordan
This programming setup is the bomb.

Zig is designed not only for robustness, optimality, and clarity, but also for great justice.

It's great that Zig is an open source project so all your codebase belongs to us.

I hope 'Zig' takes off everywhere.

gavanwoolery
From the slides, these bullets struck me as things I have wanted for a while (to the extent that I have my own toy language that addresses some of them):

- compiles faster than C

- Produces faster machine code than C

- Seamless interaction with C libs

- robust/ergonomic error handling

- Compile-time code execution and reflection

- No hidden control flow

- No hidden memory allocations

- Ships with build system

- Out-of-the-box cross compilation

Yoric
> - No hidden control flow

That means no destructors, right?

edit Ah, if I read the documentation correctly, there are destructors.

nickez
I think it means no exceptions
iainmerrick
I think it means no destructors, no overloaded operators, no automatic getter/setters for properties. Nothing that does a function call unless you can immediately tell by looking at it that it’s a function call.
MaxBarraclough
You're right - there's no proper RAII (but there's a 'defer' keyword, which runs your expression on scope-exit, like BOOST_SCOPE_EXIT), no operator overloading, no exceptions, and all function calls look like function calls. It does have a language feature for handling error-codes though. [0] [1] [2]

Contrast with C#'s 'properties', where a method call (which might throw) is disguised as reading/writing a member.

My favourite example of unexpected semantics is D's lazy keyword, where, at the call-site, you have no idea whether your argument will be evaluated lazily or eagerly! [3]

C# has a pass-by-reference keyword which modifies the way an argument is treated, but it has the sense to force use of the keyword at the call-site too, so that everything is clear. [4]

I like the language's philosophy, I'll have to keep an eye on it. I suspect they'd do well to have the language compile to C, though. Is there any reason that wouldn't be a good fit? I see it has a templates system, but at (very) first glance I don't see anything that wouldn't map cleanly to C.

[0] https://ziglang.org/documentation/master/#defer [1] https://ziglang.org/download/0.1.1/release-notes.html [2] https://andrewkelley.me/post/intro-to-zig.html [3] https://dlang.org/articles/lazy-evaluation.html [4] https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

audunw
I experimented with Zig for embedded programming. The language is very promising for this application. It still needs some features and maturity, but it's getting there. The cross-platform support also has some missing pieces and bugs, but many of those were actually in LLVM in my case.

To me, Zig is the most promising replacement to C right now. It seems to do everything right.

Rust is of course another candidate, with a different philosophy, but I don't consider it a pure C replacement due to core language features requiring an advanced compiler. Zig, for the moment, should be pretty simple to write a compiler for. I consider this a strength of C, that something like TCC can exist.

tomsmeding
> So let's talk about memory. I know it's every programmer's least favourite topic

I thought that was time zones? (https://news.ycombinator.com/item?id=17181046)

qop
What the hell are we going to do when humans are multiplanetary? Keeping track of time on computers is already a mess and we aren't even on Mars yet.

I've worked in some dirty gross code, but messing around with time is one of the things I can just live without.

jokoon
Every time there is news about a language, the first thing I am interested in is reading the syntax; I don't really care about anything else.

Unfortunately it's often hard to find code examples.

Zig is no exception to that rule: I browsed 3 or 4 pages on the website, and could not find decent code examples.

It's frustrating. You should be proud of the syntax choices you're making, and it should be the first thing you see, so that developers might get a little interested - look at Rust, Go, and Nim.

Yoric
I just clicked on "documentation" and found examples:

https://ziglang.org/documentation/master/

zshrdlu
He's wrong about Lisp not being capable of handling heap exhaustion through "exceptions" (conditions in Common Lisp).
jokoon
I've been advised to write a preprocessor so that I can make a language that compiles to C. I'm mostly fine with C, except that I want to make it more readable, with a 'sweeter' syntax.

Features I want are:

- Pythonic indentation

- no trailing semicolons

- vector, hashmap and map containers

- immutable strings

citycide
Have you tried Nim? It has (almost?) everything you're looking for and compiles to C, C++ or JS.

https://nim-lang.org

klibertp
> It has (almost?)

No, it has all the features mentioned in GP. It's worth noting, though, that Nim is a higher-level language than Zig appears to be. Nim is garbage-collected, has an object system with multi-methods, has proper lambdas and higher-order functions, uses a kind of uniform access principle (`foo.func()` is the same as `func(foo)`, basically), has destructors and defer blocks, exceptions, iterators, generics, operator overloading, AST based (but procedural) macros and templates, a kind of type-classes (called concepts), built-in concurrency (thread pool) support, and more.

I'm not sure how well it would work on microcontroller, for example, although its garbage-collector is tunable in terms of memory and times constraints. But for anything higher-level than that, Nim is a really nice language, which reads very similar to Python but is natively compiled and much faster (among other features). A quick example to back up the similarity claim:

    proc getMem() : tuple[total: int, free: int] =
      let
        (output, _) = execCmdEx("free")
        fields = output
          .split("\n")[1]
          .split()
          .filterIt(it != "")
      return (fields[1].parseInt(), fields[^1].parseInt())
Really worth taking a look at, if you want conciseness and performance without compromising readability.
nimmer
It works on microcontrollers just fine using the new GC or by disabling it.
chriswarbo
> Features I want are: * pythonic indent * no trailing semicolon

I would recommend against making up a completely new syntax, since there would be no support in any existing tools (parsers, linters, IDEs, formatters, ...). Instead, I'd suggest picking a general-purpose syntax with the properties you want (indentation for grouping, newlines for sequencing). One excellent example is called "sweet-expressions", which was originally invented for Scheme but can actually be used to represent any structured data (especially code):

https://srfi.schemers.org/srfi-110

You can use tools like "sweeten" and "unsweeten" from https://sourceforge.net/projects/readable/files to convert between sweet-expressions and s-expressions. S-expressions are just raw syntax trees, so "unsweeten" is basically a ready-made parser which you can pipe into any other program you like, e.g. linters, macro expanders, pretty-printers, etc.

To make your language compile to C, you would need some way to "render" those syntax trees into C code. There are existing tools to do this, for example the C-Mera system can read in C syntax trees and write out C code https://github.com/kiselgra/c-mera

Your preprocessor could then be as simple as `unsweeten | cm c`

To add completely new features like immutable strings you would just need to define some representation for them in the syntax trees (or, if you want to make them the default for "double quotes" you should define an alternative representation for non-immutable strings), then write some find/replace macros to convert occurrences of this representation into an equivalent (but more verbose and elaborate) set of C syntax trees. Stick this find/replace step in between `unsweeten` and `cm` and you've got brand new syntax for very little work.

white-flame
His distinction with hidden memory allocations vs perfect software fails even in his C boolean example.

Calling a C function takes up stack space. That's a hidden memory allocation which can exhaust just like the heap does. But it's even worse, because a stack allocation failure has no clean failure path the way malloc returns a nice clean NULL. It's a stretch to still call this case "perfect software" under his definitions.

janvidar
Right, he answers a question about this. His plan seems to be to pre-calculate the stack space required at compile time using static callgraph analysis.

This is not yet implemented.

Yoric
How would that work with: 1/ recursion; 2/ calling C code; 3/ being called from C?
white-flame
Yes, but I'm talking about his assertion about that C example earlier in the talk, not about Zig. It's pretty fundamental to his arguments.
p0nce
Where are the generative capabilities? It seems there is no macro-like thing in Zig, though there are comptime parameters (and I guess: monomorphized templates).
AndyKelley
https://ziglang.org/documentation/master/#Case-Study-printf-...
p0nce
Not nearly what you have in D.
qop
There are too many cool languages and too few weekends. It's becoming more problematic, but at the same time it would seem impractical to optimize my life around learning every single thing that I want to learn.

How do I decide which things are important enough? I am running out of time in my life also.

I watched a video a while ago about how the universe is expanding faster than light can travel, which means the portion of the universe that we can observe is an increasingly small subset of what's out there.

Sometimes my hobbies and wish-i-had-time-for-that projects feel the same way, they're expanding faster than I'll ever catch up to.

I wonder if zig will ever pursue some sort of memory safety. I find rust very difficult and unwieldy, but I can totally grok the appeal of RAII.

espeed
> I wonder if zig will ever pursue some sort of memory safety.

That's exactly what Zig is designed for [1].

Andrew Kelley (andrewrk) discusses this in the talk. Zig is similar to Rust, but with memory safety designed into the core, not bolted on as an afterthought. And as the SHA-256 demo tests in the talk show, Zig is as fast or faster than C.

[1] http://ziglang.org

[2] https://github.com/ziglang/zig

Yoric
I may be wrong, but if I read the documentation correctly, Rust looks more memory-safe than Zig.

What am I missing?

espeed
There was a HN discussion ~4 months ago that touched on some of this:

"Unsafe Zig Is Safer Than Unsafe Rust" https://news.ycombinator.com/item?id=16226235

Also see this previous thread on Rust's stdlib OutOfMemory errors:

"Containers should provide some way to not panic on failed allocations" https://github.com/rust-lang/rust/issues/29802

Yoric
> "Unsafe Zig Is Safer Than Unsafe Rust"

Yes, I've seen that blog post. Definitely a useful analysis, but it doesn't strike me as a fundamental difference, i.e. adding this check (as a warning) to rustc doesn't look too hard and wouldn't break existing code.

Also, as pointed out in the title, that's code explicitly marked as `unsafe` in Rust, so it might not be an entirely fair comparison :)

> "Containers should provide some way to not panic on failed allocations" https://github.com/rust-lang/rust/issues/29802

Definitely a problem, but it's not what people usually call "memory safety". The problem you mention is that of "fallible allocation". A typical definition of "memory safety" is that you can never get a memory access error.

Rust's behavior is indeed to panic in case of failed allocation, letting developers catch the error at a higher level. Barring any bug in rustc, that behavior is memory-safe.

Now, this behavior is quite opinionated and it turns out that it doesn't match all applications, so handling failed allocations is indeed being "bolted on as an afterthought". Just not memory safety :)

(especially since it's pretty hard to get a memory access error in Rust without deactivating safety checks, and it doesn't seem to be nearly as difficult in the current version of Zig)

whyever
Just to clarify, allocation is entirely implemented in the Rust standard library, not the language/compiler.
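
To illustrate the fallible-allocation question being debated here, a minimal sketch in Rust: `Vec::try_reserve` (stabilized in Rust 1.57, well after this thread) surfaces allocation failure as a `Result` instead of a panic/abort, which is the kind of local handling the Zig side is advocating.

```rust
use std::collections::TryReserveError;

// Fallible allocation via try_reserve: the caller gets a Result
// instead of a process-wide panic or abort on OOM.
fn try_make_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(len)?; // returns Err instead of aborting
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    // A reasonable allocation succeeds...
    assert!(try_make_buffer(1024).is_ok());
    // ...while an impossible one reports the failure to the caller
    // (capacity overflow is reported through the same error type).
    assert!(try_make_buffer(usize::MAX).is_err());
}
```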
espeed
> i.e. adding this check (as a warning) to rustc doesn't look too hard and wouldn't break existing code.

One point of distinction: Zig does not have compile-time warnings, only errors. Simplicity by design. Either it will compile or it won't. Removing that uncertainty means less grey area and no space for edge cases in between.

Yoric
I realize that this should be an error rather than a warning. But one of Rust's policies since 1.0 is to not break existing code, so any mechanism that detects errors that were not previously detected must report them as warnings rather than errors; otherwise some existing applications and libraries would stop compiling overnight.

I generally find this a sane policy, but YMMV.

espeed
OT FYI: I just looked at your profile and noticed we have something in common. Your HND matches my DOB ~> 224
mschwaig
Rust has lifetimes and ownership as language concepts, so you have to be explicit about who owns a resource (memory, for example), but it does not give you a lot of control over what to do when an allocation fails.

Zig is designed so that you can still programmatically deal with a failing allocation, as well-written C does, but it does not have an ownership system like Rust's.

Yoric
Ok, got it. Apparently, espeed was using an unusual definition of "memory safety", which made their comment a bit odd :)
tom_mellior
What was unusual about that? Here is Wikipedia: "Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers.[1] For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences."

This is the mainstream definition. Do you think Zig doesn't have this, or Rust has more of this than Zig?

What Rust does indeed have is freedom from race conditions, which can be viewed as a concurrent memory safety property. And what Rust also has is static checking of one of the above properties, namely dangling pointers (but not the other, namely buffer overflows). Do you think that if it's not statically checked, it's not safe? Because if you think that, you are wrong.

steveklabnik
Rust does not have freedom from race conditions. It does have freedom from data races. They’re different things.
Yoric
I was referring to espeed's "Zig is similar to Rust, but with memory safety designed into the core, not bolted on as an afterthought", which he later explained with two examples, one of which actually justifies the words "bolted on as an afterthought", but refers to local handling of fallible allocation.

While local handling of fallible allocation is a desirable feature for many applications, I confirm that neither I nor Wikipedia have ever heard it classified as "memory safety" :)

tom_mellior
I agree that that aspect isn't relevant to what I would call memory safe. But how do you get from "espeed claimed something unreasonable" to "Rust looks more memory-safe than Zig"?
Yoric
> I agree that that aspect isn't relevant to what I would call memory safe. But how do you get from "espeed claimed something unreasonable" to "Rust looks more memory-safe than Zig"?

Well, there are several reasons. For instance:

- Rust tries very hard to prevent accidental mutations through aliases. In all the common cases and most of the uncommon ones, it works quite well.

- Rust ensures that your memory can't be deallocated twice and that you can't dereference a dangling pointer.

- Rust ensures that any object that needs to be accessed from several threads is either read-only or somehow protected.

- Rust ensures that a context may not deallocate an object it does not own, nor mistakenly believe that it still has ownership of an object after it has transmitted this ownership to another context.

At this (early) stage in the development of Zig, I have the impression that these aspects of memory safety do not exist in Zig, hence my comment.

Now, I realize that Zig has at least one memory-safety check that `unsafe` Rust does not have, so it is entirely possible that both languages concentrate on different aspects of memory safety and/or different implementation techniques.

I also believe that Zig is interesting and has lots of potential. But I suspect that Zig needs to grow a little before anyone can claim that Zig is more memory-safe than Rust (which is what I understand from the comment I was answering).
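
The ownership and aliasing guarantees listed above can be seen in a small sketch (this is my illustration, not from the thread): ownership moves into a function, the value is dropped exactly once there, and any later use of the moved-from binding is a compile-time error.

```rust
// A minimal sketch of the ownership rules listed above.
fn consume(v: Vec<i32>) -> usize {
    v.len()
} // `v` is dropped (deallocated) exactly once, here

fn main() {
    let data = vec![1, 2, 3];
    let n = consume(data); // ownership moves into `consume`
    assert_eq!(n, 3);
    // `data` can no longer be used here: `println!("{:?}", data)`
    // would be rejected at compile time ("value used after move"),
    // which is how double-free and use-after-free are ruled out.
}
```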

qop
I hate to say it, but this might be an instance where more jargon might help.

"Memory safe" can have numerous implications because there are numerous methods for attacking memory of a program. As far as we know, there's no language that handles EVERYTHING, so maybe we need to start qualifying what these languages mean. Rust landing page does this well, "no race conditions, move semantics" but then rust users will parrot "it's memory safe, it's memory safe" and they can't explain how to deal with failed allocations in rust. Not that that's necessarily a big deal, but it's just bad terminology for what is a rather complex issue.

It's almost like saying a language is "type safe". While reassuring, what does that actually mean? What kind of types? Is there polymorphism? Are there row types? Is there substructural typing? Type theorists generally have lots of terms to specify what flavor of types they're talking about, and that works well for them.

Rust is kind of bringing a new generation of younger hackers to systems programming and you darned kids need to get off the grass and start being more specific about memory semantics in PLT! /s but kinda serious

steveklabnik
Rust’s standard library types do not give you that control. The language doesn’t know about allocation, and you can build whatever you want on top of it.
qop
#-#-# for Steve only

"you can build whatever want on top of it" is a flashy way to say that "rust can't do that" in this instance.

I suspect rust adoption would be much more effective if it wasn't presented as a panacea-like solution (it isn't) that the entirety of computer science has always dreamed of (it hasn't), ready to be used to write literally everything on earth.

The hubris of the rust team empowers projects like zig, which are clearly communicating what their offering can and cannot do.

You're influential in the rust microcosm, why not communicate this? Why not build a campaign directly from the utility rust is actually presenting and not the pseudo-philosophical utility that mozilla erroneously assigns to rust? For a team that hacks bits and registers all day, I'm shocked how far rust evangelism has deviated from just the simple truth of the code.

I wanted to share this with you privately, but I couldn't find you on any platform I feel comfortable using, and for some reason the best tech news aggregator in the world still doesn't do privmsg yet. So I hope I can ask you to take the perspective of what I'm trying to communicate here and not the "why are you talking so loud?!" perspective.

I see these little rust-isms where a defect or some aspect of rust is spun in such a way as to empower it rather than to illustrate its technical reality.

Rust not having the ability to deal with failed allocation is or will be a show stopper for somebody, somewhere, eventually. It should be portrayed as such (or fixed and loudly announced) rather than saying, "oh, build whatever you want on top of that behavior" as if it's some brilliant idea.

Disclaimer: I'm not a rust contributor.

Disdisclaimer: there's love in my heart for rust, but the marketing is reaching counterintelligence levels of euphemism and misdirection, and I'm confident that you have the influence and awareness to begin to fix this in the rustverse.

dbaupp
> "you can build whatever want on top of it" is a flashy way to say that "rust can't do that" in this instance.

It most definitely is not. "It" in "on top of it" refers to the language itself and the "core" subset of "std", which is designed to cater for situations without any OS-level allocation at all, not to the "std" library's handling of OOM.

Given that Rust can work without allocation, it can work with custom allocation-failure handling too: as a minimal proof, start with core only (no std) and create custom Box and Vec types (this probably isn't the best path, but it shows it is possible).

> I suspect rust adoption would be much more effective if it wasn't presented as a panacea-like solution (it isnt) that the entirety of computer science has always dreamed of (it hasn't) and is ready to be used to write literally everything on earth in.

> The hubris of the rust team empowers projects like zig, which are clearly communicating what their offering can and cannot do.

The Rust team is good at clearly communicating the boundaries of Rust. Others may misinterpret that/be over-enthusiastic, but they're often called out (even by members of the Rust team) when it is noticed.

steveklabnik
Yes, thank you. To your parent, I know you said this was for me only, but this is what I’d say.

Well, with one more addition; you can see me doing exactly what you say in this very thread: https://news.ycombinator.com/item?id=17187648

I quite often tell people that Rust is not a panacea, or to not use it if it doesn’t fit their use-case: https://www.reddit.com/r/cpp/comments/8mp7in/comment/dzr3eot

tom_mellior
> And as the SHA-256 demo tests in the talk show, Zig is as fast or faster than C.

Do you know at what timestamp (roughly) this is mentioned in the video? YouTube makes it hard to quickly skip around in the video to find this. Or, better, a link to a written source? I'd be interested in what exactly is being compared. It's easy to cheat on benchmarks, especially when you aren't doing it on purpose.

espeed
Right at about the 20m mark: https://youtu.be/Z4oYSByyRak?t=20m2s
tom_mellior
Thanks!

Grabbing the sources from https://www.nayuki.io/page/fast-sha2-hashes-in-x86-assembly and compiling them more or less as recommended (and unlike they are compiled in the talk):

    $ clang -O3 sha256-test.c sha256.c -o sha256-test ; for i in 1 2 3; do ./sha256-test ; done
    Self-check passed
    Speed: 197.2 MB/s
    Self-check passed
    Speed: 196.1 MB/s
    Self-check passed
    Speed: 196.1 MB/s
    $ gcc -O3 sha256-test.c sha256.c -o sha256-test ; for i in 1 2 3; do ./sha256-test ; done
    Self-check passed
    Speed: 209.7 MB/s
    Self-check passed
    Speed: 209.2 MB/s
    Self-check passed
    Speed: 209.1 MB/s
This is a baseline for us. It has nothing to do with Zig and nothing to do with Andrew's machine (hardware or compiler versions).

But wait, the page above suggests that adding -march=native might help. Indeed it does:

    $ clang -O3 -march=native sha256-test.c sha256.c -o sha256-test ; for i in 1 2 3; do ./sha256-test ; done
    Self-check passed
    Speed: 255.6 MB/s
    Self-check passed
    Speed: 259.0 MB/s
    Self-check passed
    Speed: 254.3 MB/s
    $ gcc -O3 -march=native sha256-test.c sha256.c -o sha256-test ; for i in 1 2 3; do ./sha256-test ; done
    Self-check passed
    Speed: 275.1 MB/s
    Self-check passed
    Speed: 268.4 MB/s
    Self-check passed
    Speed: 270.0 MB/s
In the talk Andrew suggests that the difference might be due to using rorx instructions, which Zig might be able to do due to aggressive loop unrolling. Does -funroll-all-loops help GCC? It turns out that it doesn't, and that it cannot, on this program, because the C code for sha256_compress is already fully unrolled.

But anyway, are we using rorx instructions at all? We are, but only with -march=native:

    $ clang -O3 -S sha256.c -o - | grep -c rorx
    0
    $ clang -O3 -march=native -S sha256.c -o - | grep -c rorx
    542
And:

    $ gcc -O3 -S sha256.c -o - | grep -c rorx
    0
    $ gcc -O3 -march=native -S sha256.c -o - | grep -c rorx
    576
[Edit: Changed the grep from "ror" to "rorx", which changed GCC's numbers a bit; it does generate "ror" without x without -march=native.]

So. Testable theories:

(a) On Andrew's machine, the same setup but with Clang using -march=native would outperform or at least match Zig.

(b) Zig's compiler internally uses the equivalent of -march=native, possibly implicitly, at least in --release-fast mode.

Nothing here is meant to imply that Andrew is dishonest. There are just lots of variables to take into account, and sometimes we don't. Also, "slower/faster than C" by a few percent is not very meaningful if even C compilers disagree by 6% or so, and the same C compiler with the right flags disagrees with itself by a lot more.

CyberDildonics
I don't understand. Rust has memory safety as a core design principle. Zig has manual memory allocation like C.
tiehuis
Zig is memory-safe if you keep runtime checks enabled (e.g. with the debug or release-safe optimization levels), but it does not have the compile-time guarantees of Rust. I don't think the parent comment is a fair reflection.

That being said, some other potentially interesting safety aspects that are present (or are being explored) to give some idea of the target audience:

  - compile-time alignment checks [1]
  - maximum compile-time stack usage/bounding [2]
I would expect safety at the end of the day in Zig will be more similar to modern C++ and its smart pointers, alongside (optional) runtime checks, than a full lifetime system. Will have to see what the future holds.

[1] https://ziglang.org/documentation/master/#Alignment

[2] https://github.com/ziglang/zig/issues/1006
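
The runtime checks mentioned above are of the same flavor in both languages; a sketch in Rust terms (Zig's 2018-era syntax was still in flux, so Rust is used for illustration): indexing is bounds-checked, and `get` makes the check explicit by returning an `Option` instead of ever producing undefined behaviour.

```rust
// Runtime bounds checking: `a[9]` would panic rather than read out
// of bounds, while `get` surfaces the check as an Option.
fn main() {
    let a = [10, 20, 30];
    assert_eq!(a.get(1), Some(&20)); // in range
    assert_eq!(a.get(9), None);      // out of range: no crash, no UB
}
```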

skybrian
If Zig (or some other language) turns out to be the next big thing, we'll be hearing about it again and again.
himom
Personal taste, not factual One-True-Way™

On the surface, it looks almost like Rust meets JS with a sprinkling of Ruby/Smalltalk.

Semicolons in 2018, really? Indentation and line-oriented parsing make code more beautiful. Heck, in general, enums, arrays and dictionary literals shouldn’t even need commas if there’s one tuple per line. Extra typing is a waste of time and clutters up code with distracting, Christmas-ornament “blink tags” that go “Ho, ho, ho” when anyone walks by.

    # package name is the same as the directory path
    # module name is the same as the filename sans extension
    # big modules can be broken up into separate include files

    con X: int = 6   # constant

    var M: int = 17  # module-public variable

    mix any          # module-private mixin
        λ   any? -> bool
            each {x| if (Block? ? yield(x) : x): return true}
            false

    ext []: mix any  # module-private type extension

    typ T: int[10][8]
        λ Zero? -> bool: any? {x| x.any? {y| y == 0}}

    typ S
        a, b: int
        x, y: float
        s[7..6], t[5], u[4..2], _[1], v[0]: byte

        λ   Good?  -> bool: xGood? & yGood?   # no & / && distinction, precedence by expression type
        λ   xGood? -> bool: x > 0
        λ   yGood? -> bool: y > 0

    uni Q        # union
        I: int
        F: float

    λ   thisIsPrivate(x: int) -> int
        x + M + 1

    λ   ThisIsPublic(x, y: float) -> float
        π * (x + y - X)
flohofwoe
That's just your personal opinion, obviously. You need some sort of separator for putting several statements on the same line anyway, and requiring them everywhere is better than JavaScript's or Python's optional semicolons. Also, I'd guess Zig's main audience is C programmers, and semicolons are not one of the problems that need fixing in a "better C" language.
donpdonp
I've been building a toy project in Zig over the last couple of weeks. Zig is incredible. After only a short while working with it, it feels like it deserves the title of a "better C". With no runtime overhead you get:

* Type inference - var name = "Bob";

* Maybe types (as in Elm/Haskell) that prevent NULL-pointer bugs, with syntactic sugar that encourages their use for function return values.

  if (doit()) |result| {
    // result is always a populated Struct
  } else |err| {
    // err is always a populated error
  }

  // a return type of !Something means either an error or a Something
  fn doit() !Struct {
  }

* Nullable types - var object: ?&Object = null; if you need null, and var object: &Object = aObject; if you want the safety.

* Great C interop, though there is more work to be done here (gtk.h, for instance, is too much for Zig today). Zig's approach is unique here too: you specify .h files in the Zig source code and the Zig compiler translates the C in the .h into Zig. It's not a function bridge or wrapper; it's Zig objects and Zig functions available to your Zig code.

  const c = @cImport({
    @cInclude("curl/curl.h");
  });

  var curl = c.curl_easy_init();
  if(curl != null) {
    _ = c.curl_easy_setopt(curl, c.CURLoption(c.CURLOPT_URL), url_cstr.ptr);
  ...
* Standard library operations that allocate memory take an 'allocator object' parameter, which should give the programmer great control over memory management (I didn't get into this myself). WebAssembly is a compiler target, and there's lots more.
MaxBarraclough
Do nullable types and maybe types add much over checked dereferences (à la Java)?

I imagine they might be good for performance (fewer checks), but do they really help with correctness/convenience/elegance/readability/etc.?

nonsince
Yes, because you get told about problems at compile-time rather than at runtime.
MaxBarraclough
I think I get it now: use different types for nullable and non-nullable pointers. Only non-nullable pointers can be dereferenced, and the conversion from nullable to non-nullable must be done explicitly, with different flow in the case that the pointer turns out to be null (which is to say, a non-nullable pointer will never enter scope).

Dereferencing null is impossible, and the programmer is forced to explicitly handle null values.

This contrasts with C, where the same type is used for a nullable and a non-nullable pointer, so the compiler can't help out, the programmer is at risk of forgetting/failing to keep track of the difference between the two, and null-dereferences may occur (and give you undefined behaviour).

Java references take the same approach as C pointers, except all dereferences are checked at runtime, and dereferencing null throws an exception.

I like it! It does better than the "never null, only cromulent values" approach (like C++ references), which can be inconvenient in practice. It makes null-dereferences impossible, and doesn't do anything funky that would introduce needless runtime overhead.
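
The scheme described above can be sketched with Rust's `Option` (Zig's `?T` optionals work analogously; this is my illustration, not from the thread): a plain `&i32` can never be null, and a nullable `Option<&i32>` cannot be dereferenced until the null case is handled explicitly.

```rust
// A non-nullable reference never enters scope without the null case
// having been handled first.
fn deref_or_zero(p: Option<&i32>) -> i32 {
    match p {
        Some(r) => *r, // `r: &i32` is non-nullable in this arm
        None => 0,     // the null case must be written out
    }
}

fn main() {
    let x = 42;
    assert_eq!(deref_or_zero(Some(&x)), 42);
    assert_eq!(deref_or_zero(None), 0);
}
```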

Yoric
In my experience from Rust & co, yep, this improves correctness a lot. Elegance, not so much.
zoul
Swift’s syntax sugar for Optionals makes working with nullable types much more elegant, too.
Yoric
My personal experience of Swift vs. Rust is that Swift's syntax sugar is more readable for most examples, but Rust lack of syntax sugar scales better to more exotic examples.

YMMV

whyever
I think some of the combinators can be elegant. E.g. `Option::map`.
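
For instance (my example): `map` transforms the value if present and propagates `None` otherwise, with no explicit null check at the call site.

```rust
// Option::map: the closure only runs when a value is present.
fn main() {
    let some_len = Some("hello").map(|s| s.len());
    let no_len = None::<&str>.map(|s| s.len());
    assert_eq!(some_len, Some(5));
    assert_eq!(no_len, None);
}
```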
dbaupp
I think you mean Result or Either like Haskell etc., not Maybe.

Additionally, Swift has essentially the same approach to C FFI.

May 13, 2018 · Boulth on Big Integers in Zig
Zig is really interesting, and explicit allocators are a feature I miss in many other systems languages (like Rust).

This presentation by the Zig author is a good introduction to the concept: https://youtu.be/Z4oYSByyRak

bluejekyll
Custom allocators are coming in Rust: https://github.com/rust-lang/rfcs/pull/1398

While I haven’t had a need for this myself, I believe you can use that feature on nightly today.

Boulth
Excellent, thanks for the direct link to RFC!
Apr 07, 2018 · 2 points, 0 comments · submitted by zeronone
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.