HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Null References: The Billion Dollar Mistake

Tony Hoare · InfoQ · 184 HN points · 41 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Tony Hoare's video "Null References: The Billion Dollar Mistake".
Watch on InfoQ [↗]
InfoQ Summary
Tony Hoare introduced null references in ALGOL W back in 1965 "simply because it was so easy to implement", says Mr. Hoare. He talks about that decision, which he now considers "my billion-dollar mistake".
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
The subject has been discussed to death here on HN and in other places. If you're unfamiliar, this talk by Hoare (the inventor of implicit nullability) is as good a place to start as any:

https://www.infoq.com/presentations/Null-References-The-Bill...

Null References: The Billion Dollar Mistake by Tony Hoare

https://www.infoq.com/presentations/Null-References-The-Bill...

That's the third Hoare for me today.

1. Tony Hoare - "I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." Null References: The Billion Dollar Mistake - https://www.infoq.com/presentations/Null-References-The-Bill...

2. C.A.R. Hoare - "The most important property of a program is whether it accomplishes the intention of its user."

3. Graydon Hoare - "Rust started as a personal project (on my own laptop, on my own time) in 2006. It was a small but real 17kloc native compiler for linux, mac and windows by the time Mozilla began sponsoring it in 2009-10."

EDIT: C.A.R. = Charles Antony Richard = Tony. Not to be confused with his twin brother C.D.R.

oaw-bct-ar-bamf
2. C.A.R. Hoare - "The most important property of a program is whether it accomplishes the intention of its user."

This is something I am guilty of too often. I get lost in fixing the minute details and edge cases that might become reality only under very special circumstances.

It is good to step back once in a while and focus on whether the thing you are working on actually has an effect on the user experience or user interaction with your product.

Don’t lose sight of the big picture amid all the small edge cases.

xdavidliu
> C.A.R. = Charles Antony Richard = Tony. Not to be confused with his twin brother C.D.R.

I thought you were making a Lisp joke, and I Googled for Cdr Hoare.

Tony Hoare called null his billion dollar mistake[1] for good reason. It’s especially remarkable given that Mr. Hoare is easily one of the finest computing scientists ever. Even the most brilliant can get it wrong. And only the most brilliant will admit it. Hats off to Sir Tony.

[1] https://www.infoq.com/presentations/Null-References-The-Bill...

KMag
He said that he knew it was a kludge at the time, but it was seductively easy to implement... just add a special case to the type checker so that if the type being assigned to a reference is wrong, allow it if it's null. Properly working nullable and non-nullable references (or making optionality and references orthogonal concepts) would have been a much deeper change to the language and the compiler.
If you want a couple of other examples, here's what I've got offhand:

Perhaps the original example: Hoare helped popularize null pointers, and then gave the "Null References: The Billion Dollar Mistake" talk https://www.infoq.com/presentations/Null-References-The-Bill...

The creator of nodejs talking about some of its mistakes: https://www.youtube.com/watch?v=M3BM9TB-8yA

Nada Amin was part of the Scala team, and also wrote a wonderful paper about how Scala's type-system is fundamentally unsound: https://namin.seas.harvard.edu/publications/java-and-scalas-...

bradfitz was a core Go team member, and wrote a post about what their net.IP type got wrong: https://tailscale.com/blog/netaddr-new-ip-type-for-go/

I have no doubt there are many more examples too, but those are the ones I can think of offhand.

Mar 18, 2022 · 19 points, 13 comments · submitted by dlcmh
ljosifov
Not sure if it is possible to have "only valid object exists, ever" and have it working for real. In pointers we have NULL to signal "not a pointer". In floats we have NAN to signal "not a number". In integers I miss not-an-integer enough that I often designate INT_MIN as not-an-integer representation. I notice when working with indices, index 0 is a convenient not-an-index value. Don't know enough theory to figure if this empirical anecdote amounts to something more than that.
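
To make that concrete, here is a minimal Go sketch of the in-band-sentinel approach described above (the lookupAge function and NotAnInt constant are purely illustrative, not from any particular codebase):

  package main

  import (
    "fmt"
    "math"
  )

  // Reserve INT_MIN as the "not an integer" marker, as described above.
  const NotAnInt = math.MinInt64

  func lookupAge(name string) int64 {
    if name == "alice" {
      return 34
    }
    return NotAnInt // no data for this key
  }

  func main() {
    if age := lookupAge("bob"); age != NotAnInt {
      fmt.Println("age:", age)
    } else {
      fmt.Println("no age on record") // forgetting this check silently uses a bogus value
    }
  }

Nothing stops a caller from doing arithmetic on NotAnInt, which is exactly the failure mode the type-level approaches discussed below try to rule out.
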
t0suj4
The simple existence of "Not an object" value is fine. The abstraction built over the raw value should be good enough that it will confront you when you attempt to use it.

The next level is constructing types in a way that you can avoid such low level details of whether the pointer is valid.

In other words, encode the state in the type instead of a value that you need to test against every time you want to use it.

anamax
That's easier said than done, and it's not obvious how to do it with types.

Consider a non-blocking read. What the caller does depends on whether the read returns a value or "no data available". If the type of the object returned depends on whether there was a value, you just replaced a null check with a type check.

You can avoid that check by giving the caller multiple return points and having the callee pick between them. Exceptions are the most obvious way to do that. However, exceptions happen immediately, while type/null checks can be done later.
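
For concreteness, here is a small Go sketch of the scenario (a buffered channel stands in for the non-blocking source): however the "no data available" case is encoded, the caller still branches on it at the call site.

  package main

  import "fmt"

  func main() {
    ch := make(chan int, 1)
    ch <- 42

    // Non-blocking read: either a value arrives or we take the default branch.
    select {
    case v := <-ch:
      fmt.Println("read:", v)
    default:
      fmt.Println("no data available")
    }
  }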

t0suj4
The callee can make that decision; it's called continuation-passing style.
anamax
Multiple continuations is "multiple exit points".

There aren't many languages which support multiple continuations and SQL isn't one of them.

oofabz
I agree that null pointers are a bad idea and I try to avoid them in my code as often as possible. I go out of my way to avoid returning null to indicate an error condition. I use value types instead of reference types whenever possible, since a value type can't be null. Instead of using null to indicate an error condition, I use a second variable or throw an exception.

Just because your programming language supports null pointers doesn't mean you need to use them. You can avoid the pitfalls discussed by Tony Hoare simply by refraining from using them. You will still need to handle null pointers coming into your code from outside, like from libraries, but your own code does not need to generate them.

bsenftner
Please, stop this nonsense. It is not as if the NULL/zero value was introduced after some code was written. This is a human written logic error, programmer(s) write logic and that logic will sometimes have bugs. This is someone getting paid to state an obvious fact and then throw philosophical phrases around in an attempt to sound like they are doing something. This is pure time wasted nonsense.
musicale
On the other hand, having messages to nil do nothing in Objective-C had the interesting effect of making some iOS apps crash gradually rather than all at once. One might argue that it improved robustness in a strange way.
anamax
The problem with NULL is that there's only one.
jfengel
I particularly appreciate that as a billion-dollar mistake, Javascript decided to make it twice (null and undefined).
zwerdlds
The question is, what is the order of the mistake. When you have two, is it linear ($2b), quadratic ($1e18), or exponential?
voobster
Probably logarithmic due to diminishing returns. Having 20 nullish types isn't much worse than having 2. The biggest difference is going from 0 to 1.
jka
This reminds me of a microquest that I foolishly embarked on during COVID-19 lockdown:

There's no standard, widely-accepted way to unambiguously serialize and deserialize nulls within querystrings. JavaScript is perhaps one of the more frequent client languages for that, but Python and many other server-side languages are affected too. Could we do something about that?

I liked the look of the proposal at https://github.com/whatwg/url/issues/469 that was working towards this.

In short, it provides the ability for "?a&b=" to represent the variable "a" containing null and "b" containing an empty string.
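
As a rough illustration of the ambiguity being solved (using Go's standard net/url package on the server side), a bare key and an empty-valued key currently parse identically:

  package main

  import (
    "fmt"
    "net/url"
  )

  func main() {
    q, err := url.ParseQuery("a&b=")
    if err != nil {
      panic(err)
    }
    // Both print "": today there is no way to tell "a" (intended null)
    // apart from "b=" (intended empty string).
    fmt.Printf("a=%q b=%q\n", q.Get("a"), q.Get("b"))
  }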

Mar 15, 2022 · kevinmgranger on Go 1.18
In case anyone is interested, here are my technical critiques of Go:

- Boilerplate increases the surface area that a bug can hide in. The fact that most of the boilerplate is around error handling is especially worrying. Yes, the flexibility of "Errors are values"[1] is nice. But I also don't know any languages where errors _aren't_ values, so the main value add seems to be reduced boilerplate compared to individual try/catches around each function call.

- Go manages to repeat the Billion Dollar Mistake[2]. Things like methods working on nil receivers are cool, but not worth the danger or messiness.

- Even worse, for a language that claims to value simplicity, the fact that nil sometimes doesn't equal nil[3] is... honestly, I can only consider that a bug (a short sketch of this follows the links below).

[1]: https://go.dev/blog/errors-are-values

[2]: https://www.infoq.com/presentations/Null-References-The-Bill...

[3]: https://go.dev/doc/faq#nil_error
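
For readers who haven't hit [3] before, here is a minimal sketch of the behaviour: a typed nil pointer stored in an interface value compares unequal to the untyped nil.

  package main

  import "fmt"

  type MyErr struct{}

  func (*MyErr) Error() string { return "boom" }

  func mayFail() error {
    var e *MyErr // nil pointer
    return e     // stored in the error interface, it is no longer == nil
  }

  func main() {
    err := mayFail()
    fmt.Println(err == nil) // prints false, even though the underlying pointer is nil
  }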

uryga
> But I also don't know any languages where errors _aren't_ values

(look, i know you understand how exceptions work, please bear with me)

yes, Exception objects are technically values, but you don't return them to the caller; you throw them to the, uh, catcher. basically, you get a special, second way of returning something that bypasses the normal one! but in the EaV approach, errors are just returned like every other thing.

the uniformity of EaV comes in handy when doing generic things like mapping a function over a collection - you don't have to worry if something will throw, because it's just a value! and that lets you go to some pretty powerful places w.r.t. abstraction: see e.g. haskell's `traverse`.

but yeah, EaV needs some syntactic sugar to reach ergonomic par with exceptions, otherwise you get if-err-not-nil soup :P
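
For anyone who hasn't written much Go, the "if-err-not-nil soup" looks roughly like this (the config-loading example is hypothetical; it just shows the explicit propagation that the syntactic sugar mentioned above would abbreviate):

  package main

  import (
    "encoding/json"
    "fmt"
    "os"
  )

  type Config struct {
    Addr string `json:"addr"`
  }

  func loadConfig(path string) (*Config, error) {
    raw, err := os.ReadFile(path)
    if err != nil {
      return nil, fmt.Errorf("reading %s: %w", path, err)
    }
    var cfg Config
    if err := json.Unmarshal(raw, &cfg); err != nil {
      return nil, fmt.Errorf("parsing %s: %w", path, err)
    }
    return &cfg, nil
  }

  func main() {
    cfg, err := loadConfig("app.json")
    if err != nil {
      fmt.Println("error:", err)
      return
    }
    fmt.Println("addr:", cfg.Addr)
  }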

int_19h
Exceptions can still be treated as syntactic sugar for EaV; it just means that every expression has an implicit unwrap-and-propagate.
uryga
are there any languages that do this? Rust's `?` is similar but the propagation is explicit
int_19h
That's what I'm saying - any language with exceptions can be treated as if it was all Rust-style Result<T, E>, but with implicit ? after every expression. Well, and E is an open variant (like e.g. extensible variant types in OCaml), unless the language has checked exceptions like Java.
kevinmgranger
Definitely. But Go's boilerplate for error handling is an overcorrection from the implicit propagation. As folks have mentioned, Rust's `?` or Swift's `try` strike a nice middle ground.
kevinmgranger
Right, so the concept in Go is misnamed. It's not Errors are Values, it's Errors Have Normal Control Flow.

Returning errors makes many things more manageable, definitely. But where they really shine, like in the mapping example, isn't possible in Go. Unless I'm mistaken about how Go generics work.

(By the way, I'm a huge fan of how error handling works in Rust and other related functional languages. Definitely not advocating for the classic way of doing Exceptions).

uryga
> But where they really shine, like in the mapping example, isn't possible in Go.

oh yeah, definitely! Go's version of EaV with multiple returns is pretty lackluster compared to a proper Result type. afaict it's kind of "the worst of both worlds" -- all of the boilerplate of plumbing errors manually w/ none of the benefits.

kevinmgranger
Folks are saying that sum types might eventually come to go after some experimenting with generics. It'll be interesting to see where it goes.
One philosophy is that a zero input should basically never be a valid input unless you're taking in integers. https://www.infoq.com/presentations/Null-References-The-Bill... . The idea is to try to never create null objects. I think it's interesting and helpful in practice to check/fail early.
samhw
> The idea is to try and never create null objects

This is one of those things which sounds great in theory, and survives approximately 17 seconds of working on a real codebase.

SQueeeeeL
Yeah, I think this might be a tab/spaces mixing situation like python had, where it had to be restricted by the language compiler because it's just so tempting.
simiones
> One philosophy is that zero input basically never be a valid input unless you're taking in integers

This is obviously not possible, since lots of complex objects are integers or are themselves composed of integers, and for this entire class of objects the zero value usually makes sense.

For example, a 3-vector (in the physics sense) is a 3-tuple of integers `type Vec3 struct {x, y, z int}`, with `Vec3{0, 0, 0}` being the origin. How does a function specify that it can take an optional 3-vector? Even worse, you can imagine a struct for a potentially 0-volume cube represented as 8 Vec3s, one for each vertex, whose 0 value is again a valid object.

Of course, you could gratuitously add an `IsValid bool` flag to the struct which must be true, but the cube example shows how this quickly becomes annoying and bloated.

SQueeeeeL
In this case, the pointer to the Vec3 would not be null? It would be a defined memory address which points to 3 zeros. I think it's a difference between objects and primitive data types.
simiones
The discussion was that in Go, given the current lack of generics, you have two options to have an optional parameter:

1. Take a *Vec3, using nil as the "optional is missing" value.

2. Take a Vec3, but consider some value as "invalid", often the 0-value where possible.

1 invokes exactly the issues in Hoare's talk. 2 doesn't work for all types, as some types don't have any bit patterns that aren't useful; this is what I was trying to point out.

BenFrantzDale
Why in almost 2022 do people put up with a language lacking generics?
simiones
I don't know about others, but we chose to use Go despite the problems with the language itself because we liked several properties of the Go runtime. Particularly, the fact that Go is garbage collected (so it's memory safe), but has much lower memory and start time overhead than even the latest versions of Java or .NET or even Haskell; and the ecosystem is much greater than something like OCaml. This low overhead is very useful for a microservices-based application.

For my team, we would have loved to have something like C# (or even Java), with generics, exceptions, lambdas, LINQ/Streams etc., but with Go's low overhead and large ecosystem. Go the language was a major negative point, just not bad enough to outweigh the advantages of the runtime.

SQueeeeeL
I can't think of any situations where (1) would be insufficient, but I'm probably wrong. But I agree that setting an isValid bit for every object is overkill.
saghm
I think the point is that it would be nice to be able to express the idea of a value not being present without having to risk accidentally trying to dereference a nil pointer. That's what Option brings to the table, although arguably Go could just provide an implementation of that like they do for slices and maps without actually having to introduce full generics to the language.
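
As a rough sketch of what that could look like (this is not a real standard-library type, just an illustration using Go 1.18 generics):

  package main

  import "fmt"

  // Option is a hypothetical user-defined optional built on Go 1.18 generics.
  type Option[T any] struct {
    value T
    ok    bool
  }

  func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
  func None[T any]() Option[T]    { return Option[T]{} }

  // Get forces callers to acknowledge the "absent" case instead of risking a nil dereference.
  func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

  func main() {
    if v, ok := Some(3).Get(); ok {
      fmt.Println("present:", v)
    }
    if _, ok := None[int]().Get(); !ok {
      fmt.Println("absent")
    }
  }
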
chakkepolja
Apart from what the sibling comment mentions, declaring it as a pointer will often result in a heap allocation in Go.
simiones
Well, in the link that you shared, Tony Hoare explains in quite some detail why having support for null pointers is not great.

Additionally, a *Vec3 carries more than "a Vec3 or nil": it also aliases the caller's value. For example, the following code will behave very differently depending on whether foo takes an Optional[Vec3]-style value or a *Vec3.

  optionalVec := Vec3{0, 0, 0}
  foo(&optionalVec)
  fmt.Printf("theVec: %+v", optionalVec) 
    // would print {0, 0, 0} with an Optional[Vec3] type
    // will actually print {2, 0, 0} with *Vec3
  ...

  func foo(optionalVec *Vec3) {
    if optionalVec != nil {
      optionalVec.x += 2 //we use a different origin
    }
    [...]
  }
Jtsummers
But many types do have a "natural" zero value, that you may want to use, beyond just the numbers. Consider the empty string, empty list, empty dictionary, empty set. Those are zero values for their types. By reserving them as pseudo-nulls you lose the ability to distinguish between an object which happens to be empty and an object which is meant to mean "null". This is where Optional (or similar mechanisms) come in handy. If you actually want to avoid creating null, Optional is a cleaner and more consistent way to accomplish this without needing to magic up a null-value in your type's valid set of values or create a one-off Optional (MaybeDictionary, MaybeList).
saghm
It's even worse than that in my opinion: the zero value for maps and slices is actually nil even though they're not pointers, meaning the empty values are not the zero values. This lets you do "fun" things like this (my memory of the syntax might be slightly off):

  var map1 map[int]int = nil    // the zero value of a map type is nil, not an empty map
  var map2 *map[int]int = &map1 // a perfectly valid, non-nil pointer to a nil map
  var map3 map[int]int          // also nil, with no indication of it

Interface types also can be nil even if they're not explicitly pointers because heap pointers are needed under the hood due to the size of the value not being knowable at compile time. You can pretty quickly get into some hairy situations trying to ensure that a value isn't nil when combining pointers or maps/slices with interfaces.
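
A quick sketch of why that bites in practice (this is standard Go semantics, nothing project-specific): nil maps and slices tolerate reads and appends, but writing to a nil map panics at runtime.

  package main

  import "fmt"

  func main() {
    var m map[string]int // zero value: a nil map, not an empty one
    var s []int          // zero value: a nil slice

    fmt.Println(len(m), m["missing"]) // reads are fine: prints 0 0
    s = append(s, 1)                  // appending to a nil slice is fine
    fmt.Println(s)                    // prints [1]

    // m["boom"] = 1                  // writing to a nil map would panic here
  }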

I really wish that separating the concern of present-versus-not-present from value-versus-reference were more mainstream. I understand why this was traditionally the case for lower-level languages, but even higher-level ones like Java are mostly based on the paradigm that only references can be "not present". This irks me in a similar way to how earlier versions of Java wouldn't allow default implementations of interface methods and instead required abstract classes, which a class could not extend more than one of.

> I would guess null is the default because many of us start out with languages like C or Java.

Go is not a language that "just happened" (as could be said of JS and PHP). Go is designed. Designed by heavyweight language designers (from Wikipedia): Robert Griesemer, Rob Pike and Ken Thompson.

These people knew of Haskell, type systems, and the merits of type safety. I would expect them to know of "billion dollar mistake[1]" that is null. But sadly Go still carries on with the mistake.

I too care more about null-safety and proper sum types (in combination with nice switch/match statements and/or pattern matching) than about generics. In the Elm language I found that not having generics is perfectly okay (just a little annoying sometimes).

I find "not null safe languages" not okay nowadays, and I hold this opinion since before Go's first appearance (2009). I really wonder how the designers came to this decision.

I'm afraid this mistake can never be fixed, as null checks are already idiomatic Go. Java also could not fix it (which may be one of the main reasons behind Kotlin). Maybe the best thing we can hope for is a Kotlin-kind-of-language for Go (question marks after types to indicate nullability). It's just sad.

[1]: https://www.infoq.com/presentations/Null-References-The-Bill...

chrisseaton
> I would expect them to know of "billion dollar mistake[1]" that is null. But sadly they had not.

What makes you think that they didn’t know about it, rather than that they did know but decided they weren’t interested in the trade-offs for this particular new language?

cies
I misworded what I thought. I think they knew, but I have no clue why they went ahead with the mistake nonetheless.
5e92cb50239222b
>Java also could not fix it (which may be one of the main reasons behind Kotlin).

C# added some null-safety features despite having a 20-something year baggage of legacy code. While it might not be the perfect solution (these checks work only at compile time and you can enable them per-file), I find that they work great for new projects. You have to be extra careful at boundaries (interfaces with third-party libraries which have not added ? annotations yet, API calls, and so on), but they save a lot of headache inside your own code.

cies
Cool. It's a bit like what Kotlin did. Yet after some experience with Kotlin I found it is not perfect. "You have to be extra careful at boundaries" exactly, with Kotlin too.

The question mark is the best thing if you have not built your language with null safety to start with. Please look at Elm for a good example of what real null safety looks like.

With Go they had the chance to do the right thing from the start. I really wonder why they didn't.

lazulicurio
Also worth noting that Java may be able to fix it in the future after Valhalla drops (and this may mean a better implementation for JVM-based languages too).
cies
I sure hope so. Not holding my breath though. :)
Sep 03, 2021 · 1 points, 0 comments · submitted by ZeljkoS
Personally, i really like having multiple return values, since being able to give a function multiple inputs but only being able to return a single thing always felt weird - if you require any metadata in a language like Java, then you'd have to come up with wrapper objects and so on.

That said, i really dislike the following from the article:

  if (error) {
    // you can handle the error as you see fit
    // you can add more information, end the request, etc.
  }
To me, that's an example of "opt in" error handling, which in my eyes should never be the case. The compiler should force you to handle every exception in some way, or to check for it. My ideal programming language would have no unchecked runtime exceptions of any sort - if accessing a file or something over a network can go wrong in 101 ways, then i'd expect to be informed about these 101 things when i make the dangerous call.

Handling those wouldn't necessarily have to be difficult: in the simplest case, just wrap it in an implementation-specific exception like InputBufferReadException (regardless of whether you're working with a file or network logic) and let them bubble upwards to the point where you actually handle them properly in some capacity, be it with retry logic, showing a message to the user, or letting external calling code handle it.

Why? Because whenever you're given the opportunity to ignore an exception or you're not told about it, someone somewhere will forget or get lazy and as a consequence assumptions will lead to unstable code. If NullPointerExceptions in Java were always forced to be dealt with, we'd either have nullable types be a part of the language that's separate from the non-nullable ones (like C# or TypeScript i think), or we'd see far more usages of Optional<T> instead of stack traces in our logs in prod, because we wouldn't be allowed to compile code like that into an executable otherwise.

Of course, that's my subjective take because of my experience and things like the "Null References: The Billion Dollar Mistake": https://www.infoq.com/presentations/Null-References-The-Bill...

I think languages like Zig already work a bit like that: https://ziglang.org/learn/overview/#a-fresh-take-on-error-ha...

masklinn
> Personally, i really like having multiple return values, since being able to give a function multiple inputs but only being able to return a single thing always felt weird - if your require any metadata in a language like Java, then you'd have to come up with wrapper objects and so on.

MRV is nice and useful, and “error as value” languages usually have ways to return multiple values (usually in the form of a tuple), but it’s not proper and correct for error signalling, because the error and non-error cases are almost always exclusive.

In that case, using MRV means you have to synthesise values for the other case (which makes no sense and loses type safety), and that you can still access the “wrong” value of the pair.
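
A small Go sketch of that failure mode (the parseAge helper is hypothetical): the error path must synthesise a value anyway, and nothing stops the caller from using it.

  package main

  import (
    "fmt"
    "strconv"
  )

  func parseAge(s string) (int, error) {
    n, err := strconv.Atoi(s)
    if err != nil {
      // A value must be returned even on failure; 0 is synthesised here.
      return 0, fmt.Errorf("bad age %q: %w", s, err)
    }
    return n, nil
  }

  func main() {
    age, err := parseAge("oops")
    fmt.Println(age) // compiles and prints 0 even though err != nil
    _ = err          // checking the error is entirely up to the caller
  }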

> To me, that's an example of "opt in" error handling, which in my eyes should never be the case. The compiler should force you to handle every exception in some way, or to check for it.

That is what Rust does (including a clear warning if you drop a `Result` without interacting with it at all), although for convenience reasons (because it doesn’t have anonymous enums and / or polymorphic variants) the errors you get tend to be a superset of the effectively possible error set.

Though that’s also a factor of the underlying APIs, when you call into libc it can return pretty much any errno, the documentation may not be exhaustive, and the error set can change from system to system. Plus the error set varies depending on the request’s details (a dependency which again may or may not be well documented and evolving).

So when you call `open(2)`, you might assume a set of possible errors which is not “everything listed in errno(3) and then some”, but a wrapper probably can not outside of one that’s highly controlled and restricted (and even then it’s probably making assumptions it should not).

tialaramex
Does a panic count as "handling" the error?

I actually agree with Rust's choice here. You, the programmer, know whether some particular error is something you can cope with or not and it's appropriate to panic in the latter case. Where you draw the line is up to you, in a ten line demo chances are "the file doesn't exist" is a panic, in your operating system kernel maybe even "the RAM module with that data in it physically went away" is just a condition to cope with and carry on.

My litmus test here is Authenticated Encryption. The obvious and easy design of the decrypt() method for your encryption should make it impossible for a merely careless or incompetent programmer to process an unauthenticated decryption of the ciphertext. This makes most sense if you have an AE cipher mode, but it was already the correct design for both MAC-then-Encrypt and Encrypt-then-MAC years ago, and yet it's common to see APIs that didn't behave this way, especially in languages with poor error handling.

In languages with a Sum type Result like Rust, obviously the plaintext is only inside the Ok Result, and so if the Result is an Err you don't have a plaintext to mistakenly process.

In languages with a Product type or Tuple returns like Go, it's still easy to do this correctly, but now it's also easy to mistakenly fill out the plaintext in the error case, and your user may never check the error. Dangerous implementations can thus happen by mistake.

In languages with C-style simple returns, it's hard to do this properly, you're likely using an out-buffer pointer as a parameter, and your user might not check the error return. You need to explicitly clear or poison the buffer on error and even then you're not guaranteed to avoid trouble.

In languages with Exceptions, the good news is that the processing of the bogus plaintext probably doesn't happen, but the bad news is that you're likely now in a poorly tested codepath that isn't otherwise taken, maybe far from the proximate cause of the trouble. Or worse, your user wraps your annoying Exception-triggering decrypt method and repeats one of the above mistakes since they don't have better options.

masklinn
> Does a panic count as "handling" the error?

Undeniably? Fundamentally the language proposes, the developer disposes[0] and short of Rust being a total language, panics were going to be a thing.

So while one can argue that the ability to panic should not be so prominent, it's certainly an error handling strategy which was going to be used anyway, is perfectly valid (in some situations), and is convenient when you're designing or messing around.

Hell, even ignoring an error is a perfectly valid handling strategy, and indeed pretty easy to implement, just… explicit (though not the most visible sadly, it's much harder to grep a `Result` being ignored than one being unwrapped or expect-ed).

The important bit is that Rust warns you about the error condition(s), and lets you decide how to handle it.

[0] though there are panicking Rust APIs where it doesn't just propose

Joker_vD
> Does a panic count as "handling" the error?

> You, the programmer, know whether some particular error is something you can cope with or not and it's appropriate to panic in the latter case.

Please don't. I've seen enough libraries whose authors had exactly this mindset; I do not enjoy when some fifth-party dependency thrice-removed, upon encountering an unexpected circumstance, decides that it can't bear to live in this cruel world any more and calls "abort()", killing the entire process: which happens to be a server process, running multiple requests in parallel, for which a failure to serve any single request for any reason whatsoever does not warrant aborting all the other requests.

hypertele-Xii
> The compiler should force you to handle every exception in some way, or to check for it.

This is the single most unproductive mis-feature a language could have for me. Programming is already a tedious exercise of wrangling your thoughts into an alien form the computer can understand. You want, on top of everything else, the computer to refuse to run your program at all, unless you explicitly handle every possible edge case?

I get that some people are engineers with rigid requirements. I'm an artist - I sculpt the program to produce output I'm not entirely clear on. I'm trying to make the computer do interesting, unexpected things.

Say I'm making a game. I wanna load a character sprite from an image file and draw it on the screen. Do I really need to handle all the possible ways that file could fail to load right now, before even seeing a preview of what it should look like? Hell no!

It's like having an assistant who refuses to do anything unless you specify everything! Hey assistant, get me a coffee. "I refuse to get you a coffee because you didn't specify what I should do in case the coffee machine is broken." Aargh!

KronisLV
> You want, on top of everything else, the computer to refuse to run your program at all, unless you explicitly handle every possible edge case?

Precisely!

Even better - let the IDE suggest to you all of the possible exceptions and when you're feeling lazy or are hacking away at a prototype, either let it add a "throws SomeException" to the method signature and make it someone else's problem up the call chain, or just add a catch all after you've handled the ones that you did want to handle!

After all, none of us can recall the hundreds of ways network calls can get screwed up, but we're pretty sure what to do in at least a subset of those, and we'd also forget about those without these reminders. Not only that, but when you're writing financial code or running your own SaaS, you'll at the very least want your error handling code to be as bulletproof as the guarantees offered to you by your language's rigid type system.

Then, when you've finished hacking together your logic, your instance of SonarQube or another tool could just tell you: "Hey, there are 43 places in your code where you have used logic to catch multiple exceptions" and then you could review those to decide whether further work is necessary, or whether you can add a linter ignore comment to the code explaining why you don't want to handle the edge cases, or just do so in the static code analysis tool, so all of your team members know what's up.

Alternatively, if you're just writing something for yourself, just leave it as it is, knowing that if you'll ever need to publish your code for thousands of others to use, then you probably should go back to those now very visible places and review it.

So essentially:

  /** 
    * Attempts to load a Sprite from a file. You can then use the instance to display it on screen.
    * @param file This is the file that we want to load the image from. Use relative path to "res" directory.
    * Our engine loads PNG files and technically can also load GIF files because someone hacked that functionality together in an evening. 
    * That's kind of slow though, so we should use PNGs whenever possible. See ENGINE-33452 for more details.
    * @return A Sprite instance that you can pass to the rendering logic to put it on the screen, or alternatively process the loaded image in memory.
    */
  public Sprite loadSprite(@NotNull File file) throws SpriteGenericException, FileSystemGenericException {
    try {
      return FileSystemSpriteLoader.loadPNG(file);
    } catch (ImageWrongFormatException e) {
      wrongImageFormatLogger.warn("We found a " + e.getActualFormat() + " format file: " + file.getPath(), e); // the art team should have a look at this
      if (e.getActualFormat().equals(ImageFormats.GIF)) {
        return FileSystemSpriteLoader.loadGIF(file); // TODO unoptimized call because we needed GIFs for ENGINE-33452, remove later
      } else {
        throw new SpriteGenericException("We failed to load sprite from file: " + file.getPath() + " because of wrong format: " + e.getActualFormat(), e);
      }
    } catch (SpriteCorruptedException e) {
      brokenImageLogger.warn("We found a corrupted sprite in file: " + file.getPath(), e); // maybe the pipeline is broken again?
      throw new SpriteGenericException("We failed to load sprite from file: " + file.getPath() + " because of image corruption", e);
    } catch (Exception e) { // TODO ENGINE-44551 handle the file system access cases later once the API is stable and we know how it'll work on Android
      throw new FileSystemGenericException("We failed to load sprite from file: " + file.getPath(), e);
    }
  }
I prefer software blowing up in predictable ways as opposed to doing so unexpectedly. Even Java is vaguely close to being what i'm looking for; however, unchecked exceptions simply aren't acceptable from where i stand.
hypertele-Xii
If I had to write that kind of boilerplate every time I had an artistic inspiration, I'd never ship anything!

We are on far apart sides of a wide industry. I couldn't work productively in your dream language but hey, I'm happy we can have our different tools for our different needs. More power to us! :)

> let the IDE suggest to you all of the possible exceptions

So, programming without an IDE becomes untenable. I use a text editor. It feels like you're shifting language features into the IDE. What's the difference between the compiler doing it automatically vs the IDE doing it automatically?

KronisLV
I definitely agree that we're on the complete opposite ends of a wide spectrum of concerns and goals!

> So, programming without an IDE becomes untenable. I use a text editor. It feels like you're shifting language features into the IDE. What's the difference between the compiler doing it automatically vs the IDE doing it automatically?

I very much agree with this observation, but from the opposite side - for many development stacks and frameworks, working without an IDE feels like being a fish out of water, since there are numerous plugins, snippets and integrations that provide intelligent suggestions, auto-completions and warnings about things that are legal within the language but are viewed as an anti-pattern.

I'd say that the difference between the two is pretty simple, just a matter of abstraction layers. Something along the lines of:

  - the business people have certain abstract goals, which they can hopefully synthesize into change requests
  - the developer has to implement these features, by thinking about everything from the high level design, to the actual code
  - the IDE takes some of the load off from the developer's shoulders, by letting them think about the problem and offering them suggestions, hints and assistance of other sorts to help in translating the requirements into code; of course, it's also useful in refactoring and maintenance as well, letting them navigate the codebase freely
  - the language server, linter, code analysis tools, plugins, AI autocomplete and anything else that the developer should want hopefully integrate within the IDE and allow using them seamlessly, to make the whole experience more streamlined
  - the compiler mostly exists as a tool to get to executable artifacts, while at the same time serving as the last line of defense against nonsensical code or illegal constructs
In essence, the IDE gives you choices and help, whereas the compiler works at a lower level and makes sure that any code (regardless of whether written by the developer with an IDE, one with a text editor or an AI plugin) is valid. In practice, however, the parts that the IDE handles are always more pleasant because of the plethora of ways to interact with it, whereas the output of a compiler oftentimes must be enhanced with additional functionality to make it more useful (for example, clicking on output to navigate to the offending code).

In my eyes, the interesting bits are where static code analysis tools and linters fit into all of this, because i think that those should be similarly closely integrated within the ecosystem of a language, instead of being sought out separately, much like how RuboCop integrates with both Rails and JetBrains RubyMine. Our views may differ here once again, but i think that some sort of convergence of tooling and its concerns is inevitable and, as someone who uses many of the JetBrains tools (really pleasant commercial IDEs), i welcome it with open arms.

hypertele-Xii
Ohh, you could have dependency management built into the IDE (probably already do, I don't know). An integrated profiler could tell you how fast a function is as soon as you write it. I'm getting funny ideas.

What if the IDE worked with a distributed function database, rather than flat text files? Where you could browse (shop?) all the code written by others, by licence, performance, etc.?

Wonder if there are any programming streams/channels I could uh, spy IDE-based development from.

dthul
I don't quite follow. You always have to somehow handle the case the file does not load successfully. In exception languages that handling might be implicit (raise an exception and crash your program) and in "errors as values" languages you at least have to acknowledge that it could go wrong with something like `image.unwrap()` (which turns it into a program aborting panic).
hypertele-Xii
> You always have to somehow handle the case the file does not load successfully. In exception languages that handling might be implicit

I.e. you don't have to handle it.

Until you're polishing the program for a stable release, that is.

dthul
Right, in both approaches you can choose to handle the error by ignoring it and crashing. In "errors as values" languages you have to make that choice explicit by marking the line with `unwrap`. Saying that this requirement is "the single most unproductive mis-feature a language could have" is extreme hyperbole, no? Adding `unwrap`s during development to imitate implicit exceptions for fast prototyping takes no time or thought at all.

On the contrary, when you later want to polish your program for release these explicit markings make it very easy to find the points in your code where errors can occur and which you don't properly handle yet.

hypertele-Xii
Okay, if there's a simple way to mark some code as "compile this even if it's wrong", it's only a minor annoyance.

But the commenter I responded to seemed to me to be wishing for a language that explicitly disallows that. Maybe I misunderstood?

wvenable
One of my personal favorite examples of exception handling was a small GUI app with a single top-level exception handler at the event loop that displayed an error message and continued.

That application was extremely robust. You try and save a file and 100 different things could go wrong (network drive unavailable, file is read-only, etc) but it nicely recovered and you could see what the problem was, correct it, and re-save. One single exception handler for the whole app.

Buffer overflows are older than C.

One of the reasons for the decline of the British computer industry was that Tony Hoare, at one of the big companies (Elliott Brothers, later part of ICL), implemented Fortran by compiling it to Algol, and compiled the Algol with bounds checks. This would have been around 01965, according to his Turing Award lecture. They failed to win customers away from the IBM 7090 (according to https://www.infoq.com/presentations/Null-References-The-Bill...) because the customers' Fortran programs were all full of buffer overflows ("subscript errors", in Hoare's terminology) and so the pesky Algol runtime system was causing them to abort!

I've been following Nim for a while now (including back when it was called Nimrod), but the big reason I've never dug much more into it is because it repeats the Billion Dollar Mistake[1] of allowing values (yes, not all values, but important ones) to be nil without explicitly using Option types.

It's disappointing that Nim has not (perhaps cannot, for backwards compatibility) learned the same lesson here that most other modern languages have, and used explicit nilability embedded in the type system.

And to preempt the argument that "you can't, for performance reasons!", you could do the same thing as Rust does and explicitly opt-in to having your code break if something is nil, via a call like `.unwrap()` which the compiler may optimize away.

[1] https://www.infoq.com/presentations/Null-References-The-Bill...

rayman22201
This is currently being remedied as we speak:

https://github.com/nim-lang/RFCs/issues/250

https://github.com/nim-lang/Nim/pull/15287

planetis
> ... allowing values (yes, not all values, but important ones) to be nil ...

What!? What does that even mean? Value types can't be nil in nim-lang; they are always initialized, but you can use Option[T] when you need it.

If you mean reference types (`ref`) hopefully this PR will land soon https://github.com/nim-lang/Nim/pull/15287.

Conlectus
Reference types were indeed what I was referring to.
rezeroed
https://nim-lang.org/docs/options.html
cheriot
The real question is: how much of the standard library and broader ecosystem uses them? Scala has null values, but broad usage of Option and Either means I can often write code as if null didn't exist.
elcritch
Unfortunately, it'd be impossible to work effectively with C code without `nil`. You can create objects set to `not nil`. To be fair, I've had a few occasions when creating a new type where I forgot to allocate a new instance properly, but option types wouldn't have saved me any work. It would've produced a similar stacktrace. I think the compiler produces a warning, but I'm still working down the warnings list. You can see more discussion on defaulting to not nil here: https://github.com/nim-lang/Nim/issues/6638
Conlectus
Having all references that are returned from C be treated as Option types would resolve this difficulty, no?

Likewise, your type system can prevent you from using uninitialized values in other languages, without Option (and indeed the unwrap() call you imply you would have used) being needed.

Though yes, I'm glad to see this is being addressed :).

elcritch
> Having all references that are returned from C be treated as Option types would resolve this difficulty, no?

Perhaps, but there's more than just returns. Dealing with most C code there's a fair bit of pointer passing and manipulation. It's nice being able to deal directly with C code and not have to worry about the impedance mismatch, but still be able to move up the type system. It's more a pragmatic choice.

> Likewise, your type system can prevent you from using uninitialized values in other languages, without the need for Option (and indeed the unwrap() call you imply you would have used) from being needed.

In this case, it's mostly just me not using the type system well. :-) But that's the price I pay for using a flexible language I can readily use in embedded work. NPEs really don't cause me any headaches compared to most of the other aspects of integrating heavily with C code in embedded systems.

nine_k
All the convenience and all the unsafety of C code while interacting with C code? Why not just use C, or C++ if more expressive power is needed?
elcritch
Same reason as using unsafe Rust to interface with C/C++: to wrap the unsafe bits and make safer APIs ASAP. Nim's macros also make interactions with C APIs safer. To be clear, I'll be glad when Nim checks these things by default (the Z3 checker will be great for that!).
alehander42
I am the guy assigned to work on that: sorry for the delay, but it is being worked on.
Conlectus
Excellent news! I read the discussion that has since been linked elsewhere, and got the general impression that there wasn't going to be a change for backwards compatibility reasons. Glad to be proven wrong!
alehander42
the default might not be changed in 1.x, but this shouldn't make it less typesafe: access to nilable types would be checked.

there are also the z3-integration related checks which might even apply to index bounds or eventually other invariants, so this kind of safety is important for Araq and Nim https://nim-lang.org/docs/drnim.html

Tony Hoare famously called null references a "billion dollar mistake"[0]. Much preferred to checking return values is to use a language which models null references in the type system and doesn't compile code that would try to use a null value as if it weren't null.

Ok, fine, but for whatever reason, some of us don't have the option of working in such a language. What to do then? The author seems to think it's obvious that the answer is you check for null in any function that could possibly receive a null value. This means you have to check for null for every reference type in your system, which is madness.

I agree with the programmers who the author thinks "Do Not Get It". When you have a null reference bug, you fix the bug where the reference is generated, not where it's used. If you fix it at the place where it's used, you've barely fixed anything, because the place where it's generated will continue to send nulls to unsuspecting callers who will blow up when they assume it's non-null.

It sounds to me like the fix here is to write a validation routine for the configuration to ensure that it doesn't vend null values to the rest of the program when it shouldn't. The service can validate its config at load time and fail to start up if it's invalid. This will block your deployments, but your customers won't be impacted and your code won't be littered with null checks that fail to address the root cause.

[0] https://www.infoq.com/presentations/Null-References-The-Bill...

Jan 11, 2020 · 93 points, 150 comments · submitted by wheresvic3
LorenPechtel
Count me amongst those who do not think they're a mistake. You need to indicate no-data-here in some fashion. If you try to use that no-data in some fashion, having your program blow up from a null reference is a feature to me--in the vast majority of cases it's better to go boom than to silently continue doing something wrong. In the few cases where that's not true you can trap the exception and go on.

The real solution is what has been done with C# in recent years--have the compiler track whether a field can contain a null or not and squawk if you try to dereference something you haven't checked. That causes it to blow up in the best place--compile time rather than runtime.

littlecranky67
> You need to indicate no-data-here in some fashion.

Well, languages that have non-nullable types still allow you to do that - you just have to be explicit about it. In TypeScript (using --strict mode, which makes types non-nullable by default), you'd need to define the type as a union type and make use of the "null" type: `let a: string | null`. So you tell the compiler that either "null" or a string is a valid value for a. The compiler will assist you and catch possible bugs (such as dereferencing a without checking it against null first).

kroltan
Yes, which is where types like `Optional` come in. If you make a language where null doesn't exist by default, but still provide a standard way of indicating non-presence, you get the advantage of compile-time correctness checking.

Also, the compiler can still optimize the `(hasValue, value)` tuple into a possibly-0 pointer when the type of the value is a pointer. (which by the way, is exactly what Rust does, among others)

iCarrot
They are called Nullable types in C# and must be declared with `?` after the type.

But the Nullable<T>.HasValue check is not forced, and Nullable<T>.Value will instead throw a different exception (InvalidOperationException) if it is null.

to11mtm
Well, it depends on which type you are referring to (which is part of what pains me with nullable ref types, as nice as they are to have).

If it's a value type (T), ? will make it Nullable<T> and provide the behavior described.

Reference types, however, can always be null and do not have a .HasValue as exampled above. Newer versions of C# do let you declare nullable references at the compiler level, but rather than HasValue/Value you still have to do the null check, and can instead bypass it via the new null-forgiving operator (!).

LorenPechtel
Which is what I was talking about. I haven't had the opportunity to put it to use yet (converting an existing project is a big headache, it's something to do from the start) so I didn't remember the terms, only the ability.
paulddraper
No one thinks that non-existence can't be represented, and all but the most extreme proof languages have the possibility of runtime error.

Null is a mistake because it is (1) ubiquitously permitted in types and (2) non-composable.

(Point #2: This caused JavaScript to have "undefined" which is a second level of nonexistence)

The Maybe/Option pattern solves both these problems.

Nullable at least solves the first one.

When people criticize "null, the billion dollar mistake", they criticize the ubiquitous, non-composable form of null.

https://www.lucidchart.com/techblog/2015/08/31/the-worst-mis...

philwelch
I think it's perfectly fine to have option types, and that's exactly what languages without null references end up doing. What null references end up doing is accidentally turning all types into option types and making it impossible to have non-option types.
jayd16
This is simply inaccurate. Even Java has non-nullable primitive types.
x3ro
This comes up again and again in one form or the other, yet new languages still seem to be making the same mistake. Of all languages I've touched, Rust seems to be the only one that mostly circumvents this problem. Are there other good examples?
masklinn
> Rust seems to be the only one that mostly circumvents this problem. Are there other good examples?

Swift, Kotlin, and of course older languages of a functional bent like the MLs, Haskell, Idris, Scala, …

Some are also attempting to move away from nullable references (e.g. C#), though that is obviously a difficult task to perform without extremely severe disruptions.

ronanyeah
Elm is a great JS replacement on the frontend.
the_alchemist
Scala happily accepts null, as Null is the bottom type for AnyRef and is needed for JVM compatibility. Kotlin has a compiler check that enforces it; Scala does not.
gmartres
It's coming to Scala too: https://dotty.epfl.ch/docs/reference/other-new-features/expl...
rubyn00bie
I really love(d) Scala for introducing me to the whole idea of Optionals.

I wish for the life of me I felt like I could approach Scala at a time when it wasn't going through huge flux (I have shitty luck). I spent a good amount of time pre-version 2.10 :( and then recently went to have a look but saw Dotty (version 3.0?) coming by the end of 2020 and I was like "well, FML, time to wait a few more years and try again."

Anyone have any tips for using the Scala ecosystem effectively these days? Should I just wait for 3.0? Is it going to be a long winding road of breaking changes until a "3.11" version?

Is there a good resource for what folks are using it for these days? It seems like all the projects I used to know are ghostly on Github (but that could also be the fact it has been quite a few years, heh). Or do most folks just pony-up and use plain ol' Java libraries while writing their application/business logic in Scala?

progval
> Rust seems to be the only one that mostly circumvents this problem. Are there other good examples?

Rust is not the first one to have an Option type; it's a common feature of functional languages because they have ADTs ( https://en.wikipedia.org/wiki/Algebraic_data_type )

augusto2112
Functional programming languages have been doing it for ages. Most "newer" statically typed languages also have it (Swift, Kotlin, Rust) by default. And old languages had it bolted on (C# 8, Java 8, C++ 17).

I think at this point basically everyone has realized null by default is a terrible idea.

masklinn
> And old languages had it bolted on (C# 8, Java 8, C++ 17).

C#: actually true, you can switch over to non-nullable reference types

Java 8: meeeh, it provides an Optional but all references are still nullable, including references to Optional. There are also @Nullable and @NotNull annotations but they're also meh, plus some checkers handle them oddly[0]

C++17: you can deref an std::optional, it's completely legal, and it's UB if the optional is empty. Despite its name, std::optional is not a type-safety feature; its goal is not to provide for "nullable references" (that's a pointer), it's to provide a stack-allocated smart pointer (rather than having to allocate with unique_ptr, for instance).

[0] https://checkerframework.org/manual/#findbugs-nullable

deepaksurti
Swift with optional and optional chaining. [1]

[1] https://docs.swift.org/swift-book/LanguageGuide/OptionalChai...

hawkice
Haskell, notoriously. I believe it pioneered the ergonomics of the alternatives used elsewhere.
cmrdporcupine
AFAIK Standard ML predates Haskell and it has an option type.
dunefox
ML is even older than SML and has algebraic data types.
davidgay
The reason they come up again and again is that it's hard to design an imperative language without them (try, assuming you want to provide generic user-defined data structures that allow for cycles).

As a result, calling them a "mistake" is reasonably dishonest, as it implies there was an obvious, better alternative.

tene
Can you give some more details on what the design problem is here?

It seems to me that nullable references are isomorphic to having an option type with non-nullable references, but prevent accidental unchecked dereference. What are some of the difficulties that you'd expect to come up if you took an imperative language with nullable references and replaced them with options of non-nullable references?

davidgay
I don't consider 'option' types to have interesting semantic differences with nullable types. YMMV.

But beyond that, the absence of nullable references (really, a valid default value for every type) is a problem for record/object/struct initialisation - you either have to provide all values at allocation time, or attempt to statically check that the object is fully initialised before any use - Java has rules to that effect for 'final' fields, and they are both broken and annoying (less broken rules would likely just be more annoying).

tene
The difference is that you can't accidentally use an option as a pointer without checking it first, and when your APIs specify a non-nullable pointer you can rely on the callers to have checked for null.

When you're reading or writing a function that accepts a non-nullable reference, you never have to worry about whether the argument is null or not. It's easier to get right, constrains the scope of certain types of errors.

If you get things wrong, and unwrap the option without checking, you get an assert failure at that location, rather than potentially much later on when the pointer is used.

The whole point is that Option<&Foo> replaces nullable &Foo, so your record/object/struct member is Option<&Foo> and the default value for it is None. Option<&Foo> even has the same runtime representation as nullable &Foo, as Option<&Foo> uses NULL to represent None.

It's just a different way of representing nullable references, but with semantics that make it easier to track null-checked vs nullable references, impossible to accidentally get it wrong and derefence a nullable pointer you mistakenly assumed was already checked, and better errors when you do make mistakes.

scientific1
While I totally agree with everything you’re saying, I think they are right about it being annoying to initialize structs/records when all fields must be defined upfront. For one, it becomes harder to incrementally build a record in a generic way. And if you decide to make a bunch of fields optional, then that optionality is carried with it forever, long after it’s obvious that the data exists for that field. Those are legitimately annoying things to deal with.

To avoid that annoyance, you almost have to rethink the problem. You can’t do it the imperative way, at least not without all that pain. Instead, if you don’t yet have the data, you should simply assign the field with a function call or an expression which gets that data for you. In other words, the record initialization should be pushed to a higher level of the call graph. If you do that, then every record initialization is complete.

Other solutions are more language-specific. TypeScript has implicit structural typing, so incremental construction is pretty easy. You just can’t try to tell the compiler that it belongs to the type you’re constructing, unless it actually does include all the necessary data.

In OCaml, you can define constructor functions which take all the data as named parameters. Since function currying is part of the language, you can just partially apply that function to each new piece of data, as you incrementally accumulate it. Then you finally initialize the record when the function is fully applied.

Suffice it to say that there are plenty of solutions to this problem.
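A minimal Rust sketch of that "push the initialization up the call graph" idea (the `Profile` type and loader functions are invented):

    struct Profile {
        name: String,
        age: u32,
    }

    // Stand-ins for whatever actually fetches the data.
    fn load_name() -> String { "Ada".to_string() }
    fn load_age() -> u32 { 36 }

    // Gather every piece first, then build the record in one step, so no field
    // is ever half-initialized or left permanently optional.
    fn load_profile() -> Profile {
        Profile { name: load_name(), age: load_age() }
    }

    fn main() {
        let p = load_profile();
        println!("{} is {}", p.name, p.age);
    }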

gameswithgo
Rust, F#, OCaml, Zig; the latest version of C# has an option to sort of get rid of nulls.
ronanyeah
Elm is a great JS replacement on the frontend.
pkulak
Kotlin
jrockway
I assume two reasons, efficiency and because an efficient implementation of mutable state would have the same problem.

Right now, a single sentinel value makes a pointer null or not null (0x0 is null, everything else is not null). This is exactly how you'd implement a stricter type, like "Maybe". Encoded as a 64-bit integer, "Nothing" would be represented as 0x00000000 and "Just foo" would be represented as 0xfoo. No object may be stored at the sentinel value, 0x00000000. Exactly the same as what we have now, and provides no assurances that 0xfoo is actually a valid object.

Meanwhile, Haskell which "doesn't have null" crashes for exactly the same reason your non-Haskell program crashes with a null pointer exception:

    f :: Num a => Maybe a -> Maybe a
    f (Just x) = Just (x + 41)
This blows up at runtime when you call f Nothing, because f Nothing is defined as "bottom", which crashes the program when evaluated.

It's exactly the same as languages with null pointers:

    func f(x *int) *int {
        result := *x + 41
        return &result
    }
And the solution is the same, your linter or whatever has to tell you "hey maybe you should implement the Nothing case" or "hey maybe you should check the null pointer".

Where I'm going with this is that you need to develop entirely new datatypes and have an even stricter type system than Haskell. Maybe Rust is doing this, but it's hard. We all know null is a problem, but calling null something else doesn't make the problems go away.

dunefox
That's not the same thing as a null pointer, because Nothing isn't allowed in place of e.g. integers, strings, etc. like in Java. What you're doing is defining a non-total function. Haskell, by default, doesn't perform exhaustiveness checks when pattern matching, but you can enable that via a compiler flag - then it won't let you compile your example. OCaml, for example, does that by default.
tibbe
This misses the point. The point is not that you can forget to check the null case. The point is that you can express that sometimes there's no null case.
jrockway
The "no null" case in traditional languages is just "int" instead of "*int". All values inside an "int" are valid integers.

Certainly it's problematic to use the same language primitive to mean "a pointer" and "this might be empty", but it's what people use them for in every language that has pointers (that I've used anyway).

pornel
That conflates the pass-by-value/pass-by-reference distinction with being optional.

This means you need magic values for "maybe int" with no help from the type system. And you can't express "there's definitely an int at that address".

verttii
Personally I can't even get this non-total function to compile on Haskell:

  * No instance for (Num a) arising from a use of `+'
I don't think it's fair to claim with this example that Haskell suffers from the same null pointer exception as something like Java does.
anderskaseorg
> It's exactly the same as langages with null pointers:

Four huge differences:

1. You don’t need to pass around ‘Maybe a’ everywhere. If null isn’t expected as a possible value (which usually it isn’t), you just pass around ‘a’, and when you do use ‘Maybe’ it actually means something.

2. The Haskell compiler can, and does (with -Wall), tell you that your pattern match is non-exhaustive. You don’t need a separate “linter or whatever”. This is possible because the needed information is present in the type system, and doesn’t need to be recovered with a complicated and incomplete static analysis pass.

3. If you do this anyway, the error is thrown at exactly the point where ‘Maybe a’ is pattern-matched, not at some random point several function calls later where your null has already been coerced into an ‘a’.

4. This program is defined to throw an error; it’s not undefined behavior like in C that could result in something weird and unpredictable happening later (or earlier!).

Also, Rust optimizes away the tag bit of ‘Option’ under common circumstances; for example, ‘None: Option<&T>’ (an optional reference to ‘T’) is represented internally as just a null pointer, which is safe because ‘&T’ cannot be null.
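A quick way to check that layout guarantee (a small hypothetical snippet; the pointer-size equality is guaranteed, while the integer inequality assumes the usual padding on common targets):

    use std::mem::size_of;

    fn main() {
        // Option<&T> needs no extra tag: &T can never be null, so None reuses
        // the all-zero bit pattern (the "niche" optimization).
        assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());

        // A plain integer has no spare bit pattern, so here the tag costs space.
        assert!(size_of::<Option<u64>>() > size_of::<u64>());
    }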

jrockway
> You don’t need to pass around ‘Maybe a’ everywhere.

You don't need to pass pointers around everywhere. Languages with null still have value types that cannot be null.

> You don’t need a separate “linter or whatever”.

Optional compiler flags count as "whatever" to me.

> it’s not undefined behavior like in C that could result in something weird and unpredictable happening later (or earlier!)

C++ doesn't define this, but the OS does (and even has help from the CPU).

Anyway, my TL;DR is that it's easy to have a slow program that passes everything by value, or easy to have a fast program that uses pointers or references. Removing the special case of null is meaningless, because you can still have a pointer to 0x1 which is just as bad as 0x0, probably. This goes back to my original answer to the question "why don't more languages get rid of null", which was "it's harder than it looks." I think I'm right about that. If it were easy, everyone would be doing it.

rowanG077
Oh boy... `-Wall` is "whatever". Please let me never look at code you have written...
temac
> Languages with null still have value types that cannot be null.

Not all languages.

> C++ doesn't define this, but the OS does (and even has help from the CPU).

That's not how it works anymore, because C/C++ front-ends interacting with the optimizers yield overly "optimized" results. See the classic https://t.co/mGmNEQidBT

zozbot234
> Rust seems to be the only one that mostly circumvents this problem.

The Rust hype is getting ridiculous here. There are plenty of languages with non-nullable references as first-class, and optionals for the nullable case.

(...And I say this as a Rust fan myself, for what it's worth.)

samatman
This comment would be much improved with a list of those languages.

Kotlin and Swift come to mind, what are others?

_bxg1
TypeScript/Flow
wvenable
PHP 7
kragen
If we're limiting ourselves only to new languages, then nulls are statically excluded not only by Kotlin and Apple’s imitation of it, Swift, but also by F#, Agda, Idris, Elm, and (sort of) Scala. But the zozbot didn't seem to be talking only about new languages, so Haskell, Miranda, Clean, ML, SML, Caml, Caml-Light, and OCaml are also fair game. (It wouldn't be hard to list another dozen in that vein.) Moreover I think you could sort of make a case for languages like Prolog and Aardappel where you don't have a static type system at all, much less one that could potentially rule out nils, but in which the consequences of an unexpected nil can be much less severe than in traditional imperative and functional languages like Java, Lua, Python, Clojure, Smalltalk, or Erlang, which more or less need to crash or viralize the nil in those cases.
samatman
Good list.

I've found the consequences of a nil type are less severe in dynamic languages, where all variables have the Any type, since nil is just one of the options one needs to account for.

Static languages where everything is nullable are reneging on the promise; you say something is a String but that just means Option<String>, and it saps a lot of the reasoning power which static typing should give.

nielsbot
`let s:String` is not the same as `let s:String?` in Swift, at least. (Nor in TypeScript)
kragen
Right, that's why Swift is in the lists above. TypeScript would've been a good addition, I just didn't think of it.
kragen
Concur.
philwelch
Even in dynamic languages, the consequences can be pretty bad. For example, I've seen lots of Ruby bugs where things end up being unexpectedly `nil`, but I haven't seen as many Python bugs where things end up being unexpectedly `None`.

How does this happen? Well, in Ruby it's a lot more normal to just return nil. For example, consider the following code snippet:

  [][0]
In both languages you are trying to examine the zeroth element of an empty array (or list, as Python calls it). In Ruby this evaluates to nil. Python throws an IndexError. So in Python, if you have a bug where you address an array with an invalid index, it manifests as an error in how you're indexing the list. Ruby silently returns nil, and the only actual error backtraces you see are when you actually try to call a method on this nil later on, which might not be anywhere near where your program messed up the array indexing.
perl4ever
That seems (although I don't have experience with either language) like straightforwardly correct treatment in Python and wrong in Ruby, and the problem seemingly should be attributed to the [lack of] range checking, not to nil/null.
philwelch
Sure; in this case, range checking provides an alternative behavior to generating a null reference. But Ruby has other places where it generates null references more promiscuously than Python. Java does too. If you take every use case that could generate a null reference and instead behave differently in that situation you’ve eliminated null references, and Python has largely done so despite having a None type.
amelius
Imho, Rust is an awkward language because it positions itself as a systems language but it makes low-level stuff more difficult (there's even a book teaching how to implement doubly linked lists in Rust [1]), hence prone to mistakes. At the same time, people are using Rust to build non-systems programs, where other languages would be more appropriate (e.g. those with garbage collectors). I don't think it is a good idea that Rust is promoted as the language that will rule them all; in my opinion, it is still a research language.

Linus Torvalds said the following about Rust [2]:

[What do you think of the projects currently underway to develop OS kernels in languages like Rust (touted for having built-in safeties that C does not)?]

> That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.

> I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.

[1] https://rust-unofficial.github.io/too-many-lists/

[2] https://www.infoworld.com/article/3109150/linux-at-25-linus-...

mhh__
Rust is a force for good but I think Andrei Alexandrescu was right when he said Rust feels like it "skipped leg day" (in the sense that it has its party piece and not much else) - from the perspective of the arch metaprogrammer himself at least.

Rust is obviously good for safety, but for everything else (to me at least) it seems unidiomatic and ugly; admittedly I've never really sunk my teeth into it (I've read a fair amount into the theory behind the safety features but never done a proper project)

zozbot234
Rust has procedural macros. What else do you need for metaprogramming?
heavenlyblue
>> unidiomatic and ugly

I am not sure I would take anyone seriously who thinks this is a valid point to make about a programming language.

mhh__
That was my personal opinion, unrelated to the first paragraph.
Stratoscope
> ...Andrei Alexandrescu was right when he said Rust feels like it "skipped leg day" (in the sense that it has its party piece and not much else) - from the perspective of the arch metaprogrammer himself at least.

Mind if I ask what that means? It seems like an interesting observation, but there are a couple of bits of terminology I don’t understand, like "leg day" and "party piece".

Any clarification would be appreciated, thanks!

amelius
Perhaps this link would explain it more clearly: [1].

Leg-day is bodybuilding terminology, and refers to the day of the week when the bodybuilder is supposed to be training the leg muscles. According to the meme, nobody wants to train the legs because they show the least.

[1] https://www.jonathanturner.org/2016/01/rust-and-blub-paradox...

Stratoscope
Thank you, that is very informative!

> nobody wants to train the legs because they show the least.

It reminds me of the story about how Google drops products because no one wants to maintain an existing product. That would not show "impact", not like launching a new product would, and impact is how you get raises and promotions.

zozbot234
> there's even a book teaching how to implement doubly linked lists in Rust [1]

Doubly-linked lists are an awkward example because the "safety" of a doubly-linked list as a data structure involves fairly complex invariants that Rust can't even keep track of at this point, much less check independently. These things are exactly why the unsafe{} escape-hatch exists and is actively supported. But just looking at the amount of unsafe code in common Rust projects should suffice to figure out that this is not the common case, at all.
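For a flavour of the awkwardness, here is a minimal safe-Rust sketch of two doubly-linked nodes (a made-up example; the standard library's LinkedList instead reaches for unsafe raw pointers internally):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // Shared ownership forward, weak pointers backward to avoid reference cycles:
    // the aliasing a doubly-linked list needs gets pushed into runtime-checked
    // types instead of being expressed directly.
    struct Node {
        value: i32,
        next: Option<Rc<RefCell<Node>>>,
        prev: Option<Weak<RefCell<Node>>>,
    }

    fn main() {
        let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
        let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

        // Wiring up just two nodes already requires interior mutability in both directions.
        first.borrow_mut().next = Some(Rc::clone(&second));
        second.borrow_mut().prev = Some(Rc::downgrade(&first));

        // Forward and backward traversal both go through Option plus a runtime borrow;
        // walking backwards also means upgrading the weak reference.
        assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
        let back = second.borrow().prev.as_ref().and_then(|w| w.upgrade()).unwrap();
        assert_eq!(back.borrow().value, 1);
    }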

> At the same time, people are using Rust to build non-systems programs, where other languages would be more appropriate (e.g. those with garbage collectors).

Garbage collectors are good for one thing, and one thing only: keeping track of complex, spaghetti-like reference graphs where cycles, etc. can arise, perhaps even as a side effect of, say, implementing some concurrency-related pattern. Everything else is most likely better dealt with by a Rust-like system with optional support for reference counted data.

That's without even mentioning the other advantages that a Rust-like ownership system provides over a GC-only language. See e.g. this recent post for some nice examples: https://llogiq.github.io/2020/01/10/rustvsgc.html

empath75
I'm a rust fan, but garbage collection is about almost entirely removing the need for the developer to think about memory management, not about performance.
bitwize
Value and move semantics do the same thing and work in 90% of the cases a GC does. For the remainder there's ARC.
tsimionescu
No, having to think about value and move semantics is extra overhead you take on. It's better when the compiler can help you catch this, like in Rust, but it still forces you to structure your program a certain way and to constantly think about incidental details like ownership.
empath75
If you think rust eliminates the need to think about it in the way that python does, I don’t know what to tell you.
bluejekyll
You might be reading too much into that comment.

Python allows you to do something with memory that Rust has made it a priority to be more concerned with, and that’s sharing.

C also has easily shared memory, much like Python. Point being that Rust wants to make sure that your references are safe to share, whereas Python wants you to share as much as possible and makes it safe by not allowing multiple threads to interact with it.

These are different trade-offs, but Rust does allow you to forget about memory management in the same way Python does; it just forces you to think about how it's being shared.

That’s the added cost over Python and the extra thought that goes into using the language.

perl4ever
I haven't done the sort of programming recently where it matters, but when I read the debates on GC on HN, I think: why not a language where there is a GC, but it is "cooperatively scheduled" - you explicitly invoke it for a fixed amount of time? Wouldn't that be the best of both worlds?
jayd16
GCs are also good at compacting heaps.
zozbot234
Perhaps, but at pretty severe cost. Your heap must be structured in a way that the tracing routine can make sense of (and the consequences of this involve considerable waste and inefficiency in practice - lots and lots of gratuitous pointer chasing), and the compacting step itself involves a lot of traffic on the memory bus that trashes your caches and hogs precious memory bandwidth.

Forget it. Obligate GC is a terrible idea unless you really, really, really know what you're doing.

amelius
> Doubly-linked lists are an awkward example because the "safety" of a doubly-linked list as a data structure involves fairly complex invariants that Rust can't even keep track of at this point.

Perhaps it's just me, but I'd like to assume that my language does not treat any algorithm found in a basic algorithms course (e.g. Sedgewick) as awkward.

elcritch
How many times have you used doubly linked lists outside a CS101 programming exercise? Even if you did, it'd usually be trivial to implement an array-index version or just use unsafe. Basically it seems you give up almost nothing for memory safety.
dpbriggs
For sure, you can avoid awkwardness by not statically verifying memory usage & invariants (c++, etc) or using a GC'd language. Rust's ownership and borrowing rules are limited, but simple enough for someone to internalize them quickly.

There's a pretty vast difference between human simple and computer simple. Rust requires that you prove memory safety, or use unsafe. That's a different problem than just informally ensuring invariants are met.

You could probably pull in more advanced type theory research for more nuanced ownership, but I'd bet the language would be harder to understand overall (Haskell disease).

jayd16
Eh, I have no problem with the idea that data structures that work in C are awkward in different paradigms. Many data structures are awkward in functional programming. Lots of things are awkward in C that are easier in other languages.
seventh-chord
"Making everything a reference: The Billion Dollar Mistake" is the talk I want to see
kragen
You may be interested in http://canonical.org/~kragen/memory-models then. I don't think it's necessarily a mistake but it's definitely taken for granted far too much.
gameswithgo
there are a few completely different ways to interpret this, can you explain?
seventh-chord
in languages like C, Rust, or Go, where you can put arbitrary data on the stack, it seems to me as if such issues are less common because you don't have to worry about initializing pointers and allocating memory unless you actually want to put something on the heap. Thus if you make everything a reference in your language, it's no wonder you run into issues like null pointers more often
DaiPlusPlus
With stack allocation you then encounter problems with object lifetime. Rust solves this problem by binding references to scope, and Go solves it by invisibly changing an allocation to a heap allocation (and uses ref-counting? I think?).

I wish C had a feature that would let you allocate something on the stack and then return to the parent stack frame without popping the stack-pointer - that would be handy for self-contained object-constructors.

zozbot234
Go uses fully-general GC, not reference counting. Obligate reference counting is used in other languages such as Swift, probably with worse throughput than obligate tracing GC.
jayd16
Impossible because you'd have other things on the stack you did want to pop but I get what you mean. Instead what happens is the parent allocates space on the stack and then calls a method to fill it... but, you know...that's just normal stack allocation.

Maybe you could just write over the stack at the end of the function when you don't need alloca...wait no...that's a return value.

cm2187
Plus also doesn’t the stack need to be small to fit into the CPU cache?
seventh-chord
There is absolutely no requirement from the hardware that the stack be any particular size
kragen
If you start allocating multi-kilobyte objects in your stack frames, they are not going to fit into L1.
cm2187
I am not thinking of a hardware requirement, but rather of performance.
DaiPlusPlus
On the Windows desktop, the default stack size is 1MB. In IIS-hosted applications the default stack size is reduced to 250KB due to the popularity of the now-outdated programming trope of "one thread per request (per-connection)". On x86 Linux the default stack size is 2MB - which seems generous.
jayd16
Isn't that just how return values work or did I woosh a joke?
giulianob
Regardless of whether it's on the stack or heap the point still stands. If all your objects are randomly allocated then an array is just references to those objects and will start out null. If you're using value types then your array of objects will never be null (empty instead) and you will benefit from CPU caching the data.
tsimionescu
Fortunately, objects in a modern GC are almost guaranteed to be laid out sequentially in memory if they are allocated sequentially, or if they are referenced sequentially and a GC pass has run, because of the way copying GCs work. The much bigger problem is the memory and CPU overhead of storing pointers and following them, though that should be mitigated somewhat by the prefetcher.
dorfsmay
Everything in Python is a reference, and there's no null pointer issues.
auxym
I've certainly had some "None" errors in Python.

I think the difference comes from dynamic vs static typing. In Python, you sort of get into the habit of "defensive" programming: checking inputs to your function, catching Nones, etc.

In java, you tend to rely more on the type system. If it typechecks/compiles, there's a good chance it's OK. That is, until you get a null value that's not handled.

That's the root issue I think: If null is an acceptable value per the type, then the same type system should force you to handle it. As do the type systems in ML languages for option types, for example.

JakobProgsch
The first line is why I'm not 100% convinced of the severity of this mistake compared to the alternatives. The fundamental problem is the use of magic values/numbers to represent the concept of "no value". You don't need explicit language support to have that concept and the bugs it causes. I guess having it as an intrinsic concept in the language makes it more likely that people use it badly. On the other hand, debuggers etc. also intrinsically understand it, and segfaults due to null pointers are usually very easy to localize once you see them. By contrast, if a "bad programmer" introduced their own magic non-value in a supposedly safe language, debugging that becomes way more confusing.
andrepd
No, that's not the "fundamental problem". The fundamental problem is a type system that lies. A "pointer to string" is not actually a pointer to a string; it's a pointer to a string or to nothing. If your API returns a pointer of the latter type, it should signal this by making the return type "maybe-pointer to string" (although it has the same memory representation as "pointer to string"). Then, if the user tries to dereference a maybe-pointer (that is, to use a maybe-pointer as a pointer), the type system can statically catch this and reject it with a simple type error at compile time. The user must first check whether it's null through a function that casts a maybe-pointer to a pointer.

Nothing about this precludes the usage of sentinel values.

brudgers
Everything in Python is an object. In Python, containers are objects that reference other objects.

https://docs.python.org/3/reference/datamodel.html

https://docs.python.org/2.0/ref/objects.html

dorfsmay
How are name bindings different than references?

    >>> a=[2, 3, 1]
    >>> b=a
    >>> id(a)
    139731931982216
    >>> id(b)
    139731931982216
    >>> b
    [2, 3, 1]
    >>> b.sort()
    >>> del(b)
    >>> a
    [1, 2, 3]
    >>>
fulafel
Depends what you mean by reference. In Python they are indeed synonymous. C++ references are different in that the referenced object itself can be modified through the reference, e.g. when you pass a reference to a function.
brudgers
In Python, lists are container objects. Container objects reference other objects. In the first line, object a references objects 2, 3, and 1 (and any other objects in a's object heritage).

2, 3 and 1 have id's. That's what "everything is an object" fleshes out to in Python. But they don't reference other objects because they are literals. The value of 2 is also its name.

kragen
If you've never seen this message, you haven't been programming in Python for very long:

    AttributeError: 'NoneType' object has no attribute 'foo'
Not to mention UnboundLocalError and cases of AttributeError that stem from trying to use attributes before they've been initialized. Some of these have slightly better ergonomics than Java’s pernicious null initialization, for example by crashing your program earlier, but the upshot is that everything you do in Java that will crash with a NullPointerException will also still crash your program in Python.

Oh, I guess except for shadowing an outer-scope variable with a local that you never initialize. That just gives you the wrong answer in Python, because there exists no local without an initialization. But it's a pretty marginal case.

x3ro
Can you elaborate? I can't remember the last time I thought "oh darn it why is this a reference", but I can think of a billion problems I've had with nulls in jvm languages
emsy
Poor cache locality, which will cost us more and more performance as CPUs improve more slowly in the future.
decafbad
C.A.R. Hoare couldn't foresee the consequences 55 years ago. That's a small mistake. We should blame the language designers who didn't bother to handle the problem after it had become obvious.
littlecranky67
Lots of mainstream languages nowadays support non-nullable types, e.g. TypeScript and C# (taken from F#).
decafbad
I like Kotlin's approach most but I'm just saying we should stop accusing Hoare.
rzwitserloot
This old chestnut again.

There is an inherent problem in designing processes and writing code to capture them: The notion of not-a-value.

There are a great many ways to solve it. The most common ones are 'null' and 'Optional[T]'. Neither just makes the problem magically go away. If a process is designed (or a programmer writes it) thinking that 'ah, well, here, not-a-value cannot happen', but it can, then... you have a bug.

Some language features might make it possible to help reduce how often it occurs, but eliminate it? I don't think so.

Imagine, for example, in an Optional-based language, that you just map the optional through a lambda, and the behaviour of the optional is then to simply, silently do nothing if it's Optional.none. That'd be a much harder-to-find bug than a null pointer error. (Errors with stack traces pointing at the problem are obviously vastly superior to mysterious do-nothing behaviour with no logs or traces of any sort!)
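A minimal Rust sketch of that hazard (the notify/email names are invented):

    fn notify(user_email: Option<&str>) {
        // Mapping a side-effecting closure over the Option: when it's None, this
        // line silently does nothing -- no panic, no log, no stack trace.
        let _ = user_email.map(|addr| println!("sending mail to {addr}"));
    }

    fn main() {
        notify(Some("a@example.com")); // prints
        notify(None);                  // nothing happens, and nothing says so
    }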

Some other creative solutions:

* [Pony](https://www.ponylang.io/) tries to be very careful about registering when an object is 'valid' and when it isn't, and when you write code, you have to say which state the objects you interact with can be in. This lets you avoid a lot of the issues... but pony is quite experimental.

* In Java you can annotate any usage of a type with nullity info, and then compiler linter tools will simply tell you that you have failed to take into account a potential null value. You are then free to ignore these warnings if you're just writing test code, or know better. This avoids clogging up the works with Optional, but as the Java ecosystem shows, you can't just snap your fingers and make 30 years of massive community effort instantly be festooned with 'might-not-hold-a-value' style information. At least the annotation style gives the hope of being backwards compatible (to be clear, Optional, for Java? Really bad idea).

* in ObjC, if you send a message to a null pointer, it silently does nothing, in contrast to virtually all other languages with null types where attempting to message a null ref causes an error or even a core dump.

* Just write better APIs. Have objects that represent blank state (empty strings, empty collections, perhaps dummy streams which provide no bytes / elements, etc). For example, in java: Java's map (a dictionary implementation) has the `.get(key)` method which returns the value associated with that key, and returns `null` if there is no such value. About 6 years ago another method was added in a backwards compatible fashion (so, all java map implementations got this automatically): `getOrDefault(key, defaultValue)`. This one returns the provided default value if key isn't in the map. You'd think optionals provide a general mechanism for this, but, in scala, you have both: There's `someMap get(key)` which returns an optional, so to get the 'give me a default value' behaviour, that'd be `someMap.get(key).getOrElse(defaultValue)`, but maps in scala also have the java shortcut: `someMap.getOrElse(key, defaultValue)`. Sufficient thought in your APIs mostly obviates the issues.
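For comparison, the same default-value shortcut spelled in Rust (a rough sketch; the map contents are made up):

    use std::collections::HashMap;

    fn main() {
        let scores: HashMap<&str, i32> = HashMap::from([("alice", 3)]);

        // get() returns an Option, and the caller says what "missing" means --
        // the moral equivalent of getOrDefault / getOrElse.
        let alice = scores.get("alice").copied().unwrap_or(0);
        let bob = scores.get("bob").copied().unwrap_or(0);

        assert_eq!((alice, bob), (3, 0));
    }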

null is not a million dollar mistake. It is a solution to an intrinsic problem, with advantages and disadvantages over other solutions.

melling
I remember tracking down the null silent message failure issue in the early 1990s on NextStep. Then again almost 2 decades later on the iPhone. Personally, I’m not a fan of silent failures.

IMHO, allowing for non-nullable variables is a huge improvement in language design. Adding boilerplate annotations is an ugly way to handle it. Optimize for the common case and make variables non-nullable by default.

strictfp
In my experience, forbidding null refs usually only results in getting null objects instead, such as empty strings or empty collections. That's not bad, but it might not solve such a silent error: you'll still end up with an empty message in the end. To solve it properly, I'm thinking you might want to go further and have more expressive type constraints, like Ada subranges.
temac
The mistake is being nullable/optional by "default", that is, with the least amount of effort for programmers using such a language. Or worse, only ever nullable (like Java is, except for its built-in scalars, I think?).

There is obviously a need for optional things, but this is not the common case, so it should not be the default, and even less the only solution. And it should enforce handling the absent case.

"null" is a shortcut for talking about solution which does nothing of that (and is even UB in case of mistake in some languages). Billion Dollar Mistake is generously low; probably the cost is already Multi-Billion Dollar, and counting.

kerkeslager
> There are great many ways to solve them. The most common ones are 'null' and 'Optional[T]'. Neither just makes the problem magically go away. If a process is designed (or a programmer writes it) thinking that 'ah, well, here, not-a-value cannot happen', but it can, then.. you have a bug.

> Some language features might make it possible to help reduce how often it occurs, but eliminate it? I don't think so.

On the contrary, you can 100% eliminate it by forcing null handling at compile time with your `Optional` type. Haskell and some other strongly-typed languages do this (but they call it Maybe).

The way to do this in a C-syntax-ish language would look something like this:

    Optional<int> increment(Optional<int> i) {
        // return i + 1; would throw an error at compile time,
        // because Optional doesn't implement the + operator

        // i.applyToValue would throw an error at compile time
        // if you didn't handle both possible cases
        return i.applyToValue(
            ifNull: (void) => { return new Optional<int>(null); },
            ifValue: (int v) => { return new Optional<int>(v + 1); }
        );
    }
This is syntactically a bit heavy, partly because I was a bit more verbose than a real implementation would need to be, for clarity, and partly because C-style syntax doesn't do this well. Languages that support this generally have some syntactic sugar to make it a bit more terse.
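For instance, a rough Rust counterpart of the sketch above (not the author's code, just one possible spelling) collapses to a one-liner with `map`:

    // map only runs the closure when a value is present; None passes through unchanged.
    fn increment(i: Option<i32>) -> Option<i32> {
        i.map(|v| v + 1)
    }

    fn main() {
        assert_eq!(increment(Some(41)), Some(42));
        assert_eq!(increment(None), None);
    }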

I've argued before on HN that the benefits of strong static typing are overstated, but this is a case where strong static types really do completely eliminate an entire category of errors. Given how common these errors are, not using stronger types in this situation for popular languages has absolutely been a billion dollar mistake.

masklinn
> Haskell and some other strongly-typed languages do this (but they call it Maybe).

Most call it either Option or Optional FWIW. `Maybe` is the term used by Haskell and its derivatives (like Idris or Elm).

tsimionescu
The problem GP was raising is on the other half of Optional: when you do finally need an int.

Say, you want to do

    arr[*increment(maybeInt)] //error: can't dereference Optional<int>
Now, if you as a programmer don't think increment(maybeInt) can actually return None in your particular case, you will probably do the minimal work to convince the compiler to let it go, say

    matchOptional(
        increment(maybeInt),
        someInt => { return arr[someInt]; },
        () => { /* never happens */ return 0; })
(using a slightly simpler notation that your version, since I'm on mobile)

Now, if you were wrong and you do get a Nothing, instead of seeing a nice stack trace, you have an absurd 0 propagating forward. You could improve this using an assert(), but that is what managed languages already do with NullPointerException & friends.
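In Rust terms, that assert-style improvement is roughly what `expect` gives you (a hypothetical sketch, not code from the thread):

    fn increment(i: Option<i32>) -> Option<i32> {
        i.map(|v| v + 1)
    }

    fn lookup(arr: &[i32], maybe_int: Option<i32>) -> i32 {
        // If the "can't happen" None does happen, this panics right here, with a
        // message and backtrace pointing at the broken assumption, instead of
        // quietly indexing with a made-up 0 somewhere downstream.
        let idx = increment(maybe_int).expect("index was unexpectedly missing") as usize;
        arr[idx]
    }

    fn main() {
        let arr = [10, 20, 30];
        assert_eq!(lookup(&arr, Some(1)), 30);
        // lookup(&arr, None) would panic at the expect() above.
    }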

The more interesting thing to show is all the code that does not deal with optional and that is now magically free of any possibility of null errors. But the programmer is still responsible for correctly treating the moment they need to go from optional values to non-optional, and here Optional is essentially just more in-your-face than null (which is valuable, don't get me wrong).

The only example I know of where a language feature truly completely eliminates a category of errors is managed memory, which does not replace memory errors with any other more-or-less equivalent error case.

I personally very much doubt NULL is a significant source of errors in managed memory languages. It's not nothing, but they are some of the easiest bugs to track down.

int_19h
This is a corner case, though, and one that is itself a code smell (i.e. in well-written code, it should be very rare). Having implicit null references, and implicit null checks on dereference, optimized for a rare corner case to the detriment of safety in typical code patterns, is a bad thing.

And it definitely is a significant source of errors in managed memory languages, from my experience in C# and Python. It can also be pretty tricky to track down, when the code producing the null happens to run long before the code dereferencing it.

tsimionescu
> Having implicit null references, and implicit null checks on dereference, optimized for rare a corner case to the detriment of safety in typical code patterns, is a bad thing.

Yes, I completely agree. I was just trying to point out that Optional is not a 100% air-tight solution, I think the problem of handling missing values is just too general to actually have a 100% solution.

Still, the perfect shouldn't be the enemy of the good, and default non-nullable definitely helps in most cases.

> It can also be pretty tricky to track down, when the code producing the null happens to run long before the code dereferencing it.

Here I don't agree. If the code producing the null is the problem, then you would have the same problem with Optional. Optional helps when code consuming the null forgot to handle the null case. If you're writing C# or Python and you get an NPE, and null was actually a valid value, you don't need to track down the source of the null, you just need to handle the null case in the exact case where the exception occurred (and possibly further downstream).

int_19h
> Here I don't agree. If the code producing the null is the problem, then you would have the same problem with Optional.

The crucial difference is that most reference-typed variables wouldn't be optionals in a language where references can't be null. So in practice you get rid of a lot of problems, because the type checker catches the use of null where it's simply not a valid input. In C# and Python, because every reference is potentially nullable, you have to aggressively check at every boundary where your contract is that it's not actually null. If you ever forget, and your caller passes null, then you end up with this "how did this get there?" problem.

Conversely, with optionals, you also have to handle the null case if you're at the boundary, because past the boundary you'd just use a non-optional type to propagate that value further. With implicit nulls, the boundary is entirely in your head - the language won't do anything to help you enforce it.

kerkeslager
If it really can't ever be null, then it should just be an int, not an Optional<int>. The entire reason that it is an Optional<int> is that it CAN be null.

In this hypothetical language, not initializing an int is a compiler error, assigning null to an int is a compiler error, etc. If it's an int it literally cannot be null.

What ends up happening in practice is that the null is handled close to where it's created, and the rest of the code passes around an int you know isn't null, because people don't feel like passing around an Optional<int> and being forced to check it everywhere.

Sure, you can intentionally write code that does the wrong thing in any language, but the "return 0;" would be a very obvious error in even cursory code review.

tsimionescu
There are normal cases where this can happen. For example, a map should normally return an Optional<ValueType> when you try to retrieve a key's association. However, there may be special cases where you know that a key is present (maybe it is a constant map, maybe you just set the value of that key etc).

I do agree that these cases are much rarer than the cases where a value is either always there, or the cases where a value really can be missing. I was only pointing out that Optional doesn't eliminate 100% of null errors, just 99.9% of them.

kerkeslager
> For example, a map should normally return an Optional<ValueType> when you try to retrieve a key's association. However, there may be special cases where you know that a key is present (maybe it is a constant map, maybe you just set the value of that key etc).

Both of these cases are still pretty big code smells.

1. Just don't use constant maps. Instead of doing this:

    const myConfig = new Map<string, int> {
        "height": 72,
        "weight": 160,
    };

    [...]

    var height = myConfig["height"].matchOptional(
        ifNull: () => 0,
        ifValue: i => i,
    );
...use a constant structure (with an anonymous type):

    const myConfig = struct {
        height: 72,
        weight: 160
    };

    var weight = myConfig.weight;
You can verify whether height or weight are null at compile time this way[1].

2. If you just set the value of the key, instead of doing this:

    dictionary[word] = getDefinition();
    let definition = dictionary[word].matchOptional(
        ifNull: () => "",
        ifValue: d => d,
    );

...do this:

    let definition = getDefinition();
    dictionary[word] = definition;
I'm aware that I'm playing fast and loose with the syntax of our pseudo-language, but note that avoiding the optional is terser and simpler than using the optional and eating the null case--this is true in most cases in most strongly/statically-typed languages. Not only do you learn to lean on the type system in a strongly/statically-typed language, but if the syntax is well-designed, it makes it easier to lean on the type system than to not lean on the type system.

[1] You may say, but what if I'm loading from a file? The common pattern is to load a config from a file as a map, and then load it into a struct, setting defaults, like so:

    const defaultHeight = 72;
    const defaultWeight = 160;

    JsonObject configJson = json.loadFile("config.json");

    const config = struct {
        height: configJson["height"].matchOptional(
            ifNull: defaultHeight,
            ifValue: v => v.asInt(notInt: v => throw Exception("Invalid height \"{}\" in config.".format(v)))
        ),
        weight: configJson["weight"].matchOptional(
            ifNull: defaultWeight,
            ifValue: v => v.asInt(notInt: v => throw Exception("Invalid weight \"{}\" in config.".format(v)))
        )
    };
You eventually hit cases with user data where you can't handle it (hence the throwing exceptions) but this pattern allows you to fail early, and with descriptive error messages.
twic
> Imagine, for example, in an Optional based language, that you just map the optional to a lambda to execute on the optional, and the behaviour of the optional is to then simply silently do nothing if it's optional.none. That'd be a much harder to find bug than a nullpointer error.

It's worth noting that this is only possible if the operation you're mapping over the optional can have side-effects. Without side-effects, mapping over an optional always does nothing, in a way - all the difference is in the value returned.

Adding that constraint does make programming pretty painful, though.

JoshMcguigan
The goal of `Optional[T]` is not to "make the problem magically go away", in fact that is almost the opposite of the goal.

Optional[T] exists to make it very obvious when a value is nullable. Having non-nullable types as the default, with Optional[T], allows a developer to model a system more accurately. This is helpful both to the compiler and to anyone else who reads/maintains that code.

> Imagine, for example, in an Optional based language, that you just map the optional to a lambda to execute on the optional, and the behaviour of the optional is to then simply silently do nothing if it's optional.none. That'd be a much harder to find bug than a nullpointer error. (errors with stack traces pointing at the problem are obviously vastly superior to mysterious do-nothing behaviour with no logs or traces of any sort!).

This is just one of the things a developer could decide to do when faced with an optional which is none. It is up to the language design to make it easy to express this behavior (or any other behavior they might choose) without hiding it.

agumonkey
Not contradicting, just noting that there's a slight benefit in reifying an issue, since it opens the door to notation and operators that simplify it. The Maybe monad or option chaining do more than make the thing obvious; they make it half disappear.
strictfp
Sure it's up to the language design, but in practice a `None` gets a similar treatment as an empty collection, usually effectively short-circuiting remaining calculations. As the parent poster pointed out, this might either be the behavior you want, or actually mask the error, depending on the situation. By this logic, optionals aren't better than null refs, just different. The same argumentation holds for exceptions vs optionals.
int_19h
In practice, usually any operation on optionals with such short-circuiting behavior must be explicit. For example, for member access, instead of foo.bar, you get something like foo?.bar - and that ? right there tells you all you need to know.

Same thing with exceptions/error types. With exceptions, propagation is implicit, but with error types, you usually have to use some explicit proceed-or-propagate operator.
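Rust's `?` is one example of such an explicit proceed-or-propagate operator; a small hypothetical sketch (the config format here is invented), where each `?` either yields the value or returns None from the function immediately:

    fn port_from(config: &str) -> Option<u16> {
        let line = config.lines().find(|l| l.starts_with("port="))?;
        let value = line.strip_prefix("port=")?;
        value.trim().parse().ok()
    }

    fn main() {
        assert_eq!(port_from("host=x\nport= 8080\n"), Some(8080));
        assert_eq!(port_from("host=x\n"), None);
    }

The same operator propagates error values in functions returning Result, so the "proceed or bail out" decision stays visible at every call site.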

JoshMcguigan
In my experience, languages with strict non-null guarantees (and optional types), do the exact opposite of "mask the error". If anything, they are sometimes faulted for being too verbose.

The idea is, by explicitly marking things which can be null (wrapping them in an Option[T], for example), you can be sure that everything else is not null. This alone relieves the developer of a large cognitive load.

Further, the language can provide syntax to make handling optional types obvious without being painful. Rust match statements are one example of this.

Can you provide a specific example of how using an optional type makes a potential "missing-thing" type of bug harder to see?

mlangenberg
I was expecting someone to mention the Crystal programming language.

In Crystal, types are non-nilable and null references will be caught at compile time.

https://crystal-lang.org/2013/07/13/null-pointer-exception.h...

I certainly recognize that many bugs in Ruby programs announce themselves as `NoMethodError: undefined method '' for nil:NilClass`. So being able to catch that before releasing code is a very welcome addition, in my opinion.

teh_klev
InfoQ has some gems, but their video content presentation is terrible (tiny box, or full screen):

https://www.youtube.com/watch?v=YYkOWzrO3xg

RickJWagner
No comment on the Null References, but I will say I love the time-index provided for the video. I wish every video had these!
olliej
Null termination is still easily much worse. At least the general case of null dereferences today (less so earlier) is a page fault.
agumonkey
Should every domain have a Nil element instead?
fhars
No, obviously not. Every domain having a Nil element is exactly the problem null references have introduced (at least for the call by reference parts of the affected languages).
agumonkey
Null is a single nil for all types; I meant that having a null per domain would force people to think about what it means to have nothing in that field and handle it. Maybe I'm too naive.
erik_seaberg
This is one of Go's weirdest features. When you cast nil to an interface, you pay for extra storage so the runtime can do method dispatch on what type of object you don't have, even though every implementation is likely to panic immediately.
augusto2112
Sometimes you don't want to allow a value to be null at all, but with null references you can't represent that at the language level.
agumonkey
But for numbers, a zero is not considered null, because it was handled in the operator rules.
fhars
Numerical zero has nothing to do with the issue discussed here. What you are proposing is to add another “number” to types like int and float that results in the program crashing whenever you try to add it to another number.
reificator
> What you are proposing is to add another “number” to types like int and float that results in the program crashing whenever you try to add it to another number.

There's already division by zero and NaN to trip you up in IEEE754.

goatlover
Or there's NaN which just results in NaN and doesn't equal itself. No need to crash.
agumonkey
I think it does
strictfp
It does! It's the numerical "null object", just like the empty string and empty collection.
masklinn
> I meant having a null per domain would force people to think of what it means to have nothing in that field and handle it.

In what way would it change anything? The "billion dollars mistake" is that because "nothing" is part of every type, any value you get could really be missing, and you have to either hope for the best (and die like the rest) or program ridiculously defensively.

Having a magical sentinel per type would have the exact same issue, namely that "nothing" is part of the type itself, and so you can never be sure at compile time that you do have "something".

That's what opt-in nullability (whether through option types or language builtins or a hybrid) changes, by default if you're told you have an A it can only be a valid A, and if you're told you might have an A you must either check for it or use "missing-safe" operations.

loopz
What is required is an optional "No" element. Then you can say, I have a "No" Problem, and people will think you're joking and remain on their happy path.
microcolonel
Null references are not a mistake, they make perfect sense. Letting nullable types be dereferenced directly is the mistake.

Null references are at the core of a great number of sensible datastructures, and they're a natural fit for conventional computers.

int_19h
There are two separate concepts here that often gets conflated.

There's null reference in a sense of a special pointer value (usually all bits set to 0) that means "this doesn't point to anything". That's a useful low-level tool that allows for compact representation of many important data structure.

And then there's null reference in a sense of type systems. To be more specific, "null reference" here is really a shortening of "every reference in the type system is implicitly nullable". And that is the billion dollar mistake.

An explicitly nullable reference type that requires explicit check on dereference, or option types, that use null pointers under the hood, are obviously not the problem.

dooglius
> "null reference" here is really a shortening of "every reference in the type system is implicitly nullable"

I don't think this is quite accurate, there are definitely cases where non-null pointers are required (e.g. dereferencing). It's more correct to say that the type system does not explicitly indicate whether a pointer might be null or not.

microcolonel
Just consider all pointers null until proven otherwise, shouldn't be that hard to do something like this in static analysis. Even if a reference is non-null, you still have to wonder if it's valid.
throwaway2048
To put it a bit more compactly: why is Boolean logic "True, False, Null"?
Matthias247
Out of all possible gotchas in programming languages, I still find null pointers the easiest one to discover and fix. You directly see when and where it happens, and the fix is usually straightforward.

Compared to that invalid pointers (stale references) are a lot more painful, since programs might continue to work for a while. Managed languages do at least prevent those.

Multithreading issues are imho the biggest pain points, since they are introduced so easily and often go unnoticed for a long time. The number of languages that prevent those is unfortunately not that big (Rust, plus purely single-threaded languages like JS, plus pure functional languages).

smt88
> You directly see when and where it happens, and the fix is usually straightforward.

This is not true in most dynamic languages, especially ones where I/O is not typed. You have to be extremely diligent about verifying input. JavaScript comes to mind.

Supermancho
> You have to be extremely diligent about verifying input.

That's true of all languages. Null references are a problem of low effort development. Calling it a billion dollar mistake is sensationalist hand-wringing. It accidentally highlighted how carelessly most programs are written, implying that without it developers wouldn't be checking inputs as strictly, because they wouldn't need to. Yes it's another type, but lots of languages have a nil/null and there hasn't been a demonstrative reason to pull it.

smt88
Well to be clear, most modern languages with reasonable type systems will force you to explicitly verify the type of your input. C#, for example, forces you to cast your JSON before using it. If the cast fails because you got the class definition wrong, you get an error (like a constructor error).
shitpostbot
Almost all bugs could be called the result of "low effort development", but if there's a class of bugs that simply does not need to exist, why would you want to keep it around?

The reason to pull Null is that it's easy to forget to check, and it causes bugs. Lots of bugs. Lots of wasted developer time.

The real question is if there is any reason to keep it, and I'm pretty sure the answer is no. Expressing nullability in the type is an infinitely better solution.

zeendo
> there hasn't been a demonstrative reason to pull it

"There hasn't been a demonstrative reason" is not the reason it hasn't been pulled.

Languages are hard to change and backwards compatibility is paramount. Hell, some languages support null just for interoperability (i.e. Scala) when they would have otherwise not allowed it when they were created.

Null isn't expressive and is historical baggage. At this point "billion dollar" is probably an understatement.

I wonder how many people who have spent significant time writing both in languages that allow null and in languages that don't would prefer having null?

I, for one, wouldn't willingly go back to a language that allows null.

perl4ever
Lots of languages have several ways to indicate an invalid value.

With SQL, I generally prefer dialects where an empty string and null are considered the same, although that may be just due to what I learned first. Various Microsoft technologies seem to tend towards multiple ways of expressing null/missing/empty.

An interesting thing about null in SQL is that the rule that any operation on a null returns null, only applies sometimes in some contexts. For instance in Oracle SQL:

    select max(case when x = 'ABC' then 'ABC' end) as y from ...

...is taking the maximum of an expression which is null whenever x <> 'ABC', yet it doesn't return null as long as there are rows where x does equal 'ABC'.

(sorry for any errors)

daxfohl
Sure, but null pointers are more ubiquitous: pretty much any line of code in most languages could have multiple null pointer exceptions. And when they blow up they do so just as blow-up-y as the others, but you just feel dumber about it. I say this as someone who recently brought down the entire online presence of one of the world's largest companies / cloud providers for over an hour: their cloud portal, their search engine, their office suite, etc., by having a null pointer bug in the CDN code get exposed by a configuration file update.
dionian
It's always easy to find where they occur, but it's not easy to find why. It's better to never allow the model to become invalid, and to error immediately when it does, rather than when it causes some surprising undefined behavior later and you have to spend the effort to reason out why.
int_19h
You directly see where the null dereference happens. But that's not necessarily where the problem actually is, because a null pointer can flow through a lot of code before it actually gets dereferenced. So "program continues to work for a while" is also a thing with them.

In a language like C, a null pointer can also become an invalid non-null pointer pretty easily with pointer arithmetic.

MertsA
>a null pointer can also become an invalid non-null pointer pretty easy with pointer arithmetics.

Yeah but even then it's still easy enough to see what happened when you have a pointer to address 0x0000002F or some similar small pointer.

cylon13
Easy, just add "if (thing == null) return;" to the top of the function that crapped out on a null reference and close the ticket! /s
int_19h
Sarcasm aside, this is often how it gets fixed in poor quality codebases! And if it returns a reference itself, it ends up being:

   if (thing == null) return null; 
more often than not. Which leads to an even more entertaining debugging trip next time...
cylon13
Oh for sure. I wish I would have thought of that sarcastic quip by being creative, but in reality it's because I've seen it so many times.
Oct 08, 2019 · 2 points, 0 comments · submitted by ngaut
chmaynard
From the perspective of functional language design, I think the biggest mistake in Algol and C was actually data mutation. With scoping and shadowing (which C supports), data mutation is completely unnecessary.
Sep 16, 2019 · electrograv on Why Go and Not Rust?
That’s right, and this is confirmed by many benchmarks I’ve seen. I agree with pretty much everything in this article except the repeated claim that “Go is fast”.

Of course “fast” is relative, but I would reserve it for languages that are nearly as fast as competitors in their segment. There are too many languages very similar to Go’s ergonomics that are much faster for us to meaningfully call Go “fast”.

That said, I don’t really think that’s a problem. I think people who use Go are often just happy it’s faster than Python, and that’s okay.

My personal dislike of Go comes simply from their unapologetic[1] embrace of default nullable pointers (the "billion dollar mistake"[2]): there is very strong theoretical (and practical) ground supporting the approach of Rust/Zig/Swift/etc. (using algebraic data types instead) as objectively better (yielding inherently more reliable results with virtually no ergonomic compromise[3]).

In other words, in the 21st century, we know how to design statically typed languages that guarantee the impossibility of null dereference exceptions (not counting bugs in external libraries from other languages). And we can do this without any runtime performance or code ergonomics compromise!

Therefore there are no good excuses anymore for any statically typed language in the 21st century to not provide this extremely beneficial guarantee.
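As a rough illustration of that guarantee (a minimal Rust sketch; the function and names are invented, and the same idea exists in Zig, Swift, Kotlin, etc.):

  // Absence is part of the type, so the check cannot be forgotten.
  fn find_user(id: u32) -> Option<String> {
      if id == 42 { Some(String::from("alice")) } else { None }
  }

  fn main() {
      // The compiler refuses to hand out the String until both cases
      // are handled; there is no implicit null to overlook.
      match find_user(42) {
          Some(name) => println!("found {}", name),
          None => println!("no such user"),
      }
  }

(Option<String> here even has the same in-memory size as String, thanks to the niche optimisation discussed further down the thread, which is where the "zero cost" part comes from.)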

[1] There are no plans to fix this, ever: I’ve seen entire articles written by members of the Go team not just defending “all pointers are nillable”, but encouraging this as an idiomatic Go style of coding.

[2] https://www.infoq.com/presentations/Null-References-The-Bill...

[3] The ergonomic difficulties of Rust come from the borrow checker, not from their use of algebraic data types to replace nullable pointers.

josephg
I would add parametric enums to your excellent rant. When I first learned Go I thought its use of constants with iota was quite a clean approach for enums. But after spending some time with Scala, Rust and Swift, well, I was wrong. Being able to exhaustively pattern match is simply excellent. And like non-nullable types, this is a zero-overhead language feature that is simple, feels great to use, and reduces bugs.
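As a small sketch of the exhaustiveness point in Rust (the Op variants here are invented to resemble a text OT operation, not copied from the linked code):

  enum Op {
      Skip(usize),
      Del(usize),
      Ins(String),
  }

  fn len_delta(op: &Op) -> isize {
      // Exhaustive match: adding a new Op variant later turns every
      // match that forgot about it into a compile error.
      match op {
          Op::Skip(_) => 0,
          Op::Del(n) => -(*n as isize),
          Op::Ins(s) => s.len() as isize,
      }
  }

  fn main() {
      let ops = [Op::Skip(3), Op::Ins(String::from("hi")), Op::Del(1)];
      let total: isize = ops.iter().map(len_delta).sum();
      println!("length delta: {}", total);
  }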

It feels like a real step backwards using languages without parametric enums. My litmus test when learning a new language involves porting across some plain text operational transform code. The go code came out about 40% larger than the rust and swift implementations for this reason. It was also much uglier and harder to read. Like, those extra lines were pure overhead.

Eg this rust code is beautiful: https://github.com/josephg/textot.rs/blob/bb14b4b483e7dace67...

And that’s prettier and about as performant as this C implementation of the same function (I think I somehow lost the Go code - but it wasn’t much better than this): https://github.com/ottypes/libot/blob/902470a22d3a99d9b776ce...

The equivalent javascript code (my go-to language!) is larger than the equivalent swift / rust code and, last time I checked, about 8x slower. Most of the gap in readability is this one beautiful feature!

steveklabnik
Has there been a statically typed language with no null and no generics? I'm pretty sure that every language I've used that implements an Option-like type has it be generic. Maybe TypeScript, where you can have "number | null"?
ummonk
Go has generics (e.g. in its data structures). They just aren’t usable by the programmer. The same could have been done for optionals.
Crinus
QBasic :-P.

Though all dynamic allocation happens by resizing predefined arrays.

kristoff_it
Go is already getting away with a magic generic map type; they could have done it the same way if they wanted to.
kristoff_it
I replied to the parent comment on my experience, but I do agree that Java and C# can be very fast.

I also agree on the problem with null pointers, which is really ridiculous. Another complaint that I have about Go is how for some reason Google decided to implement many web protocols in the standard library, but never really cared to get websockets right, to the point that they just send you to a third-party library from their own documentation. That + QUIC/HTTP3 make me want to take my tinfoil hat out of the drawer.

https://godoc.org/golang.org/x/net/websocket

NULL is in much more widespread use than Python. C and C++ have implementations where NULL == NULL. https://www.infoq.com/presentations/Null-References-The-Bill...
I thought based on the title that this was about null—https://www.infoq.com/presentations/Null-References-The-Bill...
Apr 25, 2019 · coldtea on Flutter desktop shells
A null is not a string or an integer or a MyClass instance, etc. So why does it pollute variables of all those types?

https://medium.com/@hinchman_amanda/null-pointer-references-...

https://www.infoq.com/presentations/Null-References-The-Bill...

https://www.quora.com/Why-was-the-Null-Pointer-Exception-in-...

https://www.lucidchart.com/techblog/2015/08/31/the-worst-mis...

Kiro
Thanks. Stupid question but consider the following:

  entity = Entity.fetch(id)
  if (!entity) doSomething()
Entity.fetch returns null if it doesn't find anything. How would this work without null?
wtetzner
The real problem isn't null itself, it's that in most languages that have it, null inhabits all types (or at least all reference types).

Different languages solve this problem in different ways. Many languages get rid of null entirely and use option types in its place. Other languages, like Kotlin, fix it in the type system by differentiating between e.g. String and nullable String (spelled String? in Kotlin).
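For a concrete (if hypothetical) take on the fetch example above, here is a minimal Rust sketch; Entity and fetch are stand-ins for the code in the question:

  struct Entity { id: u64 }

  // The signature itself says "may be absent"; there is no null.
  fn fetch(id: u64) -> Option<Entity> {
      if id == 0 { None } else { Some(Entity { id }) }
  }

  fn main() {
      match fetch(7) {
          None => println!("do_something(): nothing found"),
          Some(entity) => println!("got entity {}", entity.id),
      }
  }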

coldtea
There are several ways. A popular one is an Optional type. Here's how Java does it: https://www.baeldung.com/java-optional
ptx
You would still have null, but only when asked for and checked explicitly.

In Kotlin, for example, you mark something with a question mark to say that it can be null, which forces you to check for null before using it:

  val entityOrPossiblyNull: Entity? = Entity.fetch(id)

  if (entityOrPossiblyNull == null) {
      doSomething()
  }
  else {
      // The compiler knows that the variable is not null in this branch,
      // so this assignment is OK.
      val entityForSure: Entity = entityOrPossiblyNull

      doSomethingWithEntity(entityForSure)
  }
NPEs are literally The Billion Dollar Mistake, and you don't think they are a big deal?

https://www.infoq.com/presentations/Null-References-The-Bill...

discreteevent
In my experience null pointers and unassigned variables were a big problem in C and C++. But in Java? They were never something that really caused a major issue. Most of them turn up in tests and are really easy to fix because you get an exception stack. Very few turn up in production, and often it is because something else failed at runtime to cause it. If the null pointer wasn't there you would have had to deal with that other failure anyway.
Many of those are ruled out as modern successors (in my mind, at least), when they continue to make “the billion dollar mistake” (to use its inventor’s own words[1]) of null references.

Rust, Zig, Kotlin, Swift, and many other modern languages can express the same concept of a null reference, but in a fundamentally superior way. In modern languages like these, the compiler will statically guarantee the impossibility of null dereference exceptions, without negatively impacting performance or code style!

But it goes beyond just static checking. It makes coding easier, too: You will never have to wonder whether a function returning a reference might return null on a common failure, vs throw an exception. You’ll never have to wonder if an object reference parameter is optional or not, because this will be explicit in the data type accepted/returned. You’ll never have to wonder if this variable of type T in fact contains a valid T value, or actually is just “null”, because the possible range of values will be encoded in the type system: If it could be null, you’ll know it and so will the compiler. Not only is this better for safety (the compiler won’t let you do the wrong thing), it’s self-documenting.

It blows my mind that any modern language design would willingly treat nullable object references as still a good idea (or perhaps it’s out of ignorance), when there are truly zero-cost solutions to this — in both runtime performance and ease of writing code, as you can see for example from Zig or Kotlin.

[1] https://www.infoq.com/presentations/Null-References-The-Bill...

gameswithgo
> the compiler will statically guarantee the impossibility of null dereference exceptions,

almost every language that gets rid of nulls with something like the Option type will let you still bypass it and get a null reference exception. Rust lets you unwrap, F# lets you bypass it. You could at least enforce a lint that doesn't allow the bypasses in projects where that is desired though.
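For the record, a tiny Rust sketch of what that bypass looks like (values invented):

  fn main() {
      let maybe: Option<i32> = None;

      // The safe route: the None case is spelled out.
      if let Some(n) = maybe {
          println!("got {}", n);
      }

      // The bypass: this compiles fine but panics at runtime on None,
      // which is morally the same failure as a null dereference.
      println!("got {}", maybe.unwrap());
  }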

imtringued
Perfect is the enemy of good. By reducing the possibility of null dereference exceptions from 100% to 10% you have reduced the cognitive burden by 90%. Removing the bypass would result in a 100% reduction in cognitive burden, only 10% more than the second best solution. However handling null cases correctly isn't free either. Especially when you know that a value cannot be "null" under certain conditions which those 10% fall under. In those cases handling the error "correctly" is actually an additional cognitive burden that can ruin the meager 10% gain you have obtained by choosing the perfect solution.
electrograv
> However handling null cases correctly isn't free either. Especially when you know that a value cannot be "null" under certain conditions which those 10% fall under.

While I agree there are rare cases where .unwrap() is the right thing to do, I actually disagree here that it’s anywhere close to 10%: If you want to write a function that accepts only non-null values in Rust, you simply write it as such! In fact, this is the default, and no cognitive burden is necessary: non-nullable T is written simply as “T”. If you have an Option<T> and want to convert it into a T in Rust, you simply use “if let” or “match” control flow statements.

I actually think using .unwrap() in Rust anywhere but in test code or top-level error handling is almost always a mistake, with perhaps 0.001% of exceptions to this rule. I write code that never uses it, except in those cases mentioned; while I’ve run into situations where I felt at first that .unwrap() was appropriate, I took a step back to think of the bigger picture and have so far always found safer solutions that yield a better overall design.

The cognitive burden from Rust comes not from this, but almost entirely from the borrow checker (a completely different topic), and in some cases, arguably inferior “ergonomics” vs how Zig or Kotlin handle optionals.

For example, in some null-safe languages, you can write:

  if (myObject) { myObject.method(); }
And the compiler will understand this is safe. Whereas, in Rust, you must write:

  if let Some(x) = myObject { x.method(); }
This is not even to mention that Rust has no built-in shorthand for Option<T> (some languages write “T?” for example), but I understand why they chose not to build this into the language; rather, Option<T> in Rust is actually a component of the standard library! In a way, that’s actually quite cool and certainly is by design; however, it doesn’t change the fact that it’s slightly more verbose.

IMO it’s not a huge deal, but certainly Rust could benefit from some syntax sugar here at least. Either way, both examples here are safe and statically checked by the compiler.

majewsky
> but certainly Rust could benefit from some syntax sugar here at least

It's a tough balance. Rust could benefit from more sugaring, but on the other hand, Rust already has quite a lot of syntax at this point.

gameswithgo
Yeah I think unwrap is best used when experimenting/prototyping, but it can be very very useful there. Imagine trying to get started using Vulkan or Opengl without it. Big mess. But in production code you might want to lint it as a strong warning or error.
electrograv
Yes, but there’s a big difference between the default member access operator crashing conditionally based on null-ness — vs — the same operator guaranteeing deterministic success (thanks to static type checks), with the option to circumvent those safe defaults if the programmer really wants to (in which case they usually must be very explicit about using this discouraged, unsafe behavior).

It may seem to be just semantics, but it’s really quite important that the default (and most concise) way in these languages to read optional values is to check if they’re null/None first in an if statement, after which you can call “object.method()” all you like. It’s important that you can’t just forget this check; it’s essential to using the content of the optional, unless you explicitly type something like “.unwrap()” — in which case there’s almost no chance the programmer won’t know and think about the possibility of a crash. Take this in contrast to the chance of a crash literally every time you type “->” or “.” in C++, for example.

otabdeveloper2
> Many of those are ruled out as modern successors (in my mind, at least), when they continue to make “the billion dollar mistake” (to use its inventor’s own words[1]) of null references.

Well, you're in luck then! You don't even need a 'modern' successor, C++ (even the ancient versions) disallow null references.

Mesopropithecus
What you mean is that C++ doesn't have a way to (easily) let you check whether a given reference is null or not. int* a = NULL; int& b = *a; compiles and runs just fine.
ethan_g
No the gp is correct, references in c++ can't be null. Your code invoked undefined behavior before you did anything with a reference, namely *a which is a null pointer dereference.
mdpopescu
The "null problem" is that a static language does a run-time check instead of a compile-time check. By the time the undefined behavior is invoked, compilation ended.
coldtea
>Your code invoked undefined behavior before you did anything with a reference

Since nobody stopped you, the problem is still there.

the_why_of_y
> namely *a which is a null pointer dereference.

Which is a textbook example of the null reference problem.

Edit: There may be some terminological confusion here: when programming language folks talk about "references", they include in that definition what C/C++ call "pointers". See for example the Wikipedia article, which gives as the C++ example not C++ references, but C++ pointers.

https://en.wikipedia.org/wiki/Reference_(computer_science)

masklinn
> compiles and runs just fine.

For fairly low values of those. Creating a null reference is UB, your program is not legal at all.

jessaustin
Sure, we're not supposed to do that. Sometimes it happens anyway, and the C++ compiler isn't much help in that case.
coldtea
If the compiler still accepts it, then that it belongs to the "UB" class of code is not much comfort.

The whole point is to NOT have it be accepted.

masklinn
> Well, you're in luck then! You don't even need a 'modern' successor, C++ (even the ancient versions) disallow null references.

That's useful, until you realise that all its smart pointers are semantically nullable (they can all be empty with the same result as a null raw pointer) and then nothing's actually fixed.

Jach
Null isn't that bad -- or rather, the concept of a missing value. Certain languages handle null better than others, but even then, it seems like the more costly mistake has been the accumulation of made-up data to satisfy non-null requirements.[0] More costly for non-programmers who have to deal with the programmers' lazy insistence that not knowing a value for some data in their system is forbidden, anyway.

In any case I think the modern fashion of trying to eliminate null from PLs won't matter much in the effort to replace C, whereas something like a mandatory GC is an instant no-go (though Java at least was very successful at sparing the world a lot of C++). OTOH a language that makes more kinds of formal verification possible (beyond just type theory proofs) might one day replace C and have null-analysis as a subfeature too.

[0] http://john.freml.in/billion-dollar-mistake

gameswithgo
It doesn't seem like you are familiar with how option types get rid of null. You don't have to make up data to satisfy things not being null. You set them to None, and the language either forces you, or at least strongly encourages you, to always check whether the option is None or Some.
Jach
I use Option in Java quite a bit because I'm real sick of NPEs and cascading null checks in all-or-nothing flows. I would have preferred Java starting with something like Kotlin's approach where T is T, not T|nil. You and the sibling might be missing the point of the post I linked, I think. It can be convenient to have formal assistance via e.g. the type checker that a function taking a non-null String returns a non-null Person with a non-null FirstName and LastName.

But in the zeal to be rid of null to make programmers' lives a bit easier, when faced with a name that doesn't compose into 2 parts, someone has to decide what to do about that and who needs to care down the line. You can make up data ("FNU" as in the blog), set a convention (empty string), throw an exception, or declare either the whole Person structure Optional or at least certain fields. If you use a dynamic late-binding language you may have other options. Whatever you do, it ought to be consistent or robustly handled where the references interact with your DB, your processing programs, and your data displays.

Finally, when these references escape your system, as lots of real world data does, they necessarily escape any static criteria you once had on them, so it's important to consider that those third party systems have to live with your choice. Null is a convenient choice, not something to be vilified so casually.
electrograv
I think the author of that blog post fundamentally misunderstands the point: The damage of nullable pointers is not that they are nullable, but that compilers allow you to write code everywhere that assumes they’re not null (in fact, this is the only possible way to code, when the language cannot express the notion of a non-nullable reference!)

For example, most older languages with “the billion dollar mistake” have no complaint whatsoever when you write “object.method();” where it’s unknown at this scope whether “object” is null or not.

The fact that such code compiles is the billion dollar mistake; not the fact that the pointer is nullable.

I don’t care if you want to write nullable references everywhere, or whatever else you prefer or your application demands. That’s fine, so long as:

1. Non-nullable reference types must exist.

2. Nullable references types must exist as statically distinct from #1.

3. The compiler must not let you write code that assumes a nullable reference is not null, unless you check via a control flow statement first.

Now to take a step back, the principle behind this certainly applies beyond just nullability (if that was the point you were trying to make): Generally, dynamic, untyped invalidation states are dangerous/bad, while statically typed invalidation states are ideal. And yes, this does include bad states internal to a non-null reference, just as much as to a null reference.

Sum types are the key to being able to statically declare what range of values a function may return (or accept), and ensure at compile time that these different cases are all accounted for. If you aren’t aware of how elegantly sum types solve this, you should look into it — and I suspect it will be quickly clear why nullable references are useless, outdated, and harmful.
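A minimal Rust sketch of that idea, with variants invented purely for illustration; the full range of outcomes lives in the type, and the compiler makes the caller account for each one:

  enum LookupResult {
      Found(String),
      NotFound,
      PermissionDenied,
  }

  fn describe(result: LookupResult) -> String {
      // Leaving out any of these arms is a compile error, not a
      // surprise at runtime.
      match result {
          LookupResult::Found(email) => email,
          LookupResult::NotFound => String::from("no such key"),
          LookupResult::PermissionDenied => String::from("not allowed"),
      }
  }

  fn main() {
      let hit = LookupResult::Found(String::from("alice@example.com"));
      println!("{}", describe(hit));
      println!("{}", describe(LookupResult::PermissionDenied));
  }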

But at the very least, we’ve solved the pain of null dereference — and virtually without compromise. So, it’s irresponsible or ignorant IMO to create a new language that doesn’t include this solution in its core.

I agree that null/nil/None is one of the worst programming constructs ever invented [0] and deserves more attention than if statements.

[0] https://www.infoq.com/presentations/Null-References-The-Bill...

The Tony Hoare quote is about null _references_, which are about aliases to computer memory. (E.g. the dreaded NPE or null pointer exception.) He's not talking about _logical_ nulls such as "tri-state booleans" or "missing data".

I'm struggling to understand what you mean here. My understanding is that there is no distinction between "logical null" and a "null reference" in terms of the problem we're discussing--as soon as you introduce nulls as a placeholder for values that are not yet initialized you have to deal with the logical implications of null being a member of those types, no? It's been a while since I watched the talk, but scanning the transcript from the talk we're discussing (https://www.infoq.com/presentations/Null-References-The-Bill...) the way Prof. Hoare talks about them seems to be the same in terms of their logical impact on the type system. I quote:

25:55 One of the things you want is to be able to know in a high level language is that when it is created, all of its data structure is initialised. In this case, a null reference can be used to indicate that the data is missing or not known at this time. In fact, it's the only thing that can be assigned if you have a pointer to a particular type.

...

27:40 This led me to suggest that the null value is a member of every type, and a null check is required on every use of that reference variable, and it may be perhaps a billion dollar mistake.

And another thing you wrote which I don't understand:

We need nulls in programming languages because they are useful to model real-world (lack of) information. All those other clunky techniques will reinvent the same kinds of "null" programming errors!

Isn't that exactly backwards? The entire point Prof. Hoare was trying to make is that using nulls to model the "real-world (lack of) information" causes us to have to contend with the logical problems inherent in making null a member of every type. Additionally, sum types seem to model this in a logically consistent way very well, as other commenters have noted.

Jan 14, 2018 · 1 points, 0 comments · submitted by B4TMAN
Dec 15, 2017 · abraae on Three-Valued Logic
Tony Hoare, who introduced null in Algol in 1965, calls it his $1b mistake https://www.infoq.com/presentations/Null-References-The-Bill....
zkomp
(As the other comments already said:) null references are not the same as null values.

A database sometimes needs to model this case: the value is unknown: maybe the paper file you digitized was corrupted or destroyed, or other valid reasons this value is not known.

Whereas null references can be avoided, like in rust etc.

lmm
You sometimes need to model absence of a reference as well - that's why rust etc. have option/maybe types. Making every reference in the language possibly null was a terrible design mistake for algol, and making every value in the language possibly null was a terrible design mistake for SQL.
zkomp
I disagree. It is not at all a design mistake in SQL. You can have or forbid null values in tables. And you need that for many things. It is properly designed.

Rust or functional languages show you do not generally need null references, you have options instead. It is completely different.

Null in C or C++ is a much more costly mistake. Missing data is just reality.

Data may be incomplete, the values unknown. SQL is designed for that. It is designed to handle reality. Whereas null references are not, they cause problems you do not need...

Data != reference

you can avoid the issue of allowing invalid references everywhere

you can not avoid incomplete data.

ghusbands
Algol's null and SQL's null are unrelated. In SQL, null lets many operations pass an unknown value through without exceptions, like the Maybe monad. In Algol, it serves as a broken reference causing exceptions on any use.
MarkusWinand
Yes, because it is a null reference.

SQL's null is a value. The difference is that processing null values does not cause exceptions in SQL.

techno_modus
> SQL's null is a value.

It is known to be one of the major controversies: originally NULL means the absence of a value, which entails that it is not a value. Hence, if there is no value, then we cannot do any operations on it. Yet, for whatever reason (avoiding exceptions etc.) expressions with NULL need to be evaluated, and hence NULL is treated as a value. So we get a problem: NULL is not a value AND NULL is a value. There are different views on this problem, and the solution implemented in SQL (three-valued logic) is probably not the best one.

MarkusWinand
This is a controversy outside of SQL.

SQL is pretty clear what null is:

> “Every [SQL] data type includes a special value, called the null value,”[0] “that is used to indicate the absence of any data value”[1]

(http://modern-sql.com/concept/null)

[0] SQL:2011-1: §4.4.2 [1] SQL:2011-1: §3.1.1.12

rixed
It doesn't cause early terminations but it certainly does cause errors.
MarkusWinand
*Edit: exceptions :)
hjjiehebebe
The problem with nulls in C++ etc. is that every reference type is forced to be nullable. There is no opting out of that semantics. You get passed an object of type O and it is really a sum type of O + Nothing. It means you need to do runtime checks everywhere and reason across call boundaries. You need to look at the code inside that method! That's so un-OO!

In DB land you can make a column not nullable. But the SQL languages like PL/SQL, T-SQL etc. have the same issue.

Sharlin
C++ references are non-nullable by design. They are sane in that regard. Pointers are nullable, of course, but at least they are syntactically conspicuous and anyway pretty rare in modern C++.
heavenlyblue
I would add: the reason it is an issue in C++ is because the algebra around pointers doesn't support NULLs explicitly:

  - the results of any operations over nulls run as if the pointer were not null
  - while anything you do in SQL with NULL would explicitly have a mapping to either a correct value or another NULL
This is all said knowing that it's quite easy to understand why that is the case.
lmm
Which is worse. Null references are at least relatively fail-fast. SQL null propagates and so you get the error a long way away from the original source, like with Javascript's "foo is not a property of undefined".
abiox
I'd guess that propagation is by design, such as for use in outer joins.
default-kramer
Exactly. I've wondered why no SQL implementation (that I know of) has optional assertions like "join exactly 1 some_table" or "select assert-not-null(some_column)". I don't see any reason why this would be a performance killer; in fact, it might even be possible to prove and then cache with the query plan.
Jan 28, 2017 · coldtea on Java Without If
>Why is Optional<> better then throwing an exception or returning a null?

https://www.infoq.com/presentations/Null-References-The-Bill...

Tony Hoare, null's creator, regrets its invention:

“I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.”

https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retra...

https://www.infoq.com/presentations/Null-References-The-Bill...

00098345
I have to disagree with Hoare sensei there. The number is nowhere near a billion. I personally know of one case that totaled to a billion.
koolkao
Would love to learn more about this single billion dollar mistake
00098345
SQL summation without coalescing null addends to zero. The sum result was null. This had been going on for years. Oops!

The lesson/mitigation was to add a NOT NULL attribute on top of a DEFAULT 0.

brobinson
Sounds like this could have also been mitigated by either the SQL server warning about SUM() operating over a nullable column or a "where foo is not null" clause. Your solution is best, though.
int_19h
Hoare was talking about null references specifically. Or, to be even more precise, making references nullable by default, and allowing all operations on them in the type system, with U.B. or runtime errors if they actually happen.

NULL in SQL is a very different and largely unrelated concept. Though probably a mistake just as bad - not the concept itself even, but the name. Why did they ever think that "NULL" was a smart name to give to an unknown value? For developers, by the time SQL was a thing, "null" was already a fairly established way to describe null pointers/references. For non-developers, whom SQL presumably targeted, "null" just means zero, which is emphatically not the same thing as "unknown". They really should have called it "UNKNOWN" - then its semantics would make sense, and people wouldn't use it to denote a value that is missing (but known).

brobinson
Including "null" is one of the most unfortunate things about Go. I'm glad to see that other modern languages (Swift, Rust, and so on) are avoiding it.
valleyer
Swift most definitely has a null: nil. The difference is that you can declare variables of non-optional object types (and that is the default), unlike Objective-C.
chris_7
nil is just shorthand for Optional.None, which is Just Another Value. (sort of, really you can make nil translate to any type, but please don't)
valleyer
Fair but I don't think the original claim was intended that pedantically! :)
masklinn
The point of the original claim was that modern language (like Swift or Rust) tend not to make null part of every (reference) type. That Swift has a shorthand for Optional.None doesn't change that, nil isn't a valid value for e.g. a String (as it would be in Java or Go), only for String? (which is a shorthand for Optional<String>)
chris_7
String isn't a reference type in Swift, but yes.
esrauch
What you are describing is no different than modern Java, where variables are marked @Nullable and it's a compiler or linter error to dereference them without being inside a null-check conditional. If you don't use this in Java it is just the same as having String? as your type everywhere.
masklinn
> What you are describing is no different than modern Java

It is in fact quite different. String/String? is not an optional annotation requiring the use of a linter and integration within an entire ecosystem which mostly doesn't use or care for it, it's a core part of the language's type system.

> If you don't use this in Java it is just the same as having String? as your type everywhere.

Except using String? everywhere is less convenient than using String everywhere, whereas not using @Nullable and a linter is much easier than doing so.

eatbitseveryday
Well, while Rust does have pointer types[1], it doesn't allow them to be used in typical code (i.e. dereferenced), except within "unsafe" blocks. A null pointer does exist[2]. I believe this is needed for such things as interoperability with C code.

[1] https://doc.rust-lang.org/std/primitive.pointer.html

[2] https://doc.rust-lang.org/std/ptr/fn.null.html
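A small sketch of that boundary, using only standard-library calls: constructing a null raw pointer is safe, but reading through it requires unsafe, and the idiomatic bridge back into safe code goes through Option.

  use std::ptr;

  fn main() {
      let p: *const i32 = ptr::null();

      // Dereferencing a raw pointer is only allowed inside `unsafe`.
      // `as_ref` converts it into an Option<&i32>, making the null
      // case explicit again for safe code.
      let r: Option<&i32> = unsafe { p.as_ref() };
      assert!(r.is_none());
  }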

brobinson
The only use case I've had for these personally is having a global config "constant" which can be reloaded behind a mutex. unsafe + Box::new() + mem::transmute() on a static mutable pointer. I believe I copied this from Servo based on a suggestion on IRC.

IIRC this was pre-1.0 Rust, so there's probably a better way to do it now.

ben_jones
I've recently played with using nil pointer references as a form of default overloading. The process in the function block was something like:

1) assign default value to variable

2) check if pointer parameter is nil

3) if pointer is not nil assign pointer value to variable

4) do work with variable

Am I overlooking a simpler way to do this? I feel like lib.LibraryFunction(nil, nil) is nicer than lib.LibraryFunction(lib.LibraryDefaultOne, lib.LibraryDefaultTwo), though admittedly the explicitness of the second option is appealing.
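For comparison, a hedged sketch of the same defaulting pattern in an option-typed language (Rust here, with invented parameter names and defaults): the "no value passed" case is explicit in the signature rather than being a nil pointer.

  fn library_function(timeout_ms: Option<u64>, retries: Option<u32>) {
      // unwrap_or substitutes the default when the caller passed None.
      let timeout_ms = timeout_ms.unwrap_or(5_000);
      let retries = retries.unwrap_or(3);
      println!("timeout={}ms retries={}", timeout_ms, retries);
  }

  fn main() {
      library_function(None, None);      // use both defaults
      library_function(Some(100), None); // override only the timeout
  }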

brobinson
Could you just create well-named functions which call LibraryFunction() internally rather than relying on the caller to specify 1 or more default arguments?
ben_jones
Yeah. It's funny I'm currently working on two Go libraries one uses the convention you listed [1] and the other [2] uses nil pointers.

[1]: https://github.com/b3ntly/go-sms/blob/master/client.go

[2]: https://github.com/b3ntly/mLock/blob/master/lock.go

Disclaimer: Both libraries are in the pre-alpha stages and are extremely light on tests and/or broken.

ChoHag
Hiding it in the implementation detail won't get rid of it.

While you can do pointer arithmetic there will be null pointers, and while memory is addressable there will be pointer arithmetic.

And while I'm a programmer, I want access to all the functionality the CPU offers.

twanvl
> While you can do pointer arithmetic there will be null pointers

There is no reason why the concept of a null pointer has to exist. If there were no special null pointers it would be perfectly okay for the operating system to actually allocate memory at 0x00000000. With unrestricted pointer arithmetic you can of course make invalid pointers, but a reasonable restriction is to only allow `pointer+integer -> pointer` and `pointer-pointer -> integer`. You can't make a null pointer with just those.

mkawia
It was meant to be there, it's a feature. Going for option/maybe types would have been too much, according to its designers.
brobinson
I hear the "too complicated" argument a lot from defenders of null, but it doesn't quite make sense to me.

Good code is generally agreed upon to always check whether function inputs or the results of function calls are null (if they are nullable). Why not make it a compile-time error if you don't check rather than hoping that the fallible programmer is vigilant about checking and delaying potential crashes until runtime?

Go is extremely pedantic about a number of things like unused library imports, trailing commas, etc. which have absolutely no bearing on the actual compiled code, but it intentionally leaves things like this up to programmers who have shown that they can't be trusted to deal with it properly.

Having to manually deal with null is much more complicated than having an Option/Optional type in my opinion. We've also seen that it's far less safe.

majewsky
> Go is extremely pedantic about a number of things like unused library imports, trailing commas, etc. which have absolutely no bearing on the actual compiled code

Trailing commas, agreed. But unused library imports have the (probably unintended) side effect that their `func init()` will execute. Which is also why there is an idiomatic way to import a module without giving it an identifier, just to have this side effect.

brobinson
Good point. And there can actually be more than one init() function per package!
majewsky
Yes, but I guess that they're just concatenated at compile-time.
biztos
I too find it annoying in Go, though I'm not sure what the default value of a reference in a struct would be otherwise.

However, I do see the value of NULL in a database context even though it makes database interfaces harder -- especially in Go, where the standard marshaling paradigm means anything NULLable has to be a reference and thus have a nil-check every time it's used.

The conceptual match is so awkward that when I write anything database-related for Go, if I have the option then at the same time I make everything in the database NOT NULL; even though that screws with the database.

Ah, NULL. When I think about the pain it causes, balanced against its utility, I sometimes wish I'd never heard of it.

And I'm sure learned people said the same thing about Zero, once upon a time.

simcop2387
> I too find it annoying in Go, though I'm not sure what the default value of a reference in a struct would be otherwise.

The way it works in Rust is that you can't have a reference without the thing you're referring to. There isn't a default value and because of that it's an error to try to have a reference without something to refer to.

The way Rust works with this is to have a type called Option (I think this is a monad in Haskell?) that lets you say this can be None or Some, so you have to explicitly handle the None case whenever you use it (either by panicking, matching, or some other method).

gbacon
You’re thinking of Maybe[1]:

  data Maybe a = Just a | Nothing
      deriving (Eq, Ord)
Yes, it is a monad[2].

  return :: a -> Maybe a
  return x  = Just x

  (>>=)  :: Maybe a -> (a -> Maybe b) -> Maybe b
  (>>=) m g = case m of
                 Nothing -> Nothing
                 Just x  -> g x
[1]: https://wiki.haskell.org/Maybe

[2]: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/M...

IanCal
For anyone who has the same reaction as I did when first hearing of this, "But isn't that just like a nullable value?" (don't worry, this post will not contain any monad analogies)

Yes. Ish. In languages that have a null/nil/None, almost anything you have passed to any function could be null. Every function you write could have a null passed into it, and you either have to do checks on everything or trust that other people will never pass those values in.

That's pretty OK until something unexpectedly receives one and hands it off to another function that doesn't deal with it gracefully.

In Haskell (and I assume others) this can only happen to Maybe types, they're the only ones that can have the value of Nothing. So the compiler knows which things can and cannot be Nothing, and therefore can throw an error on compilation if you are trying to pass a Maybe String (something that can be either a "Just String" that you can get a String from or a Nothing) into a function which only understands how to process a String.

This feels like it might be restrictive, but there are generic functions you can use to take functions which only understand how to process a String and turn it into one that handles a Maybe.

It's quite a nice system, even though I get flashbacks to java complaining I haven't dealt with all the exceptions.

Haskell doesn't completely stop you from shooting yourself in the foot though, you can still ask for an item from an empty list and break things. There are interesting approaches to problems like that however: http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/

Finally, it's worth pointing out that Maybe and Nothing and Just and all that aren't built into the language, they're defined using the code that gbacon wrote. So in a way, Haskell doesn't have a Maybe type, people have written types that work really well for this kind of problem and everyone uses it because it's so useful.

[disclaimer: I've probably written 'function', 'type', 'value' and other terms with quite specific meanings in a very general way. Apologies if this hurts the understanding, and I would appreciate corrections if that's the case, but just assume I'm not being too precise if things don't make sense]

simcop2387
> Finally, it's worth pointing out that Maybe and Nothing and Just and all that aren't built into the language, they're defined using the code that gbacon wrote. So in a way, Haskell doesn't have a Maybe type, people have written types that work really well for this kind of problem and everyone uses it because it's so useful.

Yep, and I believe it's the same with Rust: it's been put into the standard library, but it's not some special thing that only the compiler can make.

And like you said, the whole idea is that the normal case is that you can't have null values. If you need them for something you declare that need explicitly and have to handle it explicitly or it's a compile error. That way it can be statically checked that you've handled things.

merijnv
Option (and/or option) is the usual name of Maybe in ML style languages like SML, ocaml, etc.
aweinstock
It's a monad in Rust too (with Some being return, and and_then[1] being >>=). Rust's generics just aren't yet[2] flexible enough to abstract over monads within the language.

[1] https://doc.rust-lang.org/std/option/enum.Option.html#method... [2] https://github.com/rust-lang/rfcs/issues/324
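A tiny sketch of that chaining (the half function is invented):

  fn half(n: i32) -> Option<i32> {
      if n % 2 == 0 { Some(n / 2) } else { None }
  }

  fn main() {
      // and_then plays the role of >>= : each step runs only if the
      // previous one produced Some, otherwise None flows through.
      assert_eq!(Some(8).and_then(half).and_then(half), Some(2));
      assert_eq!(Some(6).and_then(half).and_then(half), None);
  }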

TylerE
What's wrong with using an Option/Maybe type? Can't Go do that?
brobinson
Go effectively has Option types for database query results: https://golang.org/pkg/database/sql/#NullBool
unscaled
This is not a generic option type, but rather a tri-state bool, or an Option<bool>. Go has no user-defined generics, so you can't define a generic option type yourself. It does have built-in "magical" generics, namely arrays, slices, maps and channels, but no option/maybe. Language-level option types are not unheard of (C#, Swift and Kotlin all have a notion of this sort, although they all support proper user-defined generics as well).
masklinn
> Language-level option types are not unheard of (C#, Swift and Kotlin all have a notion of this sort)

Swift's Optional is a library type: https://github.com/apple/swift/blob/master/stdlib/public/cor...

Though the compiler is aware of it and it does have language-level support (nil, !, ?, if let)

cyphar
You could (just use an interface). However, it would be a pain to use because Go doesn't have generics (if a type X implements the Maybe interface, it is not true that []X can be cast to []Maybe). So you would have to always make temporary slices to copy all of your Xs into.
unscaled
Go's wholesale embrace of null is kinda jarring in this day and age, since it doesn't even have safe null dereferencing operators like Groovy, C#, Kotlin et al. It's like Java all over again.

Considering Rob Pike is a huge fan of Tony Hoare, and the inevitable mountain of pain caused by null references, that's kinda surprising.

But I guess "Worse is Better" in the sense of "simplicity of implementation is valued over all else" is still the guiding principle of Go. As Tony Hoare himself said: "But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement". It seems like we're doomed to repeating this mistake again and again.

May 29, 2016 · 48 points, 79 comments · submitted by quickfox
meddlepal
The problem isn't null itself, but that many languages use nullable types by default.

Null is a damn useful construct, and while there may be better solutions than using a null reference, they're often not as clear as using null, or not worth the time investment to develop properly.

So yeah, I generally try to avoid nulls and completely understand how they've caused a lot of problems, but I'm not willing to say we shouldn't have them or use them when appropriate.

bogomipz
On a related note and at the risk of sounding obtuse here - if you didn't have "null" wouldn't you just have to replace it with something else?
kazagistar
You don't just not use nulls, you replace them with something like Optionals. With a bit of language support, they can compile down to nulls but provide a layer of type safety and convenience functions.
legulere
This is e.g. what rust does. Another advantage of this is that you can also use them with types that aren't pointers (e.g. chars, integers, enums).
yxlx
I would like to read about this. Got any links?
legulere
Option<T> is just a normal enum in Rust. Enums can also have parameters in Rust; those are usually called tagged unions in non-Rust-speak. It's either Some(x) or None. It has a pretty easy definition: https://doc.rust-lang.org/std/option/enum.Option.html

Storing None for an Option<&T> as a null, and Some(ref) just as the reference, is a special optimisation that the Rust compiler does. Usually you have a selector which tells you which variant of the enum it is.
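That optimisation is easy to observe directly; a small sketch:

  use std::mem::size_of;

  fn main() {
      // A reference can never be null, so the all-zero bit pattern is
      // free to represent None: Option<&T> costs nothing over &T.
      assert_eq!(size_of::<&u32>(), size_of::<Option<&u32>>());

      // A plain integer has no spare bit pattern, so here the Option
      // needs a separate discriminant.
      assert!(size_of::<Option<u32>>() > size_of::<u32>());
  }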

0xfaded

  Tree *a = new Leaf();
  Tree *b = new Leaf();
  Tree *c = new Node(a, b);
  a->parent = b->parent = c;
null has other uses, e.g. as above for cyclically dependent initialisation.
kazagistar
First, to some extent, your code is more a demonstration of the weakness of imperative vs declarative code: Haskell, for example, lets you declare recursive structures directly, since it doesn't enforce an order of execution where none is needed.

    data Tree = Leaf Tree | Node Tree Tree
    a = Leaf c
    b = Leaf c
    c = Node a b
But more importantly, there is no reason you can't do this with Optionals.

    Tree *a = new Leaf();
    Tree *b = new Leaf();
    Tree *c = new Node(a, b);
    a->parent = b->parent = new Some(c);
kmiroslav
It's perfectly fine to use nulls in languages that encode them in their type system, e.g. Kotlin.
pjmlp
Eiffel was one of the very first ones to have that feature actually, around 2006 when the language was revised for the ECMA standard.

https://www.eiffel.org/doc/eiffelstudio/Differences%20betwee...

junke
Exactly. The problem is not that there is a null, it is that it is a valid value for all types (e.g. you can pass null to a function that expects a String).

Allowing nulls explicitly, like with a "String?" type, is the way to go in languages where Option is not an option.

kqr
Isn't it, in practice, basically the same thing though? The theoretical difference is this:

1. Option is a regular data type/object that represents "a collection of at most one value". It's similar to a list, set, heap, tree or whatever other collection types you have, except it can not contain more than one value.

2. With explicit nulls I guess any data type (e.g. Integer) will automatically get a clone of itself (a different type named Integer?) where null is also a valid member.

From a purely theoretical standpoint, I like #1 better.

SatvikBeri
I think Option[String] and "String?" are semantically both a union type–they both effectively add one possible value to the underlying type, along with a constructor for naturally extending functions on that type–Kotlin's "?" operator is equivalent to Scala's map or Haskell's fmap, etc.
airless_bar

  Option[Option[String]] != String??
junke
Option[Option[String]]? Is this a Church Numeral?
SatvikBeri
My impression, though I'm not sure, is that Kotlin simply won't allow you to do a double "??". Scala etc. will technically allow Option[Option[Something]], but in practice you would almost never want to use it, and can easily avoid it with flatMap.
airless_bar
The whole point of this was to show that Scala's types preserve the structure of the computation.

It might not be very interesting in the Option[Option[String]] case but imagine Try[Either[String, Int]] or List[Future[Double]].

It's a very important distinction.

Collapsing cases is one of the primary reasons why exceptions sometimes get a bad rap, and Kotlin (and Ceylon) do the same with ? (and |, &) at the value level.

kmiroslav
The downside of `Option` is that it's a wrapper, and as such, you need to `flatMap` (or similar) whenever you want to access the wrapped value.

By encoding `null` in its type system, Kotlin lets you manipulate these values directly which leads to code that is much less noisy and just as safe.

airless_bar
Not really.

The main strength of the first approach is that Option is only one type out of many error-handling structures.

Not every error is handled appropriately by Option/?.

If you have a language like Kotlin where they hard-coded one way of handling errors, it feels very unidiomatic to pick a better fitting error handling type, while in languages where errors are handled by library code, it's a very natural approach.

kmiroslav
> Not every error is handled appropriately by Option/?.

Which is expected since these two constructs are not aimed at handling errors: they manage missing values.

> If you have a language like Kotlin where they hard-coded one way of handling errors

No, no. `?` is not for handling errors.

Kotlin is as agnostic as Scala for managing errors: you are free to use exceptions, dumb return values or smarter ones (`Either`, `\/`, `Try`, ...).

airless_bar
Yeah, it's just that, if you look at every language ever designed, when the language ships with a built-in construct, developers will use and abuse it on every occasion, and every other approach lingers in obscurity.

> Which is expected since these two constructs are not aimed at handling errors: they manage missing values.

Which is a very small part of handling errors in general. As Kotlin offers special syntax for only this case, developers tend to shoehorn many errors into the "missing-value" design to get the "nice" syntax even if a different approach would have been more appropriate.

> Kotlin is as agnostic as Scala for managing errors: you are free to use exceptions, dumb return values or smarter ones (`Either`, `\/`, `Try`, ...).

That's not true in practice:

Just have a look at funktionale: despite providing almost the same as Scala's error handling types (partially due to the blatant copyright violations), almost nobody uses it. This is a direct result of having a "first-class" construct in the language: it turns library-based designs into second-class citizens.

That's the thing Scala got right, and many of the copy-cat languages got wrong.

kmiroslav
> Which is a very small part of handling errors in general

Missing values are not errors.

If you look up a key on a map and that key is not present, it's not an error.

> partially due to the blatant copyright violations

Uh copyright what? On an API?!?

hota_mazi
He's probably an Oracle employee.
airless_bar
> Missing values are not errors.

Call it whatever you want. ? only covers a small subset of interesting "conditions" while tremendously hurting "conditions" which could be handled in a better way.

> Uh copyright what? On an API?!?

Implementation. The copying of slightly buggy exception strings makes it even more obvious that files were copied verbatim, with just enough syntax changes to turn Scala code into Kotlin code, while replacing the original license and authors with different ones.

PS: Feel free to comment on the actual points I made.

kmiroslav
> PS: Feel free to comment on the actual points I made.

Sure.

I think the idea that APIs (or implementations, as you said) can be copyrighted is completely insane and I can't believe any software engineer would be okay with it. Which makes me think you're not a software engineer, and that's okay, but please read up on the issues; this is super important for our profession.

I can't believe the US made that a law, and it makes me sure that I will never want to move there.

airless_bar
> I think the idea that APIs (or implementations, as you said) can be copyrighted is completely insane and I can't believe any software engineer would be okay with it. Which makes me think you're not a software engineer, and that's okay, but please read up on the issues; this is super important for our profession.

I think you are super confused here. This is not about APIs. Copyright is what allows software developers to enforce a license of their choice. Without copyright, the license is just a text file without meaning. I suggest you read up on the FSF's position on this if you want to have an example.

> Sure.

(Still waiting for you to comment on the points I have made.)

greydius
In practice, it's not the same thing.

Here's a Java example where I want the type system to enforce that a method in `ClassB` can only be called from `ClassA`. However, the fact that `null` circumvents the type system makes this pattern just wishful thinking.

    class ClassA {
        private static final Witness witness = new Witness();

        final static class Witness {
            private Witness() {}
        }

        void callClassBMethod(ClassB classB) {
            classB.onlyClassACanCallThisMethod(witness);
        }
    }

    class ClassB {
        void onlyClassACanCallThisMethod(ClassA.Witness witness) {
            // ...
        }
    }
randiantech
Just to add my two cents to the discussion: TypeScript is implementing non-nullable types [1]. I think it'd be convenient for any language used for large projects to support this feature.

[1] https://github.com/Microsoft/TypeScript/pull/7140

kaeluka
Some functional languages use non-nullable types by default: Haskell, ML, OCaml, for example.

Java has the checker framework that supports `@Nullable` annotations and will allow only those vars/fields to contain `null`.

skoczymroczny
Why are option types that force you to check the value for null good, but checked exceptions are frowned upon?
m12k
A couple of reasons off the top of my head. It's usually easier to read and reason about code that has fewer possible control flows. Exceptions are an alternative control flow outside of the normal one, and you have to deal with them at any level of a call stack that can trigger them somewhere further down, maybe wrapping things in try/catch/finally to make sure that this alternative control flow does not break the normal cleanup (resource release, etc.) that your code does.

In the case of checked exceptions, you at the very least have to add 'throws' clauses, which unfortunately in some cases leads to a leaking of implementation detail. E.g. it's not uncommon to have low-level code that throws exceptions (say, IOException if the hard drive blows up) and high-level code that is where you actually deal with it (show the user a popup to tell them that their hard drive blew up). A checked exception forces any intermediate layer of code to also deal with the fact of this exception, even if all it does is add a throws statement; refactoring the low level in a way that adds or removes a checked exception now involves trudging through every layer of the call chain to deal with this, even if there's only really one layer at the top that actually cares. Making the exception unchecked means this isn't forced, but the alternative control flow might still mess things up.

An algebraic data type like a Maybe or Option (or other 'nillable'), unlike an exception, follows the normal control flow and is explicit about the fact that the value might be missing (similar to a checked exception), but it's up to each layer of intermediate code to decide whether that should be considered exceptional or not (like an unchecked exception); if all they do is pass the value on, then they can be written exactly the same.

TL;DR: Unlike a checked exception, an Option type doesn't break the normal control flow (easier to reason about, not as prone to not cleaning things up) and allows each layer of code in a call chain to do as much or as little error-handling as it deems appropriate.

int_19h
Because the type systems of most languages that have checked exceptions cannot properly accommodate them when higher-order functions are utilized (i.e. "I throw everything that f throws except E, and also everything that g throws except T").
danielbarla
The difference is that currently, there's no way to express "this (reference type) variable really cannot be null", which makes it painful. It's the equivalent of having "throws Exception" on every method, declared for you implicitly.
Sharlin
Currently in some (well, unfortunately many) languages. C++ references have always been non-nullable, and in Scala the only way you should get nulls is from Java libraries.
adrianratnapala
C++ references can easily be null (T& foo = *ptr_i_thought_was_good;).

But they provide a useful social convention: if it is a pointer then it is your job to check for NULL. If it is a reference, then it is the job of the other guy to ensure it is not null. And the rules of the language mean that null references (as opposed to pointers) are rare.

shultays
Not without UB
ArkyBeagle
If you never dereference a null pointer, then there will be no UB of that type.
shultays
I guess it counts as dereferencing. Here is what the standard says:

"A reference shall be initialized to refer to a valid object or function. Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior"

adrianratnapala
So in other words my comment should have said that: in C++ it is easy to create something that is either a null-reference or some other, much more unpredictable, undefined thing.

Well then! That just goes to show how much safer C++ references are than I had previously supposed.

asQuirreL
Parent means something closer to pointers (in C++) when they say "reference type". In other words, if a type contains null, then we must always check locally that values of that type are not null before it is safe to use them (or manually maintain contracts).

With option types, we check whether it is empty once, and if it is not, we unwrap it and use it as if it were never nullable. Any function that uses the unwrapped value downstream is oblivious to the fact.

dlubarov
> It's the equivalent of having "throws Exception" on every method, declared for you implicitly.

You imply that this behavior is unreasonable, but that's the approach Scala takes (all exceptions are unchecked), and a lot of users seem to like it. So I think the parent's question still stands.

danielbarla
That's an interesting point, and a fair question.

I didn't actually want to imply that lacking checked exceptions is unreasonable. I was merely trying to paraphrase the null problem in terms of checked exceptions, which the person I was replying to seems to favour. It was just an attempt to create an "aha" moment, showing what kind of pain is created by allowing null references.

A proper answer would probably have a lot more substance to it, but in short I get the feeling that null-free programming is a lot easier to achieve in general than exception-free programming. In fact, in most cases, not allowing nulls seems to be the implied default, yet it's hard (or impossible) to declare this explicitly (in most languages, like Java, etc). It just happens to be an extremely common problem that can be solved in a nice way, unlike exception checking in general.

TL;DR: The added safety that is given by allowing the type system to reason about nulls - when looked at in the light of the amount of boilerplate code that it creates - compares favourably to checked exceptions.

airless_bar
They preserve the structure of the computation.
kqr
The arguments I've heard against checked exceptions are that they're hard to compose.

If you write the `send_mail` procedure you might only want it to throw `MailException`s of various kinds. But if you use a TCP procedure inside, you'll have to declare `send_mail` to also throw `ConnectionError`s, thus revealing its implementation. To correctly hide/abstract over the implementation, you have to internally catch any `ConnectionError`s thrown by the TCP procedure, and re-throw them as `MailException`s. That's a lot of manual work that shouldn't be needed.

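For illustration, a hedged Java sketch of the wrapping boilerplate described above (MailException and the TCP layer are made-up names):

    import java.io.IOException;

    class MailException extends Exception {
        MailException(String message, Throwable cause) { super(message, cause); }
    }

    class Mailer {
        // Low-level transport detail we don't want leaking into the signature.
        private void tcpSend(byte[] payload) throws IOException {
            throw new IOException("connection reset");
        }

        // The declared contract only mentions MailException, so every transport
        // failure has to be caught and re-thrown by hand.
        public void sendMail(String body) throws MailException {
            try {
                tcpSend(body.getBytes());
            } catch (IOException e) {
                throw new MailException("could not send mail", e);
            }
        }
    }
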
Another similar problem is what someone else mentioned: if you don't actually know which specific exceptions your method can throw until runtime, what do you declare?

----

Note that this is also the case for some "null alternatives". Sure, the `Option`/`Maybe` type is easy to compose, but as soon as you start inserting error information in there (with an `Either` type) you run into some of the same problems. This is acknowledged by language communities where that is practised, and some of them prefer unchecked exceptions to `Either`-style types for that reason.

aninhumer
In your example, unless your send_mail procedure is generic over transport in some way, surely its use of TCP is an inherent part of its behaviour, not an implementation detail?

You're going to have to handle them as TCP errors anyway, even if they're lifted to a different type, so why not just throw them as they are?

ArkyBeagle
Unless there's a full discipline built into the system for addressing TCP errors that your "mail" thingy can rely on, you'll need to address it.

And how can you not know the specific exceptions you need to handle? Don't you need to test all of those?

I am sure you didn't mean it to, but this reminds me of meetings where people say "oh, we don't have to worry about that. TCP is reliable."

Twitch :)

SuddsMcDuff
I posted a very closely related article about this last year (http://mattdavey.logdown.com/posts/259807-avoiding-the-billi...) - it's C# focused but the idea should carry over to similar languages. It summarizes 5 strategies for dealing with nulls, rather than just jumping straight into monads.
ppcsf
I think the section dealing with the Optional monad sort of glosses over their composability, which is one of the primary reasons we use monads. Multiple functions that all return an optional can be composed; for instance, if we had

  Optional<WidgetFriend> GetWidgetFriend(Widget widget)
and WidgetFriend had an Optional<FriendName> property, we could simply write [1]

  from widget in _widgetRepository.FindById(widgetId)
  from friend in _widgetRepository.GetWidgetFriend(widget)
  from name in friend.Name
  select name;
and get an Optional<FriendName>, without having to repeat all the tedious pattern matching of Some or None for every call. If any of those calls returns a None, the entire evaluation short-circuits and we get a None result at the end.

Of course, we can still write

  Optional<Widget> widget = null;
which is the problem with not fixing this issue on a language level.

[1] You'd have to implement Select and SelectMany, instead of Map and Bind, to get query syntax like this.

achr2
I think you'd like C#'s LINQ syntax and null-coalescing operator. While obviously they don't get away from null, they go a long way toward making this type of syntax and null safety much easier to implement.
igravious
So that's the famous Tony Hoare. Nice trip down memory lane there.

Who was the guy at the end from the Erlang community? (The one who drew a chart with the axes useful/useless versus unsafe/safe that he claimed he got from Simon Peyton-Jones?)

bogomipz
I have a question about null being able to subvert types, using Java as an example: if I write foo.toUpperCase() and foo is a String, the type check succeeds but it will produce an NPE at runtime if foo is null. Why doesn't javac catch that foo is not set, i.e. is null? Why is null above type checks?
zaphar
Because whether foo's value is null is, in most cases, only detectable at runtime, precisely because Java doesn't have the capability to express disjoint type sets in its type system the way that, say, Haskell or F# do. This inability means that the programmer can't tell the compiler which code paths produce null vs which code paths produce a string. As a result, the compiler has to assume that all code paths could produce both.

There are some trivial cases where the compiler could figure it out for you but most of those cases are not very useful in practice.
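A tiny Java illustration of that point (lookup is a made-up method): javac accepts the dereference because the String type says nothing about null, whereas an Optional<String> return type forces the caller to acknowledge the empty case.

    import java.util.Map;
    import java.util.Optional;

    class NullVsOptional {
        static final Map<String, String> NAMES = Map.of("1", "Ada");

        static String lookup(String id) {              // may return null; the type doesn't say so
            return NAMES.get(id);
        }

        static Optional<String> lookupOpt(String id) { // absence is visible in the type
            return Optional.ofNullable(NAMES.get(id));
        }

        public static void main(String[] args) {
            // Compiles fine, but throws NullPointerException at runtime for an unknown id:
            // System.out.println(lookup("2").toUpperCase());

            // Here the caller has to decide what to do when the value is missing.
            System.out.println(lookupOpt("2").map(String::toUpperCase).orElse("<unknown>"));
        }
    }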

bogomipz
Interesting, is there a formal name for this kind of type system that has the ability to look at the code paths and express disjoint type sets?

Is this a trend in newer languages in general or an FP specific thing?

airless_bar
> formal name

non-shitty

> a trend in newer languages

yes, only discovered a few decades ago

:-D

bogomipz
I understand making a snarky comment in addition to some meaningful commentary but really two useless comments? Why even take the time to do so?
airless_bar
Sorry, was meant to be a light-hearted response.
zaphar
Typically it's referred to as an Algebraic Data Type https://en.wikipedia.org/wiki/Algebraic_data_type. Any Type System that can express this type will be able to do the above.

It's not necessarily new, since the ML family of languages has had it for a while now. It's also not limited to FP languages, but it is best known from the ML family. Rust has it, though, as does Swift, I think.
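If it helps to see the idea outside the ML family, here's a hedged sketch of an Option-like sum type using Java 17+ sealed interfaces and records (the names Opt/Some/None are made up):

    sealed interface Opt<T> permits Some, None {}
    record Some<T>(T value) implements Opt<T> {}
    record None<T>() implements Opt<T> {}

    class AdtSketch {
        static Opt<Integer> parse(String s) {
            try {
                return new Some<>(Integer.parseInt(s));
            } catch (NumberFormatException e) {
                return new None<>();
            }
        }

        public static void main(String[] args) {
            // Opt can only ever be Some or None, so the else branch is the None case.
            if (parse("42") instanceof Some<Integer> some) {
                System.out.println(some.value());
            } else {
                System.out.println("not a number");
            }
        }
    }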

bogomipz
I see. Thank you for the follow up. This is really helpful. Haskell implements this data type which is why it has come up in this discussion. I need to spend some time with something from the ML family.
chvid
Nulls are not a mistake. They are a trade-off as so many other things in language design.
virtualized
Benefits of nullable by default: -

Drawbacks: numerous

How is this a trade-off?

chvid
Null as a default is useful / a big simplification when it comes to reflection/meta-programming.

Consider this piece of pseudo-Java/Spring code and think how you would do it if the platform you were using forced you to either declare a and b as optional (which they are not) or assign them some value between the time the IoC container instantiates the class X and the time it wires in the values for a and b.

   public class X {
       @Autowired
       ServiceA a;

       @Autowired
       ServiceB b;

       @PostConstruct
       void init() {
           // a and b are instantiated by the IoC container and
           // are fully valid for the rest of the execution
       }
   }
ppcsf
Constructor injection?
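Something like this, in the same pseudo-Java/Spring style as above (with recent Spring versions a single constructor shouldn't even need @Autowired, if I remember correctly) - the fields can then be final and are never null:

    public class X {
        private final ServiceA a;
        private final ServiceB b;

        // The IoC container calls this constructor, so a and b are
        // non-null for the entire lifetime of the object.
        public X(ServiceA a, ServiceB b) {
            this.a = a;
            this.b = b;
        }
    }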
chvid
Yes. But then you have to define an explicit constructor. Also, X may be autowired into ServiceA or B, making it impossible for the IoC container to create the intended object graph without splitting construction and injection into two passes.
yazaddaruvala
1. Lombok @RequiredArgsConstructor?

2. Do you not consider cyclic dependencies a code smell? At least I have always avoided them.

chvid
I personally consider the example I gave as being a nicer way of doing dependency injection than the more verbose constructor based injection.

But in general yes; cyclic dependencies are a bad idea.

josephcooney
If the guy who invented them calls them a mistake then maybe they are, or at least were to him.
chvid
I was talking about them in a general context; for a particular language / a particular setting of course they may be a mistake.
adwn
> Nulls are not a mistake. They are a trade-off as so many other things in language design.

A bad trade-off – i.e., one where the downsides outweigh the upsides – is a mistake.

Nulls are not a necessity: Several languages demonstrate that there are better alternatives.

elchief
You're gonna need nulls until databases stop supporting nulls. And everyone would need to use 6th normal form for that to happen, and nobody wants that
bubaflub
Could you explain why you believe the presence of a NULL in the database necessitates a NULL pointer somewhere in the code?
andrewguenther
I believe the author's intent is if you read a null value out of a database, how would you represent it in code?
bubaflub
I think it depends on where the NULL is and what it represents.

If the NULL value is in a field that is used for JOINing, I don't think you would "represent" the NULL value so much as you would simply have a lack of data. For example, if you had some set of results from a query that contained a JOIN on a field and some values were NULL, those records with the NULL value would not be in the result set.

If we do receive results with possibly NULL values, I believe they could either be represented with an appropriate zero value -- e.g. "" or 0 -- or with an optional type like another commenter suggested.
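As a concrete sketch with plain JDBC (table and column names made up): the driver hands back a Java null for a SQL NULL, and it can be wrapped into an Optional right at the boundary so the rest of the code never sees null.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Optional;

    class NullableColumn {
        // getString() returns null for SQL NULL; wrap it immediately.
        static Optional<String> middleName(Connection conn, int userId) throws SQLException {
            try (PreparedStatement stmt = conn.prepareStatement(
                     "SELECT middle_name FROM users WHERE id = ?")) {
                stmt.setInt(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next()
                        ? Optional.ofNullable(rs.getString("middle_name"))
                        : Optional.empty();
                }
            }
        }
    }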

bbcbasic
.NET uses DBNull.Value, which is not null.

Also 6th normal form?? What is that?

lgas
You store each letter in the names of the columns of your tables in a different data center.
icebraining
https://en.wikipedia.org/wiki/Sixth_normal_form

Not something I'd like to use.

achr2
I work with an engineering application that uses a 5th/6th normal form database that is implemented in standard SQL server/Oracle DBs. It is awful and painful. A simple query with maybe five properties and 2 relationships takes upwards of 20 joins. Yet the database only has 4 tables...
IncRnd
Null references to memory are what is being discussed, which is completely different from variant/option types. Changing databases has nothing to do with NULL references to memory.
danielnixon
Not true. Nullable values in a database can be dealt with (and I would argue _should_ be dealt with) using a maybe/option[1] type. This is exactly what Slick[2] does, for example.

[1] https://en.wikipedia.org/wiki/Option_type [2] http://slick.lightbend.com/

Perceptes
One of my absolute favorite things about Rust is that nulls can't be used to circumvent the type system. They must be explicitly accounted for via an option type in all cases. It's very hard to go back to a language that doesn't do this after getting used to it. I get so frustrated by constant null dereference errors when that entire category of error can be avoided.
ArkyBeagle
I'd rather have explicit separation of concerns for UB-causing behavior than have a generic monad approach.

Perhaps the examples I've seen in Rust just don't do that, because part of the appeal of Rust is that you sort of don't have to.

That makes the cognitive load acceptable and documents all the UB prevention in source code. It's a habit arrived at from using formal instantiation protocols that were driven externally to the device being made - starting with RFC1695.

May 22, 2016 · Decade on Blue. No, Yellow
The inventor of the null pointer considers them to be a mistake:

https://www.infoq.com/presentations/Null-References-The-Bill...

twblalock
So what? He's wrong.
meddlepal
That quote gets pulled out of thin air every time this argument comes up and it's pretty silly. I think more specifically, nullable by default might have been the bigger mistake not that null is inherently a mistake.
twblalock
The fact that most of these replies mention esoteric languages that have no traction in industry kind of proves my point.
junke
The mistake is Null being a member of all (reference) types. This is not the case in all languages that have a nil value.

In Common Lisp if you declare an argument to be a string, it is an error to pass NIL. You have to declare the type to be "(or string null)". That's exactly what the Option type provides.

twblalock
That is not how Lisp works. You are just wrong.
junke
> That is not how Lisp works. You are just wrong.

Please tell me how it works, then. I guess you could easily find a counter-example.

First things first. Optional arguments are handled by &OPTIONAL and &KEY parameters, which can be used to provide default values as well as a flag indicating whether the argument is provided. The most used default value is probably NIL. NIL belongs to the SYMBOL, NULL, LIST and BOOLEAN types and their supertypes (ATOM, SEQUENCE, T). If you really want, you can include it in a custom type too. For most uses, the type (OR NULL U) is regarded generally as an optional U. This is consistent with the definition of generalized booleans and lists (a list is an optional cons-cell). In some corner cases, you have to use another sentinel value or use the third binding in &OPTIONAL and &KEY argument bindings. This is hardly a problem if you have a type U such that (TYPEP NIL U) is true, but for most useful cases, (TYPEP NIL U) in fact returns NIL.

Here below I define a function FOO which accepts a string and prints it:

    (defun foo (x) (print x))
The type declaration for FOO is:

    (declaim (ftype (function (string) t) foo))
In Java, such a function will work with null too, because null is an acceptable value for String. In Common Lisp, you have to write a declaration like this (here for a second function, BAR, defined the same way) to allow NIL:

    (defun bar (x) (print x))
    (declaim (ftype (function (or null string) t) bar))
And just to test how it behaves, let's use IGNORE-ERRORS to catch errors and return them as secondary values. Calling the first FOO with NIL signals an error:

    (ignore-errors (foo nil))
    NIL
    #<TYPE-ERROR expected-type: STRING datum: NIL>
    ;; Does not print anything else
The same test case with BAR shows that it accepts NIL:

    (ignore-errors (bar nil))
    NIL
    ;; prints "NIL"
Lisp being Lisp, those checks are likely to be done dynamically, but in some cases, your compiler can determine if a variable will hold NIL and warn you about a conflicting usage. However, how type checks are enforced does not change the argument, namely that NIL is not an appropriate value for all types.
twblalock
Here is my counter-example. It is the console output of Clisp 2.49:

Break 1 [2]> (declaim (ftype (function (string) t) foo))

NIL

Break 1 [2]> (defun foo (x) (print x))

FOO

Break 1 [2]> (foo "hello")

"hello"

"hello"

Break 1 [2]> (foo nil)

NIL

NIL

Break 1 [2]>

There were no errors. The function accepted nil and printed it. You must have some other type restrictions going on than what you mentioned. The output is the same whether I put the declaim statement before or after defun foo.

junke
The behavior is undefined if a value does not match its type declaration, and implementations are free to ignore some declarations, like CLISP. So for a portable approach just add a CHECK-TYPE:

    (defun foo (s)
      (check-type s string)
      (print s))
The main point is that (TYPEP NIL 'STRING) is NIL.
May 20, 2016 · 4 points, 0 comments · submitted by 0xmohit
It means that every single type in the language has one extra value it may contain, 'nil', and your code will crash or behave erratically if it contains this value and you haven't written code to handle it. This has caused billions of dollars in software errors (null dereferences in C/C++, NullPointerExceptions in Java, etc.). See "Null References: The Billion Dollar Mistake" by Tony Hoare, the guy who invented it:

http://www.infoq.com/presentations/Null-References-The-Billi...

A better solution is an explicit optional type, like Maybe in Haskell, Option in Rust, or Optional in Swift. Modern Java code also tends to use the NullObject pattern a lot, combined with @NonNull attributes.

sythe2o0
nil in Go doesn't work that way. Most types cannot be nil.
nostrademons
Interesting, because (reading up on this) value types can not be nil.

How often does typical Go code use values vs. interfaces or pointers? It seems like the situation is pretty similar to modern C++, which also does not allow null for value or reference types (only pointers) and encourages value-based programming. Nil is still a problem there, but less of one than in, say, Java, where everything is a reference.

sythe2o0
In my own experience, nil basically only shows up when I've failed to initialize something (like forgetting to loop over and make each channel in an array of channels), or when returning a nil error to indicate a function succeeded. I've never run into other interfaces being nil, but I also haven't worked with reflection and have relatively little Go experience (~6 months).

The code that I've written regularly uses interfaces and pointers, but I'd guess 80% works directly with values.

lobster_johnson
But a bunch of types you do expect to work can: slices, maps and channels.

  var m map[string]bool
  m["foo"] = true  // nil map: panic on write

  var a []string
  a[0] = "x"  // nil slice (length 0): index out of range panic

  var c chan int
  <-c  // nil channel: blocks forever
This violates the principle of least surprise. Go has a nicely defined concept of "zero value" (for example, ints are 0 and strings are empty) until you get to these.

The most surprising nil wart, however, is this ugly monster:

    package main

    import "log"

    type Foo interface {
    	Bar()
    }
    type Baz struct{}

    func (b Baz) Bar() {}

    func main() {
    	var a *Baz = nil
    	var b Foo = a
    	fmt.Print(b == nil)  // Prints false!
    }
This happens because interfaces are indirections. They are implemented as a pointer to a struct containing a type and a pointer to the real value. The interface value can be nil, but so can the internal pointer. They are different things.

I think supporting nils today is unforgivable, but the last one is just mind-boggling. There's no excuse.

sythe2o0
I don't think using nil to represent uninitialized data is a major issue-- if it were possible to catch uninitialized but queried variables at compile-time, that could be an improvement, but we want to give the programmer control to declare and initialize variables separately.

I agree the second case is a little silly.

aninhumer
It's perfectly possible to separate declaration and initialisation without using a null value.
JanKanis
I don't think you're right that interfaces are implemented as a pointer to a struct. The struct is inline like any other struct, and it contains a pointer to a type and a pointer to the value, like `([*Baz], nil)` in your example. The problem is that a nil interface in Go is compiled to `(nil, nil)` which is different.

That still makes this inexcusable of course.

lobster_johnson
You're right, it was lurking in the back of my mind that it must be on the stack entirely.
sacado2
Besides the fact that you're wrong (structs, arrays, bools, numeric values, strings and functions can't be nil, for instance), I'm always a little puzzled when I read the argument that "nil costs billions of $".

First, most of the expensive bugs in C/C++ programs are caused by undefined behaviors, making your program run innocently (or not, it's just a question of luck) when you dereference NULL or try to access a freed object or the (n+1)th element of an array. "Crashing" and "running erratically" are far from being the same. If those bugs were caught up-front (as Java or Go do), the cost would be much less. The Morris worm wouldn't have existed with bounds checking, for instance.

Second point, since we're talking about bounds checking: why is nil such an abomination but trying to access the first element of an empty list is not? Why does Haskell let me write `head []` (and fail at runtime)? How is that different from a nil dereference exception? People never complain about this, although in practice I'm pretty sure off-by-one errors are much more frequent than nil derefs (well, at least in my code they are).

tome
> I'm always a little puzzled when I read the argument that "nil costs billions of $".

$1bn over the history of computing is about $2k per hour. I would not be astonished if a class of bugs cost that much across the industry.

> most of the expensive bugs in C/C++ programs are caused by undefined behaviors

Sure, there are worse bugs. Why, then, waste our time tracking down trivial ones?

> Why does Haskell let me write `head []` (and fail at runtime) ?

Because the Prelude is poorly designed.

> How is that different from a nil dereference exception ?

It's not different, really. It's a very bad idea.

> People never complain about this

Yes we do. We complain about it all the time. It is, however, mitigateable by a library[1] (at least partially), whereas nil is not.

[1] http://haddock.stackage.org/lts-5.4/safe-0.3.9/Safe.html#v:h...

sacado2
> $1bn over the history of computing is about $2k per hour. I would not be astonished if a class of bugs cost that much across the industry.

It's not about knowing whether it's $1bn, or 10bn, or just a few millions. The question is to know whether fighting so hard to make these bugs (the "caught at runtime" version, not the "undefined consequences" version) impossible is worth the cost or not.

Can you guarantee that hiring a team of experienced Haskell developers (or pick any strongly-typed language of your choice) will cost me less than hiring a team of experienced Go developers (all costs included, i.e. from development and maintenance cost to loss of business after a catastrophic bug)? Can you even give me an example of a business that lost tons of money because of some kind of NullPointerException?

tome
Why do you think it costs very much to prevent null pointer exceptions?
sacado2
I think it has consequences for the design of the language, making it more complex and more prone to "clever" code, i.e. code that is harder to understand when you haven't written it yourself (or you wrote it a rather long time ago). I've experienced it myself: I've spent much more time in my life trying to understand complex code (complex in the way it is written) than correcting trivial NPEs.

That aside, it is less easy to find developers proficient in a more complex language, and it is more expensive to hire a good developer and give them time to learn that language.

I'm not sure it costs "very much", though. I might be wrong. But that's the point: nobody knows for sure. I just think we all lack evidence about those points; although PL theory says avoiding NULL is better, there have been no studies to actually prove it in a real-world context. Start-ups using Haskell/OCaml/F#/Rust and the like don't seem to have an indisputable competitive advantage over the ones using "nullable" languages, for instance, or else the latter would simply not exist.

aninhumer
>fighting so hard to make these bugs ... impossible is worth the cost or not.

In this case the solution is trivial, just don't include null when you design the language. It's so easy in fact, that the only reason I can imagine Go has null, is because its designers weren't aware of the problem.

sacado2
Not including null has consequences, you can't just keep your language as it is, remove null and say you're done.

What's the default value for a pointer in the absence of null? You can force the developer to assign a value to each and every pointer at the moment they are declared, rather than rely on a default value (and the same thing for every composite type containing a pointer), but then you must include some sort of ternary operator when initialization depends on some condition, but then you cannot be sure your ternary operator won't be abused, etc.

You can also go the Haskell way, and have a `None` value but force the user to be in a branch where you know for sure your pointer is not null/None before dereferencing it (via pattern matching or not). But then again you end up with a very different language, which will not necessarily be a better fit to the problem you are trying to solve (fast compile times, easy to make new programmers productive, etc.).

zzzcpan
But null pointers are not really a problem in Go, are they? The problem only exists in lower-level languages.

Go, as a language, is a pretty good one. It's Go's standard library that's not, especially "net" and other packages involving I/O.

This article reminds me of Sir Charles Antony Richard Hoare (aka Tony Hoare)'s InfoQ presentation [1]. What the article describes is part of that $1 billion in damages.

For people who don't know Tony, he is a British computer scientist, probably best known for the development in 1960, at age 26, of Quicksort. He also developed Hoare logic, the formal language Communicating Sequential Processes (CSP), and inspired the Occam programming language. [1][2]

[1] http://www.infoq.com/presentations/Null-References-The-Billi...

[2] https://en.wikipedia.org/wiki/Tony_Hoare

As for null in general, you can hear it from the horse's mouth here: http://www.infoq.com/presentations/Null-References-The-Billi...

There are a number of ways to approach this topic, so I'll just give you one: in languages with more advanced static type systems, you try to encode as much semantic information as possible in the type. As you've said, the idea of null can be useful, so it deserves a place in the type system. You want to separate things that may be null from things that should never be null. This is because not-null is by far the common case. Allowing everything to be nullable by default optimizes for the lesser-used semantic, which is where errors with null come in: you assume that something isn't null, when it actually is.

filwit
I agree the concept of 'non-nil' vars is very useful (and we have that in Nim), but I'm not entirely convinced by the rest of that argument. Namely, I don't agree that nil is rare enough to justify the verbosity Rust uses for it. Non-nil vars may be seen more often, but that doesn't mean nil vars aren't also often used, either. In Nim, both nil and non-nil vars are roughly equally within reach, while in Rust non-nil vars are significantly easier to work with. You may see that as a positive argument for Rust's safety (and you may be right for some domains), but I see it as more of a negative argument for Rust's practicality.
steveklabnik
Fair enough. My argument here is more general than Rust itself, it's relevant to all languages with an Option type and no null. The verbosity can of course vary by language.
beagle3
What's the rust name/syntax for non-nil vars?
steveklabnik

    let foo = 5; // cannot be null
    let bar = Some(5); // technically also can't be null, but could be None
                       // instead of Some(val)
`foo` has the type `i32` here, and `bar` has the type `Option<i32>`.
Dewie3
> I agree the concept of 'non-nil' vars is very useful (

non-zebra numbers[1] are also very useful. But why have a "non-zebra number" when I can just have plain numbers?

[1] A "number" which is either a number, or a zebra

filwit
...because zebras are not a universally useful modelling tool for programmers the way references are. Thus, the absence of a reference, i.e. nil, also becomes a useful, commonly used modelling tool. If we all wrote software using African wildlife metaphors, 'non-zebra' might then be just as useful.
Dewie3
> ...because zebra's are not a universally useful modelling tool to programmers like references are.

References are not the zebras. Nil-references are.

We might as well say that the underlying numbers that references are represented by are useful modelling tools, in some circumstances. That doesn't mean that you want pointer arithmetic on references all the time.

seanwilson
I think that 1) the vast majority of variables don't need to be nullable and 2) nullable variables are a source of common runtime errors, is a solid argument that not nullable by default is a good idea.

OCaml and Haskell don't even have the concept of null. Mutable state should be discouraged in general as it makes code harder to reason about and more buggy.

pcwalton
> Namely, I don't agree that nil is rare enough to justify the verbosity Rust uses for it. Non-nil vars may be seen more often, but that doesn't mean nil vars aren't also often used, either.

It's not verbose. "Option" is 6 characters. ".map" is 4.

> In Nim, both nil and non-nil vars are roughly equally within reach, while in Rust non-nil vars are significantly easier to work with.

Option values are really easy to work with. Just use map or unwrap if you don't care about handling the null case. If you do care about handling it (which you should, after all!) the code using "if let" is the same as the equivalent "if foo == null".

filwit
> It's not verbose. "Option" is 6 characters. ".map" is 4.

I just want to note that verbosity isn't just about symbol length, but also about operator noise and the number of available or required commands used to achieve a goal. Just counting these characters isn't very relevant, and isn't even the best Rust can do (as someone pointed out you can use 'Some()' to get an Option var, which is only 4 chars).

That said, I agree this is rather subjective, and can't be well compared outside the context of the rest of the language.

dragonwriter
> (as someone pointed out you can use 'Some()' to get an Option var, which is only 4 chars)

Some() is 6 chars.

filwit
I was measuring with pcwalton's ruler.
Apr 27, 2015 · 2 points, 0 comments · submitted by pykello
Aug 21, 2014 · 4 points, 0 comments · submitted by journeeman
Tony Hoare called null references his "billion dollar mistake": http://www.infoq.com/presentations/Null-References-The-Billi....
Null boogeyman? Its inventor doesn't seem to think so:

http://www.infoq.com/presentations/Null-References-The-Billi...

http://www.infoq.com/presentations/Null-References-The-Billi...

I often see code such as:

    public Object getObject() {
      return this.object;
    }
I favour lazy initialization combined with self-encapsulation:

    private Object getObject() {
      Object o = this.object;

      if( o == null ) {
        o = createObject();
        this.object = o;
      }

      return o;
    }
This is thread-safe and allows injecting new behaviour via polymorphism (overriding the "create" methods in a subclass), which adheres to the Open-Closed Principle. It also eliminates the possibility of accidentally dereferencing nulls.

It could even be implemented as a language feature:

    public class C {
      nullsafe String name;

      public void greeting() {
        System.out.printf( "Hello, %s\n", getName() );
      }
    }
Where the "nullsafe" keyword automatically generates a private accessor and corresponding protected creation method.
sitkack
Bah! I know the JVM is now safe for double-checked locking, but http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLo...

Just put a `synchronized` on it and call it good. The JVM is so damn fast, _nearly_ all the stuff in the Haggar book (Practical Java) is not applicable anymore.
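A minimal sketch of that suggestion, applied to the accessor above (createObject() as in the original):

    public class LazyHolder {
        private Object object;

        // Coarse but correct: synchronized makes the check-then-create atomic
        // and guarantees visibility of the assigned field.
        private synchronized Object getObject() {
            if (object == null) {
                object = createObject();
            }
            return object;
        }

        protected Object createObject() {
            return new Object();
        }
    }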

My biggest beef with the Go typesystem is that they didn't get rid of nil. Tony Hoare (the guy who invented null) has acknowledged they were his "billion dollar mistake" [1], common practice in Java and Haskell is moving away from them, and yet Go included them anyway - in a language that's supposed to be robust because you're supposed to handle every case. The Maybe (or @Nullable) type is a much better idea, because it flips the default from "any variable may be null" to "only variables explicitly declared as such may be null".

[1] http://www.infoq.com/presentations/Null-References-The-Billi...

frou_dh
It's exacerbated by nil being reused as a value for things that aren't even pointer types, like slices, channels and especially interface "pairs"[1]

[1] (<NoType> . nil) vs (T . nil) is pretty weird: http://play.golang.org/p/fmk-72OYkO

shurcooL
There is no such thing as a Go variable with no type. They all have a type (could be interface{}) and a value (could be nil).

    package main
    
    import (
    	"fmt"
    
    	"github.com/davecgh/go-spew/spew"
    )
    
    func main() {
    	var x interface{}
    	var y chan (int)
    	spew.Dump(x)
    	spew.Dump(y)
    
    	var z interface{} = y
    	spew.Dump(z)
    
    	fmt.Println(x == z)
    	fmt.Println(y == z)
    }
    
    // Output:
    (interface {}) <nil>
    (chan int) <nil>
    (chan int) <nil>
    false
    true
frou_dh
Doesn't an interface value have both a static type and a dynamic type and in a case like x its dynamic type is "nothing"?

I didn't mean that this is necessarily bad, just that it seems weird for "nil" to be overloaded beyond pointers.

shurcooL
Yeah. From the Go spec:

"The static type (or just type) of a variable is the type defined by its declaration. Variables of interface type also have a distinct dynamic type, which is the actual type of the value stored in the variable at run time. The dynamic type may vary during execution but is always assignable to the static type of the interface variable. For non-interface types, the dynamic type is always the static type."

AYBABTME
In 97.5% of cases, the `nil` value of Go types follows the NullObject pattern.

There's really just nil pointers that are still equivalent to `null`, and their absence would be an abnormality, as there's no reason why a pointer couldn't have a nil representation.

lmm
If one is to write robust programs it is useful to automatically enforce that there are no execution paths that set particular pointers to null.
skrebbel
Just because "NullObject pattern" ends with "pattern" doesn't mean that it's usually a good idea.
kkowalczyk
Hoare didn't invent the 0-valued pointer. It was there since the beginning of time. Or at least the beginning of CPUs.

People talk about getting rid of nil like it's actually possible.

It's not.

If you have pointers, they sometimes have to start their life uninitialized (i.e. with a value of 0), hence nil pointers.

As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle the nil value by wrapping it in some wrapper.

Guess what - if you have that kind of discipline, you can do the same in C++.

Why aren't people doing that?

Because it comes at a cost. There's a cost to the wrapper and when it comes down to it, you still have to write the code to handle the nil pointer whether you're using a wrapper or a raw pointer.

It just doesn't buy you much.

Finally, fixing crashes caused by dereferencing a null pointer is downright trivial. Spend a few weeks chasing memory corruption caused by multi-threading and you'll come to the conclusion that fixing null pointer crashes (which can be done just by looking at the crashing callstack most of the time) is not such a big deal after all.

nostrademons
As others have pointed out, I'm not talking about nil as the zero-valued pointer at the hardware level. I'm talking about nil as the additional member of every type in the language's typesystem. A typesystem is an abstraction over the hardware, designed to catch programmer errors and facilitate communication among programmers. There's no reason it has to admit every possible value that the hardware can support.

And people are applying that sort of discipline - see the NullObject pattern in any OO language, or the @Nullable/@NotNull annotations in Java, or !Object types in Google Closure Compiler. The thing is, they have to apply it manually to every type, because the type system assumes the default is nullable. That makes it an inconvenience, which makes a number of programmers not bother.
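For reference, a small sketch of the NullObject pattern mentioned above (the Log interface and NoOpLog are made-up names): callers that don't want logging pass a NoOpLog instead of null, so the consuming code never needs a null check.

    interface Log {
        void write(String message);
    }

    // The "null object": a real Log that safely does nothing.
    final class NoOpLog implements Log {
        public void write(String message) { /* intentionally empty */ }
    }

    class Job {
        private final Log log;

        Job(Log log) { this.log = log; }          // e.g. new Job(new NoOpLog())

        void run() { log.write("job started"); }  // no null check needed
    }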

I'll agree that null pointers are comparatively easy to track down compared to memory corruption caused by multi-threading. Threads are a broken programming model too, and if you want a sane life working with them you'll apply a higher-level abstraction like CSP, producer-consumer queues, SEDA, data-dependency graphs, or transactions on top of them.

dietrichepp
> Threads are a broken programming model too

More specifically, threads with shared mutable state. If state is never simultaneously shared and mutable then a host of problems disappear.

nostrademons
And even more specifically - threads with locks. Shared mutable state is okay as long as any state-swapping operations are atomic, eg. with STM or if you build up the state in one thread, switch it with an atomic pointer swap, and don't let other threads mutate the state.
pcwalton
Threads with locks are fine as long as the compiler enforces that you take the lock before mutating the data (e.g. Rust). :)
nostrademons
Annotalysis can do this for C++ as well. The problem is that you need to enforce an ordering on locks to avoid deadlocks. That works fine if they're all within one module. It doesn't work at all if you have to coordinate callbacks across threads from different third-party libraries.

The usual solution I've seen given for this is "Don't invoke callbacks when holding a lock." This is not a viable solution for most programs.

twic
> Threads are a broken programming model too

Oh come off it. Threads are in no sense 'broken' - compared to CSP or actors, they just give you a larger set of things you can write. Some of those things are bugs. Others are very useful. For example, a disruptor:

http://lmax-exchange.github.io/disruptor/

Nulls are broken because they let you write bugs, but don't let you write anything you couldn't write with options or whatever.

waps
Really? How would you even encode Maybe or Option in Java, if you can't use null anywhere?

The problem is that Maybe doesn't work without Algebraic Data Types.

anon4
Oh come on, that's trivial

  public interface Maybe<T> {
      boolean hasValue();
  }

  public final class Just<T> implements Maybe<T> {
      public final T value;
      public Just(T value) {
          this.value = value;
      }
      public boolean hasValue() { return true; }
  }

  public final class Nothing<T> implements Maybe<T> {
      public boolean hasValue() { return false; }
  }
You can even get rid of hasValue (but then you need to pay for instanceof each time); or of the Nothing class and make Maybe a class. You may ask what the value of Just.value is before the constructor runs - the value is a machine null and if you somehow manage to access it before the ctor runs, that's a NullPointerException; or what writing "Type varname;" in a function would do - that would be perfectly legal, but you won't be allowed to use it if the compiler can't prove you've initialised it first (which it does right now).
waps
Problems :

1) How do you get at the value itself ? (Casting ? That's bad)

2) How do you prevent in Maybe<Integer> x; x == null ?

3) How do you prevent someone from extending Maybe<T> ? e.g.

    public final class LetsHaveFun<T> implements Maybe<T> {
      public boolean hasValue() { throw new RuntimeException("Can't touch this"); }
    }
4) (you need a null check in the constructor)

5) (I dislike the autoboxing this uses)

anon4
1) Yes, casting. You need to do casting in Haskell too, it's just hidden for you by pattern-matching

2) It's for a hypothetical Java implementation that doesn't have null

3) If you really want to, make it an abstract class and do a check in the constructor that this.getClass() == Nothing.class or Just.class.

4) see 2)

5) well if you're using primitive types they are non-nullable already

Edit: to clarify, I don't expect someone to use it for Java today, it's what I would put in the standard library if Java didn't have a null in the first place.

twic
> 1) How do you get at the value itself ? (Casting ? That's bad)

You could add the usual map method to the Maybe type - a Maybe<T> can take a Function<T, U>, and returns a Maybe<U>. If it's None, it doesn't call the function, and just returns None; if it's Some, it calls it with the value, and wraps the result in a new Some. If you want to do side-effects conditionally on whether the value is there, you just do them in the Function and return some placeholder value. You could write a trivial adaptor to take an Effect<T> and convert it to a Function<T, Void>, etc. (See the sketch at the end of this comment.)

> 2) How do you prevent in Maybe<Integer> x; x == null ?

You're right that using a Maybe does not exclude the ability to use nulls. Nobody can deny that. The point i was making is that there is nothing useful that you can do with nulls that you cannot do with a Maybe instead.

> 3) How do you prevent someone from extending Maybe<T> ? e.g.

You can trivially control extension by making Maybe an abstract class, giving it a private constructor, and making Some and None static inner classes of it. It's a kludge, but it works!
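Sketching what that might look like on top of the Maybe/Just/Nothing classes from upthread (Function is java.util.function.Function; I've dropped the public modifiers so it compiles as a single file, and the implementation details are mine):

    import java.util.function.Function;

    interface Maybe<T> {
        boolean hasValue();

        // Transform the value if present; propagate Nothing otherwise.
        <U> Maybe<U> map(Function<T, U> f);
    }

    final class Just<T> implements Maybe<T> {
        public final T value;
        public Just(T value) { this.value = value; }
        public boolean hasValue() { return true; }
        public <U> Maybe<U> map(Function<T, U> f) { return new Just<>(f.apply(value)); }
    }

    final class Nothing<T> implements Maybe<T> {
        public boolean hasValue() { return false; }
        public <U> Maybe<U> map(Function<T, U> f) { return new Nothing<>(); }
    }
With that, new Just<>(21).map(x -> x * 2) gives a Just(42), while mapping over a Nothing just gives another Nothing, so a whole chain of maps never needs a null check.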

derekchiang
I'm surprised no one has mentioned Rust[1] yet. It's a systems programming language that has managed to get rid of null pointers by carefully controlling the ways in which a value can be constructed[2].

[1] http://www.rust-lang.org/

[2] https://github.com/mozilla/rust/wiki/Doc-language-FAQ#how-do...

Jare
I don't think Rust is such a shining example of a solid language in this area, at least not yet. For example: http://stackoverflow.com/a/20704252/626867
azth
The SO issue you linked to has nothing to do with null pointers.
pcwalton
That has nothing to do with null pointers, and that was agreed to be changed today.
Dewie
And IIRC the compiler translates usages of None to null, so any complaint about inefficiency probably does not apply in this case, either.
dietrichepp
This is actually quite common in languages which support ADTs—you stick tags in the pointer to discriminate between subtypes of a union.
JoshTriplett
> If you have pointers, they sometimes have to start the life uninitialized (i.e. with a value of 0) hence nil pointers.

Right, and those pointers must then be declared as potentially null. Or more generally, potentially non-present values of any type need a type like Maybe T.

> As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle nil value by wrapping it in some wrapper.

That's exactly the point: you can tell from a glance at any type whether it can be null or not, and you can only look at the value of something that's guaranteed to not be null; otherwise, you have to handle the null case first. Forcing that to happen at compile time via the type system is far better than dereferencing a null pointer at runtime.

eru
> If you have pointers, they sometimes have to start the life uninitialized (i.e. with a value of 0) hence nil pointers.

Why do your variables have to start life uninitialized?

> Why aren't people doing that? Because it comes at a cost. [...]

Not a runtime cost, though. This can be handled purely by the typesystem at compile time.

Sharlin
> Guess what - if you have that kind of discipline, you can do the same in C++.

Well, ironically, C++ already has non-nullable pointers. They're called references and they work extremely well.

anon4

  int& toref(int* ptr) {
      return *ptr;
  }
  //snip
  int& i = toref(0);
  i = 5; // segfault
One thing I kind of appreciate at times about C++ (or at least about certain implementations) is the fact you can call methods on null pointers and you can check in the method if it's called on a null pointer. I.e.

  class A {
  public:
    int foo() {
        return this == nullptr ? 1 : 2;
    }
  };

  //....
  A* a = 0;
  std::cout << a->foo();
Which allows you to make classes that work just fine even if you use a null pointer.
Sharlin
> int& toref(int* ptr)

Yes, it's theoretically possible. The point is, you need to explicitly do evil things to get "null references". The language cannot and is not meant to protect you from yourself.

> One thing I kind of appreciate at times about C++ (or at least about certain implementations)

Yes, certain implementations indeed. It is formally undefined behavior, so anything at all could happen if you do that; the most straightforward and efficient implementation just happens to "work".

anon4
> The point is, you need to explicitly do evil things to get "null references". The language cannot and is not meant to protect you from yourself.

No, you don't need to do evil things at all. Any C API that returns a pointer which you then need to pass to a C++ function that takes a reference is a possible source of failure if you forget to check your pointer. It can happen quite easily by mistake. The language cannot in any way guarantee that you won't make a reference from a null pointer. Taking a reference as an argument does at least alert people that you're not expecting to be passed a null as an argument to your function.

Sharlin
A fair point. Still, only having to check for nulls on certain API boundaries is much less work than the alternative, and it's easy to train oneself to automatically spot "dangerous" points where a pointer-to-reference conversion is done.
jdmichal
> If you have pointers, they sometimes have to start the life uninitialized (i.e. with a value of 0) hence nil pointers.

Maybe at the machine level. But there's nothing that stops a programming language a few levels above machine from requiring that every pointer be initialized with a reference.

> As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle nil value by wrapping it in some wrapper.

The entire point is that a large majority of APIs don't WANT to accept or handle NIL, but have to, because the default is to allow it. And in some languages, such as Java, the only way to extend the type system is with reference types, making it impossible to ever not have to handle NIL. By reversing this decision, it becomes possible to specify both allowance and disallowance of NIL-valued parameters. Why you would ever argue against such expression is beyond me.

jaekwon
> Maybe at the machine level. But there's nothing that stops a programming language a few levels above machine from requiring that every pointer be initialized with a reference.

What if the pointer is to a large structure (expensive to initialize), and I want a function that returns a pointer, which may fail to initialize?

Without a nil, developers would create structures that have a "valid" field. Nil just makes that more convenient, and the way Go does it is pretty good -- you can't cast it the same way you can in C/C++.

lucian1900
With a Maybe/Option type, you are forced to always deal with the possibility of Nothing/None.

And even better, you can use monadic bind to chain together several actions on possibly-nullable things and get either a value or a null at the end.
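In Java terms (hypothetical getters, using Optional as the option type and flatMap as the bind), that chaining looks like this:

    import java.util.Optional;

    class BindChain {
        record Address(Optional<String> city) {}
        record User(Optional<Address> address) {}

        static Optional<User> findUser(int id) {
            return Optional.of(new User(Optional.of(new Address(Optional.of("Oxford")))));
        }

        public static void main(String[] args) {
            // Either a city or an empty Optional at the end; no null checks in between.
            Optional<String> city = findUser(1)
                .flatMap(User::address)
                .flatMap(Address::city);
            System.out.println(city.orElse("<unknown>"));
        }
    }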

waps
An argument can be made that this merely masks the problem: it is certainly possible for a Haskell function to crash (and therefore not return anything), no matter its declared type:

reallyfun xs = reallyfun xs ++ [ 1 ]

Note that this will OOM, not just run infinitely long. There are plenty of ways you can cause these sorts of issues. So you can't trust Haskell functions to always return their declared type either.

The million-dollar question: should your program be ready for this? (In the case of a database: ideally, yes it should, and it's in fact possible to do just that.)

That's the problem with abstractions, like the "always-correct-or-null" pointers of Java: they're leaky. A type system, unless reduced to pointlessness, can't really be enforced fully. Haskell ignores failure modes, like memory and stack allocation, jumping to other parts of the program, reading the program, ... all of which can in fact fail.

Thinking about this gives one new appreciation for try ... except: (catching an unspecified exception). It's not necessarily worse than a pure function. Good luck defending that position to mathematicians though.

lobster_johnson
That misses the point, I think.

Of course functions can cause a program to crash, and there are all sorts of bad things that happen that cannot be caught at the language level; Haskell doesn't save you from memory corruption, for example.

But those things don't violate the language's guarantees about type correctness. A crashing program simply ceases to run; it's not like an Int-returning function can somehow crash and return a bogus Int value.

In this sense, Haskell is no different from languages like Java or Go. It's completely orthogonal to the null problem.

waps
This is the mathematician's argument. It boils down to the unfairness of having to execute programs on real hardware with real constraints. Well, that doesn't work in the real world obviously. Especially memory allocations WILL fail, so, frankly, deal with it. Haskell makes this impossible, and therefore throws real-world correctness out of the window because it makes mathematical correctness so much messier.

Your assertion that this can't be caught at the language level is wrong: checking malloc'ed pointers for NULLness will do it. In Java, catch OOM exceptions. This error doesn't have to cause your programs to crash. Neither does an infinite loop ("easy" to catch in Java). Given more tools, you can write programs that are safe from some measure of memory corruption.

The real world is messy. Pretending it's not doesn't fix that, and nobody but mathematicians are surprised at all. End result is simple : your programs will crash after you've "proven" it can't crash. Running everything in "just-large-enough" VMs has massively exacerbated CPU and OOM error conditions, at least from where I'm sitting. I'd expect further cloud developments to make it worse.

So the type system only guarantees correctness if the following conditions hold, amongst other things:

1) infinite available memory (infinite in the sense that it is larger than any amount the program requests; Haskell provides zero guarantees that limit memory usage, so ...)

2) infinite available time for program execution (again, for the normal definition of infinite)

3) no infinite loops anywhere in your program (more problematic in haskell because every haskell tutorial starts with "look, lazy programming means infinite loops terminate in special case X", and of course it only works in special cases)

Note that this is only the list of "hard" failures. There are factors that can blow up the minimum execution time of your program (e.g. VM thrashing, stupid disk access patterns) that I'm not even considering. In practice, these "soft" failures, if bad enough, cause failure of the program as well.

And only then do we get to the conditions that people keep claiming are the only conditions for haskell to work:

4) no hardware malfunctions

jdmichal
... And all this has what to do with having non-nullable references be the default, with a wrapping Option or Maybe type for otherwise?
waps
The point was that there is no real-world type system that can guarantee non-null pointers (not even Haskell's). You cannot do allocation reliably in the real world, and if your type system guarantees that, it is simply wrong.
jdmichal
You are arguing a useless point. Yes, in a real computer, memory can be randomly flipped by solar radiation and voilà, your pointer is now actually NIL. Or any of the other various failure modes you've mentioned. The point being that those are failure modes, not normal operations. Once a system reaches a failure mode, nothing can be guaranteed, not even that it adds 1 and 1 correctly, because who's to say that the instructions you think you're writing are being written, read, and processed correctly? The only solution is to down the box, reset the hardware, and hope that whatever happened wasn't permanent damage.

My point being, you cannot invoke catastrophic system failure as an argument against a static-time type system and call it an argument, simply because that's an argument against any programming construct at all. Linked lists? But what if you can't allocate the next node... Better to not use them at all!

waps
You're trying to change the topic. I'm not talking about unreliable hardware. The two main things I contend WILL happen to production Haskell programs are:

1) OOM (both kinds: stack and heap).

2) Functions taking longer than the time they effectively have (i.e. to prevent DOS conditions).

Both of these are guaranteed by the Haskell type system to never happen, and you will hit them in practice. (guaranteed may be the wrong word, maybe required would be better).

The C or C++ type system does not guarantee allocation will succeed on the heap, and has deterministic results if it does fail, meaning you can react to that in a clean manner (and make sure it doesn't interfere with transactions, for example). With extra work you can guarantee the availability of stack space for critical methods too. Java guarantees that both stack and heap allocation failures will result in deterministic progress through your program.

These are not serious enough to merit being called "catastrophic system failure". They are not. Don't tell me you haven't hit both these conditions in the last month.

That's all I'm saying.

lmm
If you want to be able to return "pointer or null", any decent language will let you do that, sure. But it's good to have the option of saying "this function will never return null".

Think of it this way: functions should document whether (and under what conditions) they return null, right? What if the compiler could check the accuracy of that documentation, so it would be an error to return null from a function that said it didn't return null, and a warning to document a function as possibly-returning-null when it never did? (And once you had that, surely you'd want a warning when you accessed a possibly-null thing without checking whether it was actually null?)
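Tools along these lines already exist for Java (the Checker Framework's nullness checker, for one, if I remember right). A hedged sketch of the idea, with hypothetical @Nullable/@NonNull annotations standing in for whatever a real tool ships with:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Target;

    // Stand-in annotations; a real checker provides its own.
    @Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD})
    @interface Nullable {}

    @Target({ElementType.METHOD, ElementType.PARAMETER, ElementType.FIELD})
    @interface NonNull {}

    class Checked {
        // Documentation the tool can verify: this never returns null...
        @NonNull static String greeting() {
            return "hello";               // returning null here would be flagged
        }

        // ...and this might.
        @Nullable static String findNickname(String user) {
            return user.equals("ada") ? "The Countess" : null;
        }

        public static void main(String[] args) {
            String nick = findNickname("bob");
            // Plain javac accepts a bare nick.length() here; a nullness checker
            // would warn that a possibly-null value is dereferenced unchecked.
            System.out.println(nick == null ? "<no nickname>" : nick);
        }
    }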

Tloewald
So basically you want the compiler to solve the halting problem for every function it compiles.
lmm
Ideally yes. But keeping track of nullability is not impossible, it's not even hard, as any number of existing languages with such checking prove (some of which have very performant compilers).
Oct 21, 2013 · taliesinb on The Reliability of Go
Unfortunately, by insisting that all data start in a zeroed state (as opposed to forbidding the use of uninitialized data, as Rust does), Go makes the same "Billion Dollar Mistake" that language designers have been making for decades:

http://www.infoq.com/presentations/Null-References-The-Billi...

mseepgood
And Dijkstra considered goto harmful. He wasn't right either. You can be sure that the Go designers were aware of Hoare's opinion.
taliesinb
Hoare only gave this talk in 2009, whereas Go development started in 2007. Did Hoare express this opinion publicly (or privately, to the Go developers) before then? I can't easily find any indication that he did.

I can imagine goto makes region analysis hellishly more difficult, and region analysis is what gives Rust the ability to statically reason about reference lifetimes, which makes it practical to remove null references from the language.

So on the Rust view, goto is even more antithetical to safe code than it was on Dijkstra's!

azth
Indeed. Plus, I have never read a convincing response from the Go authors as to why they included null in the type system.
burntsushi
It's been hashed out over and over again on the mailing list. Their stance boils down to language design cohesion. In particular, avoiding null in your type system while preserving orthogonality in your language design would require a completely different language than the one that Go embodies. (This is in stark contrast with, say, the Haskell community, which will add any new feature produced by original research.)

If you dig through the mailing list posts, avoiding null in the type system can be difficult to reconcile with "zero values", which each type has. More subtly, avoiding null in your type system might require adding support for sum types, which conflicts in weird ways with Go's interfaces.

I think a lot of people tend to assume that null should always be eliminated from type systems because doing so can be done for free. But I disagree that it can be done for free, and my evidence is in the aforementioned mailing list posts.

The other reason why I suspect we'll never see the elimination of null in Go's type system is that it just isn't that big of a source of bugs. Anecdotally, I very rarely see my program crash because of a nil pointer error, even while doing active development. I could hypothesize as to why this is, but I'll just leave you with this thought: the type system isn't the only thing that can reduce the occurrence of certain classes of bugs.

pcwalton
I never found the conflicts with interfaces issue that Russ Cox raised to be convincing. Just require that the sum type be destructured before calling methods on it. This is what Rust and Scala, both of which have something akin to Go's interfaces, do.

The bigger issue is that having a zero value for every type does indeed conflict with not having null pointers. I feel that pervasive zero values box the language design in so heavily that the convenience they add doesn't pull its weight, but Go's designers evidently disagree. In any case, Go's language semantics are so based around zero values (e.g. indexing a nil map returns zero values instead of panicking for some reason) that they can't be removed without massive changes to the entire semantics.
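
A rough sketch of how that trade-off looks from the other side, using Rust only because it is the other language named in this thread (the struct below is invented for illustration): there is no implicit zero value, so default values are opted into explicitly and the "absent" case is spelled out as part of the type.

    #[derive(Default, Debug)]
    struct Config {
        name: String,          // default: ""
        retries: u32,          // default: 0
        timeout: Option<u32>,  // default: None -- the absent case is explicit
    }

    fn main() {
        let c = Config::default();   // zero-like values are opt-in, not automatic
        println!("{:?}", c);

        // An absent value has to be handled (or given a fallback) explicitly:
        let effective = c.timeout.unwrap_or(30);
        println!("timeout = {}", effective);
    }
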

burntsushi
From the Go FAQ:

    > We considered adding variant types to Go, but after discussion decided to 
    > leave them out because they overlap in confusing ways with interfaces. What 
    > would happen if the elements of a variant type were themselves interfaces?
It's not that there isn't a way to implement sum types in Go, it's that they aren't happy with how they interact with interface types. (That's how I interpret it, anyway.) I'm sure you and I can conceive of reasonable ways for them to coexist, but simply coexisting isn't enough for the Go devs.

> The bigger issue is that having a zero value for every type does indeed conflict with not having null pointers.

I agree that it is indeed the bigger issue. Personally, I make use of the "zero value" feature a lot and very infrequently run into null pointer errors while programming Go. So that particular trade off is clear for me. (I certainly miss sum types, but there is a bit of overlap between what one could accomplish with sum types and what one can accomplish with structural subtyping. Particularly if you're willing to cheat a little. This alleviates the absence of sum types to some extent. Enough for me to enjoy using Go, anyway.)

OT: I love the work you are doing on Rust. :-)

pcwalton
That FAQ entry and associated mailing list posts were what I was referring to as unconvincing. I don't see how sum types and interfaces are in conflict: just require destructuring before calling interface methods, as Scala does.
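
For what that proposal looks like in practice, here is a toy Rust sketch (the trait and types are invented for illustration): the variant's payload is a trait object, and the compiler requires destructuring before any method call, so there is no forgotten-nil path.

    trait Shape { fn area(&self) -> f64; }

    struct Circle { r: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    // A variant type whose payload is a trait object ("interface").
    enum MaybeShape {
        NoShape,
        SomeShape(Box<dyn Shape>),
    }

    fn describe(s: MaybeShape) -> String {
        // The sum type must be destructured before area() can be called.
        match s {
            MaybeShape::SomeShape(shape) => format!("area = {:.2}", shape.area()),
            MaybeShape::NoShape => "no shape".to_string(),
        }
    }

    fn main() {
        println!("{}", describe(MaybeShape::SomeShape(Box::new(Circle { r: 1.0 }))));
        println!("{}", describe(MaybeShape::NoShape));
    }
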
Dylan16807
He considered using goto instead of structured programming harmful. Which was right. People love to cite that article as a strawman, ignoring the actual content and context.
pjmlp
> And Dijkstra considered goto harmful. He wasn't right either.

Yes he was. I don't even remember when I last used goto outside of Assembly programming.

munificent
> And Dijkstra considered goto harmful. He wasn't right either.

He was right. The kind of goto he specifically and clearly elucidates in his paper has utterly vanished from modern programming.

He's not talking about Torvalds using a little `goto` to jump to the error-handling at the end of a function. He's talking about unstructured programming: gotos that jump across procedure boundaries, or where "procedure boundary" isn't even a meaningful concept.

Sep 09, 2013 · 1 point, 0 comments · submitted by talles
Jul 17, 2013 · 1 point, 0 comments · submitted by tylermauthe
I am somewhat serious about this. Tony Hoare even called them his "Billion Dollar mistake" in a talk he gave a while ago (http://www.infoq.com/presentations/Null-References-The-Billi...)
barrybe
That talk gets quoted pretty often, but it's a bit silly for Hoare to take all the blame/credit. Null references were discovered, not invented. They are so easy (from the point of view of someone implementing a non-functional language) that they were inevitable.
Nils are a very bad code smell. They come from C's null, which is a billion dollar mistake[1], according to its creator, Tony Hoare. Especially now that we have monads[2, 3].

Scala's standard library provides very helpful information on how to replace null with its Maybe class (called Option in Scala). Just take a peek into their collections library[4], and search for Option.

[1] http://www.infoq.com/presentations/Null-References-The-Billi...

[2] http://moonbase.rydia.net/mental/writings/programming/monads...

[3] http://andand.rubyforge.org/

[4] http://www.scala-lang.org/api/current/scala/collection/immut...
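
The same pattern appears in other standard libraries as well; as one illustration (not taken from the comment above), Rust's map lookups return an Option instead of a possibly-null reference:

    use std::collections::HashMap;

    fn main() {
        let mut scores: HashMap<&str, u32> = HashMap::new();
        scores.insert("alice", 10);

        // get() returns Option<&u32> rather than a possibly-null reference.
        match scores.get("bob") {
            Some(score) => println!("bob: {}", score),
            None => println!("no score for bob"),
        }

        // The absent case can also be collapsed with an explicit default:
        let alice = scores.get("alice").copied().unwrap_or(0);
        println!("alice: {}", alice);
    }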

aphyr
> Nils are a very bad code smell. They come from C's null, which is a billion dollar mistake[1], according to its creator, Tony Hoare. Especially now that we have monads[2, 3].

Do you have a source for this? I was under the impression nil was directly taken from Smalltalk, which derived it from Lisp.

DanielRibeiro
Wikipedia[1] has the refs:

The null reference was invented by C.A.R. Hoare in 1965 as part of the Algol W language. Hoare later (2009) described his invention as a "billion-dollar mistake":[10][11]

[1] http://en.wikipedia.org/wiki/Null_pointer#Null_pointer

Where:

[10] http://qconlondon.com/london-2009/presentation/Null+Referenc...

[11] http://www.infoq.com/presentations/Null-References-The-Billi...

Yes, the same video I linked above.

aphyr
Nil was a part of Lisp 1, which is why I was confused. It predates Algol W by seven years. As null references and nil are somewhat different beasts, I'm not sure the criticism "billion dollar mistake" applies fully.
Oct 11, 2011 · robinhouston on LtU on Dart
There’s a strand of thinking which holds that null references are a common source of avoidable errors that the compiler could have prevented in a better-designed language. The objection is not so much that nulls exist, but that it’s impossible to declare a reference variable as “not null” in a way that the compiler can check.

Tony Hoare recently (2009) called null references his “Billion Dollar Mistake” [1, 2, 3].

1. http://qconlondon.com/london-2009/presentation/Null+Referenc...
2. http://www.infoq.com/presentations/Null-References-The-Billi...
3. http://lambda-the-ultimate.org/node/3186

Sep 11, 2011 · 2 points, 0 comments · submitted by zengr
Mar 21, 2011 · 3 points, 0 comments · submitted by josephcooney
Sep 01, 2009 · jwilliams on Null Considered Harmful
Part of the problem is that NULL is used to convey meaning, e.g. "login" returns NULL if the user isn't logged in.

So NULL can mean dozens of things. You can return something more valid (e.g. "Guest", or even "NotApplicable"), but that doesn't get rid of the checks... It can result in more readable code, though.

This is a good reference too: http://www.infoq.com/presentations/Null-References-The-Billi... Edit: Plus there is a discussion on this talk here: http://lambda-the-ultimate.org/node/3186
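
As a rough sketch of the "return something more meaningful than NULL" idea (the names below are invented for illustration, in Rust), a sum type lets each meaning be its own case, so the checks remain but they are explicit and readable:

    enum LoginResult {
        LoggedIn { user: String },
        Guest,
        NotApplicable,
    }

    fn greeting(login: LoginResult) -> String {
        // Each meaning gets its own branch; none of them can be confused
        // with a forgotten null check.
        match login {
            LoginResult::LoggedIn { user } => format!("Welcome back, {}", user),
            LoginResult::Guest => "Welcome, guest".to_string(),
            LoginResult::NotApplicable => "Login not required".to_string(),
        }
    }

    fn main() {
        println!("{}", greeting(LoginResult::LoggedIn { user: "ada".to_string() }));
        println!("{}", greeting(LoginResult::Guest));
        println!("{}", greeting(LoginResult::NotApplicable));
    }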

Aug 25, 2009 · 1 point, 0 comments · submitted by roder