HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Types, and Why You Should Care

Jane Street · YouTube · 110 HN points · 1 HN comment
HN Theater has aggregated all Hacker News stories and comments that mention Jane Street's video "Types, and Why You Should Care".
YouTube Summary
There’s an endless debate online between advocates of typed and untyped programming languages (or statically and dynamically typed languages). These conversations often shed more heat than light, so in this talk, Ron will try to give a flame-free introduction to the practical role that type systems play in software development.

This talk is informed by the work done at Jane Street in OCaml, but he'll discuss the question in broader terms, discussing the history of typed and untyped languages, what it means for a language to have a type system, and what tradeoffs there are between typed and untyped languages. He'll also discuss how the role of types changes as your systems and dev teams grow, and how this depends on the nature and level of sophistication of the type system itself.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Step 1) Buy a lot of paper. Too many ideas, concepts, and problems in programming are really, really big and we have no idea how to effectively tackle them. Being able to take notes, write down your thoughts, create diagrams and pictures, etc. is invaluable in being able to learn. Being able to go back and check out your past thoughts helps a lot.

Step 2) You'll want to check out these videos and pass them along as you feel they are appropriate: John Cleese on creativity: https://www.youtube.com/watch?v=Pb5oIIPO62g

Philip Wadler on the beginnings of computer science: https://www.youtube.com/watch?v=2PJ_DbKGFUA

Rich Hickey's Simple Made Easy: https://www.infoq.com/presentations/Simple-Made-Easy/

Types and why you should care: https://www.youtube.com/watch?v=0arFPIQatCU

80-20 rule and software: https://www.youtube.com/watch?v=zXRxsRgLRZ4

Jonathan Blow complains about software: https://www.youtube.com/watch?v=k56wra39lwA

I've got a list of videos and other links that is much longer than this. Start paying attention and building your own list. Pass on the links as they become relevant to things your kids encounter.

Step 3) I spent a decade learning effectively every programming language (at some point new languages just become a set of language features that you haven't seen batched together before, but don't otherwise add anything new). You can take it from me, all the programming languages suck. The good news is, though, that you can find a language that clicks well with the way you think about things and approach problem solving. The language that works for you might not work for your kids. Here's a list to try iterating through: Some Dynamic Scripting (Lua, Python, JavaScript, etc); Some Lisp (Common Lisp, Racket, Clojure); C; Some Stack (Forth, Factor); Some Array (R, J, APL); Some Down To Earth Functional (OCaml, ReasonML, F#); Some Academic Functional (Idris, Haskell, F*); C#; Go; Rust

Step 4) Listen to everyone, but remember that software development is on pretty tenuous ground right now. We've been building bridges for thousands of years, but the math for CS has only been around for about 100 years and we've only been doing programming and software development for decades at most. Everyone who is successful will have some good ideas, but there will be an endless list of edge cases where their ideas are worthless at best. Help your kids take the ideas that work for them and not get hung up on ideas that cause them to get lost and frustrated.

Mar 19, 2018 · 110 points, 135 comments · submitted by matt_d
dnautics
I strongly recommend reading about the Julia type system, which has had a lot of thought put into it:

https://docs.julialang.org/en/stable/manual/types/

Although it's "untyped" (in the sense of the video), it is strongly typed, so there are plenty of non-brittle optimizations that the compiler performs on account of the type system, and the result is often within 1-1.5x the speed of C.

At the same time, I had to write my own numerical system, and because of Julia's type system I could immediately plug it into the standard library's matrix algebra and fast Fourier transform algorithms and do comparative analyses out of the box. You can even do exotic things -- I was testing some Reed-Solomon encoding and created a Galois field type, and the standard library's matrix solve algorithm worked out of the box on my custom type.
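
The mechanism that makes this work in Julia is multiple dispatch over a rich type lattice. As a rough analogue only (Python duck typing rather than Julia dispatch, with toy names like GF7 and horner made up for illustration), the sketch below shows the underlying idea: a custom numeric type that defines the right operations flows straight through a generic algorithm.

    # Rough sketch in Python (not Julia): a toy "integers mod 7" number type,
    # standing in for a Galois field element. Any generic algorithm that only
    # needs + and * will accept it unchanged.
    class GF7:
        def __init__(self, value):
            self.value = value % 7

        def __add__(self, other):
            return GF7(self.value + other.value)

        def __mul__(self, other):
            return GF7(self.value * other.value)

        def __repr__(self):
            return "GF7({})".format(self.value)

    def horner(coeffs, x):
        # Evaluate a polynomial at x using Horner's rule; works for floats,
        # GF7 values, or anything else that defines + and *.
        acc = coeffs[0]
        for c in coeffs[1:]:
            acc = acc * x + c
        return acc

    print(horner([3.0, 2.0, 1.0], 2.0))              # 17.0
    print(horner([GF7(3), GF7(2), GF7(1)], GF7(2)))  # GF7(3)

The sketch only shows the reuse, not the speed: Julia's compiler additionally specializes and optimizes each method for the concrete types it sees, which is where the near-C performance comes from.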

dnautics
Addendum: Also really good is this video about how one should treat vectors and matrices in a type system.

It has good lessons for Python, R, and MATLAB. Seriously, anyone who does anything with vectors should watch this.

https://www.youtube.com/watch?v=C2RO34b_oPM

mbrodersen
There are a lot of smart, experienced, thoughtful developers who know from experience that they are more productive using types. And there are smart, experienced, thoughtful developers who know from experience that they are more productive not using types. So something interesting is going on here. Obviously this is not a situation where one of those choices is "objectively correct". It is very likely that both groups are right. So what can we learn from this? Perhaps that whether types make sense depends on the developer and/or the kind of software they develop. It might also depend on how well the developer understands type systems and the kind of exposure they have had to them. I think THAT is a much more interesting conversation to have than the usual "my way is better than yours" conversation we have had SOOOOOOO many times.
_RPM
The only type we need is char pointer. We can then hard-code the offsets in a macro to reference members of a struct.
carlmr
I'm a huge strong static typing fan. I find that I get in both situations though. For a quick script in Python, types might be a hindrance. If I write something more complex, strong types can be used to build a lot of the logic into the types, avoiding costly errors and raising maintainability in the long term.
kod
I have yet to see reliable commentary from developers who used a good static type system for an extended period of time and still willingly prefer dynamic types. For instance, Rich Hickey makes fun of how long it takes to learn Haskell. I'm not aware of him having done significant work in anything better than Java.
tome
I think Rich Hickey has significant experience with Haskell and there's at least one former Haskell user on HN who has transitioned to Clojure. I still don't quite get their point but I don't think they're speaking from ignorance.
kod
What are you basing the assertion about Rich Hickey's Haskell experience on? His GitHub doesn't have any Haskell code, for instance.
tome
It was reported here by some Clojurists and I took their word for it.
victorNicollet
Given enough time in a programming environment (language + libraries + tools) a smart engineer will learn or invent techniques to write correct code faster. Given enough engineers, each environment evolves a mix of techniques (unit tests, static types, correctness-by-design, defensive programming, etc) that becomes typical of that environment. The difference between Haskell and JavaScript might be the usual cliché, but there is an entire spectrum and even within a single language the mix of techniques might change depending on the application (financial software, video games, web sites).

Moving to a different environment where those techniques are no longer as effective will result in lower productivity for a while. Moving to an environment where those techniques can no longer be used (no type-checking, no ability to unit test) results in frustration, bargaining and hopefully acceptance, but the end result is that everyone stays on their side of the fence depending on whether they identify as "relies on type-checking to ensure correctness".

By the way, it's funny how sometimes we get something out of bargaining (TypeScript because there is no type-checking in JavaScript, Selenium because there is no unit-testing in graphical user interfaces).

nnq
Didn't go through it all yet, but its premise seems kind of off: you can't really discuss types in isolation from the language!

Just a basic example of two dynamic languages: in Python I can get a lot done and really don't miss types at all. In JS otoh, I feel lost until I pull in TypeScript to tame it; the language makes it hard to even think clearly without types, like "is that an object or a dict?" or "is it a map? from what to what?" etc. For most of the usual programmer mistakes that would be type errors in a typed language, Python still throws a meaningful and debuggable runtime error, whereas JS carries on and breaks much further from the cause of the problem, leaving you to debug WTF after WTF...

And static typing is great, as long as it doesn't require you to type variables and anonymous or small utility functions/methods... eg. it sucks without inference!

Some languages work better with types, some don't gain much from them...

skohan
I don't know. I have not written a ton of Python, but after spending a significant amount of time with Rust and Swift, which give you adequate tools to provably eliminate the overwhelming majority of runtime exceptions, I never want to go back.
simplify
I feel the same with ReasonML / OCaml. I've spent hours on a refactor and everything worked beautifully after fixing all the type errors.
seanwilson
> And static typing is great, as long as it doesn't require you to type variables and anonymous or small utility functions/methods... eg. it sucks without inference!

> Some languages work better with types, some don't gain much from them...

When you have proper type inference (which has been around for literally decades), what advantages do dynamic languages really have? I don't see any significant benefits personally. If you're having trouble justifying at compile time that a property holds, more than likely you've got a bug that's going to bite you later.

nnq
> When you have proper type inference [...] what advantages do dynamic languages really have?

You can quickly prototype by repl-ing around half-working pieces of code until you get to something that you can conceptualize. Sometimes you start from "how to" knowledge, but you don't really have the "what is" knowledge. You "know how to make it work" but you don't yet have any idea of "what it actually does". You'll get to that stage, but you need a prototype you can play with in the meantime.

Mathematicians have the hardest time grokking this, so I give them the example of drawing an ellipse: let's say you start with a "working definition" of "an ellipse is that ovalishy thingy that I get by dragging a pencil on a loop of string tied to two nails set at some distance". You know how to produce it, you want to play with it, but have no idea how to define it yet.

A dynamic language gives you the higher level equivalent of being able to "draw the damn ellipse even if you have no idea what it is"... later you can look at the code for drawing the ellipse and extract from it that "aha, it is what you get by constraining the sum of distances from 2 points to be constant!".

But some people (like me) are not very good at abstract thinking, so we need to play with half-formed stuff on screen, like "let's do this API call" and "do that and that to the returned data" and "pass it to the other thing" that "maybe someone forgot to document" and "see what it does"... and reverse engineer the abstractions from working code and maybe later properly generalize them.

I've never seen a static language with a good interactive prototyping experience so far. I'm playing with OCaml/Reason now, though, and its REPL seems quite powerful, although the language is more verbose/pedantic than I'd like...

kazinator
One benefit is a more robust execution model, in which objects know what type they are. Given a machine word and a RAM dump, I can tell you the type of the object in that word or the location that the word points to. I don't need to know the address of the piece of machine code which is manipulating that word, which, in turn, I don't need to correlate to the matching source code via debug info.

Type inference is compatible with dynamism.

Infer all you want. Optimize and diagnose, just don't strip type from my objects, please, and don't say I can't try running something because it wasn't completely checked.

yorwba
When you're having trouble justifying at compile time that a property holds, it's not always because you don't know why it holds, but frequently a problem with the type system you have to write the justification in. Some type systems are so limited that you can't even express the conditions under which the property holds, let alone prove them. Others are powerful enough to prove anything, but that makes type inference intractable, because the inferencer might have to prove arbitrary theorems.

The advantage of dynamically typed languages is that you are not glued to one specific proof language and don't have to provide proofs for trivial properties, or one-off programs you only need to run once. There are some type systems that attempt to provide similar benefits, e.g. success typing puts the burden of proof on the compiler which will only reject a program if it can prove that the program is incorrect [1]. But as far as I know, that approach has not been included in any mainstream programming languages.

[1] https://arxiv.org/abs/1502.01278

seanwilson
> When you're having trouble justifying at compile time that a property holds, it's not always because you don't know why it holds, but frequently a problem with the type system you have to write the justification in.

Do you have any realistic concrete examples of this? I find this happens very rarely myself (maybe once every few thousand lines of code) and just involves adding a small amount of code along the lines of "throw 'Unexpected error' // this should never happen because ...".

> The advantage of dynamically typed languages is that you are not glued to one specific proof language and don't have to provide proofs for trivial properties, or one-off programs you only need to run once.

For me, type errors where you've mixed up e.g. string, number or object variables, or where you've forgotten to check if a variable is null/undefined before you use it, are incredibly common compared to the rare times the type system gets in your way. You would have to (and most wouldn't) write many automated tests to catch all those potential errors if you weren't using a type system for similar robustness. Throwing all that away because of the rare case where the type system obstructs you doesn't make any sense to me.

I still think if you're regularly writing code that is hard to justify to a mainstream type checker you're more than likely doing something very wrong and writing code that is hard for others to understand. I'd love to see a realistic counterexample.

yorwba
I did not intend to imply that I'm regularly writing code that's hard to justify to a type checker. In fact, most of the code I write is in statically typed languages (I'm not some kind of dynamic typing zealot), so I tend to avoid situations that could never typecheck before they occur.

However, I like to use Python for small single-purpose utilities, and in those cases I don't even bother with the optional type annotations, let alone checking them with mypy. Since those programs are so simple that a handful of test cases is enough for 100% path coverage, if the program runs correctly on one case, it's almost guaranteed to run correctly for all others. (Similar to Haskell's "if it compiles, it runs".) Statically checking for type correctness would be pointless, because you can just run the program to find out. It then becomes quite natural to e.g. apply type-modifying transformations in-place, which most type checkers wouldn't allow, since they assume a single fixed type rather than a progression of different types during different stages. (You can get around that, of course, but it requires constantly recreating essentially identical objects to give them a new type.)
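
A minimal sketch of that "progression of types" pattern (a hypothetical throwaway script, with made-up names): it runs fine, but a checker such as mypy will by default reject the reassignments, because it fixes the variable's type at its first assignment.

    # One name, a different type at each stage; fine at runtime, but mypy
    # (without --allow-redefinition) reports the reassignments as errors.
    def summarize(raw: str) -> int:
        record = raw                        # stage 1: str, e.g. "3,4,5"
        record = record.split(",")          # stage 2: list of str
        record = [int(x) for x in record]   # stage 3: list of int
        record = sum(record)                # stage 4: int
        return record

    print(summarize("3,4,5"))  # 12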

For an example of a project (not written by me) where the type checker's rejection of a correctly working program precluded the use of a type checker, see http://www.oilshell.org/blog/2016/11/30.html

seanwilson
> However, I like to use Python for small single-purpose utilities, and in those cases I don't even bother with the optional type annotations, let alone checking them with mypy.

I agree for small utilities and code that's acting as glue between layers that it's convenient to have languages that are loose with what they accept because you can usually manually and exhaustively QA them and because you have no other choice when connecting languages without static type systems. Once you have a large and complex app though, strong typing makes you so much more productive that I honestly can't understand how anyone could argue against strong types.

kazinator
> e.g. success typing puts the burden of proof on the compiler which will only reject a program if it can prove that the program is incorrect

That describes any dynamic language that does type inference. Types are taken into account for diagnostics and better code, but not as an excuse for rejecting the program just because it isn't fully analyzed.

denisw
> success typing puts the burden of proof on the compiler which will only reject a program if it can prove that the program is incorrect [1]. But as far as I know, that approach has not been included in any mainstream programming languages.

Dialyzer, the optional static typing system that is part of the Erlang distribution, is based on success typing. Here is an accessible introduction:

http://learnyousomeerlang.com/dialyzer

carlmr
>And static typing is great, as long as it doesn't require you to type variables and anonymous or small utility functions/methods... eg. it sucks without inference!

F# does this amazingly well. Rust and Scala to a certain degree.

mikelward
String versus list of string continues to be a rough edge in Python, both being iterable.
erokar
> both being iterable

Which is a great thing.

mikelward
Both being iterable is great.

String iteration yielding individual characters as strings is debatable.

A function or method having no way to specify that a parameter must be a list of strings, and then acting strangely when passed a single string, is a problem that types can help with.
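
A small sketch of that trap, and of the annotation that lets a checker catch it (the `normalize` function is made up for illustration; mypy is just one possible checker):

    from typing import List

    def normalize(names: List[str]) -> List[str]:
        # Strips and lowercases each name in a list.
        return [n.strip().lower() for n in names]

    print(normalize(["Alice", "Bob"]))  # ['alice', 'bob']
    print(normalize("Alice"))           # ['a', 'l', 'i', 'c', 'e'] -- runs, but wrong

    # At runtime both calls succeed, because a str is itself iterable.
    # With the List[str] annotation, mypy flags the second call as an error;
    # without annotations the mistake goes unnoticed.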

nnq
Yeah, but this is a place where consistency bites your ass (in the absence of types): when you're iterating over characters in a string you usually do something very, very different than when you're iterating over a random list.

It could be solved by having a `string.get_iterator()` method (or a `string.iterator` `@property`) for when you want to iterate over the chars of a string, to make it obvious (in the usual Pythonic style). But probably this would break too much backward compatibility and only catch a few very easy-to-track bugs anyway.

Ironically, JavaScript almost got this right with `string.charAt()`, but then it shot itself in the foot by also adding `[]` operator access, in its typical "let's add more ways to do it to be sure at least one of them is bound to be wrong in some context" style...

carlmr
Yep, that one caught me a few times.
flavio81
My experience is the same with Common Lisp: it is very strongly typed, so type mismatches never pass without an explicit error.
klmr
> In Python I can get a lot done and really don't miss types at all

No wonder, because Python has types (well … most everything does; but Python isn’t unityped [1]). It’s even relatively strongly typed. It’s just not statically typed (what the video calls “typed”).

That said, I do miss static type checking in Python, and a lot of the tooling and methodology around Python (pylint, TDD…) are a direct consequence of that lack of static checking.

You seem to be aware of this, but the distinction between "has types" and "has static typing" is actually fairly important. APIs in general often take advantage of Python's extensible type system.

[1] See e.g. https://existentialtype.wordpress.com/2011/03/19/dynamic-lan...

franey
Python has had optional static typing since at least 3.6[1]. apistar[2], a REST framework, uses modern Python's typing to serialize data, which is neat.

[1] https://docs.python.org/3/whatsnew/3.6.html#pep-526-syntax-f...

[2] https://github.com/encode/apistar
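
For concreteness, a minimal sketch of what those optional annotations look like (assuming the file is checked with mypy; nothing is enforced at runtime by the annotations themselves):

    from typing import Optional

    retries: int = 3          # PEP 526 variable annotation

    def greet(name: str) -> str:
        return "Hello, " + name

    greet("world")            # fine

    user: Optional[str] = None
    greet(user)               # TypeError at runtime ("Hello, " + None);
                              # mypy reports the incompatible argument before you run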

hyperpape
Robert Harper is saying the exact opposite of what you’re quoting him to say. He would 100% call Python unityped (at least prior to the new type annotations, and probably even with them).
klmr
Yes, thanks for correcting me. I completely misremembered the article.
hnzix
> TDD…) are a direct consequence of that lack of static checking.

Static checking gives you basic unit testing. TDD is far more expansive than unit testing. Static checking is not going to integration test your app for you.

carlmr
Yeah, but you can use a lot fewer tests for the same test coverage.
kqr
The things in Python you call types are traditionally referred to as tags. Tagged values are values of one type (the type in Python) but which can still be distinguished based on a run-time check of their tag.
Retra
They are types and they are traditionally called types. Tags are an implementation detail.
kazinator
Moreover, a tag is an implementation detail which rarely represents everything about a type.

E.g. a tag might tell us that some object is a function. But to know how many arguments it has, whether some are optional, and so on, we have to inspect fields beyond the tag.

clhodapp
Python programmers call them "types" but type system people don't.
willtim
Python only has one type. To quote from the article by professor Harper:

"And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type! Every single value has to be a value of that type, you have no choice! Even if in a particular situation we are absolutely certain that a particular value is, say, an integer, we have no choice but to regard it as a value of the “one true type” that is classified, not typed, as an integer."

willtim
Read the article again. He says Python is unityped. The word "type" in mathematics and computer science has a meaning which is not the same as your usage of the word.

Well-typed programs should not go wrong. Python programs can crash at runtime due to simple type errors (expected an integer but got a string). A crash is a crash; from a theoretical point of view it matters not that the error is prettier than a segmentation fault. Python is not strongly typed; it just has good error messages.

klmr
Blah you’re right. That’s quite a serious oversight on my part, it’s been too long since I’ve read that article and I had apparently forgotten its core idea.
nxc18
Skimmed the video, and it's interesting.

I find the (implicit) premise that you might not already care about types hard to believe... Migrating a JavaScript project of any complexity to TypeScript almost always reveals errors "for free" because of the typing. Certainly any experience hacking on Python would also teach you the lesson of declaring your types ahead of time.

Is this old-fashioned thinking? Are types the SQL of yesteryear (in the context of the crazy rush to "web scale" mongodb)?

joncampbelldev
JavaScript and Python are very popular dynamic languages, but this doesn't make them good dynamic languages.

In particular OOP is very much helped by static types.

Clojure would be a better example of a dynamic language that would not be improved by adding static typing. Namespaces, functions and immutable data (as well as pervasive use of data instead of wrapping it in classes) lessen the downsides I bump into continuously when doing JavaScript development.

flavio81
>Javascript and Python are very popular dynamic languages, this doesn't make them good dynamic languages.

JS in particular is a very bad example of a dynamic language, the other "very bad" example being classic PHP.

This, mostly, because of weak typing.

Many of the problems attributed to C, a statically-typed language, are also due to weak typing.

seanmcdirmid
> Namespaces, functions and immutable data (as well as pervasive use of data instead of wrapping it in classes)

There are plenty of functional programming languages that believe static typing is a significant added value even under these conditions.

joncampbelldev
And some believe that the added value is not worth the trade-off in mental overhead and ceremony. We are surely both aware of the numerous flame-war arguments for static vs dynamic typing. (For my side I can only recommend Rich Hickey's talk "Effective Programs"; he says it better than me.)

My main point was that dynamic typing should be judged by its best implementations, not by JavaScript. For example, I would not judge static typing by Java or C++.

hellofunk
>> would not be improved by adding static typing

Well, that's a matter of opinion, even among long-term Clojure veterans. There is a reason core.typed was developed, though it hasn't been maintained recently. The fact that Rich felt the need to give the keynote at the last Clojure conference about the dynamic vs static issue shows that it is still a highly debated topic within the Clojure world.

mbrodersen
I find it interesting that Rich (and other Clojure people) feel they need to continue defending themselves again and again, while strongly typed language communities usually don't feel the need. There must be some deep doubt in the Clojure community that triggers this.
joncampbelldev
Forgive me, but your comment seems a little disingenuous in the way it generalises the static community (every static language??) vs the Clojure community.

Many of the comments on this page show why Rich gave the keynote: the same advantages of type checking are put forward as a reason not to use Clojure, over and over. They're not wrong, static typing has advantages, but I see little acceptance of any tradeoffs, or even acceptance that such tradeoffs exist (concretion of information, coupling of distant components by shared types, etc. -- I'm just paraphrasing the keynote).

He was highlighting the value proposition and tradeoffs of being data-oriented, being dynamic, and clojure.spec niceness. He felt the need to do this because he clearly felt that some people who were wavering about Clojure were unsure why it was dynamic: "can't we have all this great stuff AND static types?" He wanted to say "yes, quite possibly you could, BUT here are the reasons why I didn't add types".

jhhh
I think it's more an acknowledgement of the current state of popularity of languages, and an attempt to win people over, rather than a manifestation of latent doubt.
joncampbelldev
I think the debate may be leaning towards dynamic.

- The aforementioned keynote highlighting the various reasons Rich chose to make it dynamic.

- CircleCI (one of the major users and proponents of core.typed) dropping it.

- The introduction of spec as an alternative for some of the reasons people use type systems (it's certainly not a drop-in replacement and doesn't intend to be).

- Spec allowing different kinds of verification not possible with a type system on its own.

rbjorklin
I’ve tried Clojure a little bit and it seems like a nice language but I’m not convinced by the lack of types. Why wouldn’t you want types? How often can you reuse a function intended for ‘ints’ with some other type? The way I see it types provide important information to anyone reading the code after it was written as well as to the compiler making it possible to catch bugs early and generate more efficient machine code.
joncampbelldev
One of the big benefits of clojure being dynamic is that everything is data (e.g. a map, set, vector or list).

This is what allows reuse.

- The vast core library of functions that manipulate those data structures can be used for everything in your program, cos it's all data.

- Most clojure libraries take and/or return data, reducing the need for clumsy adaptors, or even worse not being able to get at the data you need cos the library writer was really enthusiastic about encapsulation of everything they thought was of no use to consumers.

- You don't have a person class, you have a map with a first name and last name. Now the function that turns first + last name into full name can be reused for any other map with the same keys. (A rather spurious example, but a real one would take a large codebase and an essay to describe)

I can only recommend watching some of Rich Hickey's talks, particularly these ones, they're not entirely about types, but they express the above ideas much better than I can:

- Simple made easy https://www.infoq.com/presentations/Simple-Made-Easy

- Effective programs https://www.youtube.com/watch?v=2V1FtfBDsLU

- Are we there yet? (this one is more about OOP, but unless you're using something like haskell, idris etc its relevant for your type system of choice) https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hi...
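
A rough Python-dict analogue of the "map with a first name and last name" point above (not Clojure, and without spec's checking; the keys and records are made up for illustration):

    def full_name(m):
        # Works on any dict that has these two keys, whatever else it holds.
        return m["first-name"] + " " + m["last-name"]

    person   = {"first-name": "Ada",  "last-name": "Lovelace", "email": "ada@example.org"}
    employee = {"first-name": "Alan", "last-name": "Turing",   "dept": "Computing"}

    print(full_name(person))    # Ada Lovelace
    print(full_name(employee))  # Alan Turing

    # And the whole generic dict/list toolbox still applies:
    print(sorted([person, employee], key=full_name))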

mbrodersen
The data types in Clojure can be very easily (and better) expressed in (say) Haskell. For example: http://tech.frontrowed.com/2017/11/01/rhetoric-of-clojure-an...
joncampbelldev
The main issue is that Haskell is not a data-oriented language by default; this means it's no fun to push it to be that. For example, I also have to use Java in my job; I use persistent (functional) data structures all the time, but Java is not built for it, and it's not fun. (Although definitely more fun than using Java's mutable structures, ewww)

Also, I personally find that to be too much overhead and ceremony in return for some type checking at compile time, as opposed to spec checking at runtime.

tome
> The main issue is that Haskell is not a data-oriented language by default

What do you mean by "data-oriented language"?

joncampbelldev
In the grandparent comment's link (showing Clojure data in Haskell): I'm pretty sure that is not how people code in Haskell; it's not how the libraries are usually designed, etc. Using only data is definitely possible in Haskell, but it's not encouraged by default; the core abstractions are used for concretions of information.

In the same way, you can do immutable and functional stuff in Java, but it's not going to mesh with the rest of the ecosystem or language around you.

wtetzner
> One of the big benefits of clojure being dynamic is that everything is data (e.g. a map, set, vector or list).

What about this can't be done with types? Simple parametric-polymorphism gets you pretty far. Row types allow you to handle "maps as records" in a type-safe way. The rest is just having support for some kind of ad-hoc polymorphism so that you can re-use your functions on that small set of types (type classes, ML-style functors, interfaces, protocols, etc.).

joncampbelldev
Again, I would refer you to the Rich Hickey talks; I'm not very eloquent on this. I think it's about the manual overhead of constructing your hierarchy of types, plus the cognitive overhead of doing all the fancy things in your brackets.

I'm familiar with the advantages of type systems (my progression was Java -> Haskell -> Idris), but I found my personal productivity (even in larger systems built in a team) was best in Clojure. I didn't feel that the guarantees given to me by the type system were worth the mental overhead; a lot of people feel differently (you amongst them, I'm guessing :p)

As a closing point, if I were to ever build something that truly had to be Robust in a "someone will die if this goes even slightly wrong" way, I would reach straight for Idris and probably something like TLA+. However, most of my development revolves around larger distributed systems communicating over wires, still resilient but in a different way. Mainly I use clojure.spec in core business logic and at the edges of my programs, for generative testing and ensuring that the data flowing through the system is sensible.

flavio81
>I’ve tried Clojure a little bit and it seems like a nice language but I’m not convinced by the lack of types. Why wouldn’t you want types?

Clojure has classes and types. How can it be untyped?

Machine language is untyped.

nogridbag
See my comments about Clojure.spec here:

https://news.ycombinator.com/item?id=16414942

This is all application specific, but for the types of apps I've worked on (large enterprisey OO apps) you often need various bits and pieces of domain data across different methods. So given some function, you either pass in DomainClass1, DomainClass2, DomainClass3 (using a couple of properties of each), or you define a new class SomeSubsetOfPropertiesClass solely to call that single method. In the former case, the types do not serve as documentation for the reader, as it's not clear what shape of data is required by the function. In the latter case you're duplicating code (the properties and their types) and the class really has no meaning except as a struct to call that method.

Now that I've been working with Clojure for a little bit I find I'm able to write much more concise, testable functions and calling them is dead simple since I can work with the raw data, transforming it into the shape I need.

skohan
To be fair, the OO example you give just sounds like bad design. It should be entirely possible to avoid having to pass giant bundles of state between different layers of your application, and even if you can't you should be able to define interface conformances on your DomainClass objects to make it clear which of their members are actually relevant in a given case.
nogridbag
Absolutely it's bad OO design. Because it's very difficult to do OO right at scale. For a larger application like many I've worked on (thousands of domain classes), it's much less risky to have an anemic domain model than attempt to model a very complex domain properly. I would imagine in the Java world most applications that use ORMs tend to lean towards anemic domain models that simply mirror DB tables.
frankpf
What you want to do is possible with a structural type system. In structural type systems, type compatibility is based on the structure of the type instead of the name (as opposed to nominal type systems).

An example in TypeScript:

     interface Named {
         name: string
     }
     
     class Person {
         name: string
         age: number
         constructor(name: string, age: number) {
             this.name = name
             this.age = age
         }
     }
     
     function f(obj: Named) {
          // do something with obj.name
     }


     const joe = new Person('Joe', 25)
     
     // Compiles, even though Person has an extra `age` field
     // Person is structurally compatible with Named
     f(joe)
If you pass f() a class/object that doesn't have a `name` property of type string, the compiler would catch your error.
vikiomega9
This is not easy to do in, say, Java, which is significantly different from Clojure (or from how TypeScript enforces typing). In your example it appears that you're using the name field rather than the type Named to shape the data that the function will use (this is in essence duck typing).
andrewflnr
Right, but that's nothing to do with types in general, mostly just Java and friends with unimaginative type systems.
frankpf
I am using `Named` as an argument. In structural type systems, what matters is the type structure. It's not duck typing because it's checked at compile-time.

If I change the type of "Named" to have the fields `firstName` and `lastName` of type string, accessing any other property inside `f` (like `obj.nonExisting`) or passing objects that don't have those fields would be a compile-time error.

vikiomega9
thanks! just to clarify, how do you manage the composition of these types? Do you create a structural type per API and potentially have the "same" types repeated multiple times?
nogridbag
Interesting, except you're redefining the type of "name" in each class by specifying "string". Also, Clojure.spec allows you to be much more precise about properties. For example, it must be a certain length, non-nil, or even match a regex (e.g. first letter starts with uppercase).
frankpf
What do you mean by redefining? I'm not redefining the type of "name". If you want to get compile-time checking that `Person` implements `Named`, you can use `class Person implements Named`.

Even better than that, you don't need to use classes, you can use "normal" JS data structures. Extending the last example:

     const john = {
         name: 'John',
         age: 25
     } // No class involved

     f(john) // compiles

     const alice = {
         age: 25,
         firstName: 'Alice',
         lastName: 'Jones'
     }

     f(alice) // compile-time error

Clojure.spec is cool, but I don't see how that is incompatible with static typing. You can still have libraries that check more complex properties at runtime.

EDIT: > Also, Clojure.spec allows you to be much more precise about properties. For example, it must be [...] non-nil [...]

TypeScript also handles nulls in the type system:

    function f(x: string | null) {
        if (x != null) {
              // tsc knows that inside this if, x can't be null
              return x.length 
        } else {
              console.log(x.length) // this doesn't compile, x is of type null here
        }
    }
nogridbag
In Clojure.spec, you spec out namespaced keywords once.

    (def email-regex #"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,63}$")
    (s/def ::email-type (s/and string? #(re-matches email-regex %)))

    (s/def ::acctid int?)
    (s/def ::first-name string?)
    (s/def ::last-name string?)
    (s/def ::email ::email-type)
Using those keywords you can define maps which specify shapes of data:

    (s/def ::person (s/keys :req [::first-name ::last-name ::email]
                        :opt [::phone]))

Functions have their own separate specifications. Here's one that accepts a person and an acctid:

    (s/fdef add-to-account
        :args (s/cat :person ::person :acctid ::acctid))
And must be called like this:

    (add-to-account {::first-name "John" ::last-name "Smith" ::email "john.smith@example.com"} 12345)
If you tried calling it with an illegal argument and it's instrumented you will see an error:

    (add-to-account {::first-name "John" ::last-name "Smith" ::email "abc123"} 12345)
    
    ExceptionInfo Call to #'scratch.core/add-to-account did not conform to spec:
    In: [0 :scratch.core/email] val: "abc123" fails spec: :scratch.core/email-type at:
    [:args :person :scratch.core/email] predicate: (re-matches email-regex %)
Here's the difference. What if I have another function that just accepts an email:

    (s/fdef lookup-user
        :args (s/cat :email ::email))
And another which looks up by last name:

    (s/fdef lookup-user-by-name
        :args (s/cat :last-name ::last-name))
And now imagine if you wanted to accept either an email or last-name (notice it does not match our person spec). Attempting to use interfaces would quickly get out of control. You'd have to create an interface for every single property and extend interfaces to form arbitrary groups of properties.
DeepYogurt
I think it's quite reasonable to think that many developers don't have an intuition for types and thus might not care about them. Also given the title I think this is meant to be an introductory discussion and the premise is then valid.
Zyst
For the most part I feel that for 99% of problems types just “get in my way”.

It feels like I already know what is going to go in where, and having to type it is just a waste of time.

That said, when I have to work with APIs that cannot guarantee their data integrity, throwing in a quick @flow annotation at the top of the file and taking the time to write out what is optional has proven very valuable for making sure my functions and subsequent code are not going to throw.

Another reason I like Flow more is that I don't need to convince my team to use it; I can just use it myself, and then delete it after I am done writing my code.

skohan
I used to feel this way, but the benefits of a good type system far outweigh the drawbacks IMO. A lot of runtime errors caused by dynamic types get pushed back to compile time, which is a much safer and easier time to deal with them. Nowadays writing code without type checking feels like building a house on quicksand.

There's also the self-documenting aspect of strongly typed languages: for instance, if I look up the documentation on a javascript API, I have to hope function parameters have been specified well, otherwise I just have to guess what should be passed in or dig through the source. With a strongly typed language I probably get that information as part of the autocomplete hint.

And good type systems can be powerful tools. In Swift for example, the protocol system is powerful enough that I'm sure it results in writing less code overall, not more.

flavio81
Please don't assume that the problems that are specific to JavaScript also happen with other dynamically typed languages.
spraak
But why don't they? Aren't the problems nearly the same? And if not, how?
carlmr
They do, JavaScript is just the worst offender.

Python has the dynamic typing problems, but it is better because it has a somewhat strong type system. It will tell you when something is wrong, but often too late (oh, this function only gets called wrong once every 100 hours on my server; thank you for telling me now that it went wrong). And sometimes different types can duck into the same function (e.g. string and list are both iterable).

So yeah, Python suffers from the same issues when you want to scale your code, but it's A LOT less bad than JavaScript.

flavio81
>But why don't they? Aren't the problems nearly the same? And if not, how?

Concrete example: Common Lisp is very strongly typed, so it almost never automatically converts from one type to another, except when it makes total sense (e.g. the square root of -1 will return a complex number).

So, if there is a type mismatch, it will be caught (at runtime) by raising an exception. Now, you would think "yeah, but my statically typed language will check this BEFORE the code runs". Yes, but in Lisp the exception doesn't terminate the execution; it enters a mode in which the system asks you what to do next.

Thus, what you do is, you go back to the source code, to the function with the error, you correct that specific function, compile that specific function (which happens almost instantly), and then resume execution of your program. This means that the previously "invalid operation" will be run again but with the new definition of your function, thus the code will continue running without said bug.

So, all in all, it's very nice to use...

skohan
That sounds very nice to work with, and for tools/scripts running on my own workstation it sounds great.

For user-facing code running at scale catching more errors at compile-time still sounds better to me.

nawitus
Types will make reading the code many times easier when someone else reads it, or when you go back to the code six months later.
filterfish
It's hard to overstate this point.
walshemj
Don't take this the wrong way, but I suspect you are young developer who hasn't realised that putting in the extra work up front pays large dividends down the line.
nogridbag
Or perhaps there is no one correct way to write software, and it depends very much on the problem domain and the type of application. I wonder how many projects went over budget, are incredibly over-engineered, and ultimately failed because some senior developers read a book on DDD and went crazy with types.
meuk
As pointed out in the video, types can be helpful for bigger projects, where others have to read your code, and sometimes for refactoring.
mbrodersen
For the most part I feel that 99% of problems are easier to fix with types. Especially large scale refactorings.
Zyst
As a side note: The TDD-ish alternative to this, which I also use sometimes is just adding a test where you pass in undefined on your optionals, and then iterating until the test stops throwing.
pdpi
Unfortunately, “has no obvious bugs” is not quite the same as “obviously has no bugs”. TDD gives you the former, static typing the latter (within what can be represented by the type system, of course)
kerkeslager
Yes, but "within what can be represented by the type system" is a fairly large caveat. Type systems often can't represent intent.

  int* increment(int* i, int step)
  {
    return i + step;
  }
Is there a bug here? Well, that depends on the intent. Did we really intend to add an int to an int*?

Even Haskell's type system can't tell what you intend:

  increment i = i + 11;
Did we really intend to add 11 rather than 1, or is that a typo?

A unit test would clarify the intent of both these functions, and catch both bugs fairly reliably (if they aren't the intended behavior).

Ultimately I think TDD and types guarantee different things, and both are useful/needed.

naasking
You have to be willing to use types to express intent. Types are logical propositions, so you have to encode your proposition as a distinct type. You can actually prove many programs correct by exploiting even Java's poor type system:

Proving Programs Correct Using Plain Old Java Types, http://lambda-the-ultimate.org/node/5387
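
The linked paper encodes real proofs in Java's type system; as a far more modest sketch of the same spirit in Python (the `UserId`/`OrderId` names are made up for illustration), even a dynamic language's optional types can make intent checkable by introducing distinct types:

    from typing import NewType

    UserId = NewType("UserId", int)
    OrderId = NewType("OrderId", int)

    def cancel_order(order: OrderId) -> None:
        print("cancelling order", order)

    uid = UserId(7)
    oid = OrderId(7)

    cancel_order(oid)   # fine
    cancel_order(uid)   # runs (both are ints underneath), but mypy rejects it:
                        # a UserId is not an OrderId, even though the values match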

kerkeslager
Okay, can you say how you would reasonably catch the typo in my second example with types?

Obviously you have to be willing to use types to express intent, but even if you're willing, types are limited in what kinds of intent they can express.

naasking
If the context is Haskell, I'd probably use type-level naturals to encode the correct input/output types.

In other contexts without polymorphic types, like with C, I'd wrap the input and output types in custom structs that expose trusted operations. The more minimal the better. It's definitely more cumbersome without polymorphism, but you do what you can for the code that's mission critical.

kerkeslager
Okay, I didn't ask clearly enough. The following function is supposed to take x and return x + 1:

    increment x = x + 11;
How would you catch this bug via a reasonable usage of the type system? Noting that a unit test catches this bug trivially.
naasking
I know what you asked, and I answered as best I could without further information. The solution ultimately depends on the specific type system. Like I said, with Haskell I'd use type-level naturals to ensure the output type is 1+input type.

In C, my first thought would be to define a static value representing ONE, checked via static_assert, and then the increment function becomes input + ONE.

kerkeslager
> Like I said, with Haskell I'd use type-level naturals to ensure the output type is 1+input type.

I'm a bit out of my comfort zone with this one so I may be wrong, but wouldn't that require a lot of coding to define type level naturals for all the possible values of X?

> In C, my first thought would be to define a static value representing ONE, checked via static_assert, and then the increment function becomes input + ONE.

Okay, but do you agree that in this case a unit test would be a better solution?

I'm not saying this as a types versus unit tests sort of thing. I'm saying both are needed tools for writing reliable software.

naasking
> I'm a bit out of my comfort zone with this one so I may be wrong, but wouldn't that require a lot of coding to define type level naturals for all the possible values of X?

Not for your simple example. See [1] for more info. There are a number of packages that do the work for you.

[1] https://wiki.haskell.org/Type_arithmetic

> Okay, but do you agree that in this case a unit test would be a better solution?

Depends how mission critical the property is. Even simple increments and offsets might deserve some type-level encoding if they are core, mission critical properties.

For other things, tests are fine, although I recommend property-based testing frameworks like QuickCheck, Hypothesis, etc. which test logical properties against a large range of input values instead of a static set of values encoded into your tests.
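
A minimal sketch of what that looks like with Hypothesis (assuming `pip install hypothesis` and a pytest-style runner; `increment` is the toy function from upthread):

    from hypothesis import given, strategies as st

    def increment(x):
        return x + 1

    @given(st.integers())
    def test_increment_round_trips(x):
        # The property is checked against many generated integers,
        # not a hand-picked list of example values.
        assert increment(x) - 1 == x

    @given(st.integers())
    def test_increment_is_monotonic(x):
        assert increment(x) > x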

whateveracct
> wouldn't that require a lot of coding to define type level naturals for all the possible values of X?

It’s a one-liner in Haskell (two if you count the DataKinds pragma):

  data Nat = Z | S Nat
dllthomas
On the one hand, in GHC you already have type level naturals.

On the other hand, I don't see that they'd actually be useful for catching this error in any way people are likely to use, although I'd be interested to see attempts...

On the third hand, the way I'd actually implement this in Haskell is probably `succ`, so the typo would be caught during compilation. But of course you can construct alternative examples.

To my mind, types and tests are very much complementary - ideally tests check that your solution gives correct answers at some points in your domain, while types help make that space more uniform (so if it's correct at some points it's more likely correct at others). There are some practical cases where the types sufficiently constrain things that checking any actual points is redundant, but that's not the common case.

kerkeslager
> To my mind, types and tests are very much complementary - ideally tests check that your solution gives correct answers at some points in your domain, while types help make that space more uniform (so if it's correct at some points it's more likely correct at others). There are some practical cases where the types sufficiently constrain things that checking any actual points is redundant, but that's not the common case.

Yeah, this is my ultimate point, and I worry that some people have misunderstood my intent. I'm not criticizing types: I think they're a very important tool. I'm saying that types don't handle every kind of possible error. And conveniently, tests often cover the kinds of errors that types don't (and vice versa).

wtetzner

    int* increment(int* i, int step)
    {
      return i + step;
    }
I think your example is flawed. The real problem here is that you're allowed to add an int to an int* with +. int* is not of type int, and should therefore require some sort of cast to make it possible to add an int to it. Either that, or require a different operator/function to add to a pointer, e.g.:

    int* increment(int* i, int step)
    {
      return ptr_add(i, step);
    }
Not all type systems are equal, and how the type system interacts with the language is important.
kerkeslager
That's my first example, not my second. I specifically asked about the second example because it's fairly obvious that C's type system is garbage.

To be clear, my question is, how would you reasonably catch the typo with types in the following Haskell function?

    increment x = x + 11;
...noting that this typo is reliably caught by a unit test.
tome
How do you catch this broken unit test?

    def test_increment(x):
        assertEqual(increment(x), x - 1)
The answer to your question is, you can't. In this case the implementation is the specification. You're going to have to set a more meaningful challenge.
kerkeslager
> How do you catch this broken unit test?

    def test_increment(x):
        assertEqual(increment(x), x - 1)
This isn't a trick question. You run the test, and see that it fails.

If you write the wrong test or the wrong type AND the wrong implementation, they won't help you, but the point of both types and tests is that you have to make two mistakes for a bug to get into production. It's possible that in your test, I could have made the same error in the implementation of `increment`, but it's a great deal less likely than making that error in just the implementation or just the test.

And to be clear, I'm not saying any of this as a "types versus unit tests" thing. I've specifically said that both are useful and needed for reliable software.

> The answer to your question is, you can't. In this case the implementation is the specification. You're going to have to set a more meaningful challenge.

You totally can catch this bug with some type systems (Coq, for example), it's just much easier to catch this bug with a unit test in most languages.

I think that your boss would probably disagree with you that this bug is not meaningful if it makes it into production. You don't get to pick and choose which bugs are meaningful because they don't support your views.

tome
My point is, when the program is the same as the specification then there's essentially no point writing either tests or proofs. Therefore, I think your point would be better demonstrated using a more complicated example.
kerkeslager
This is getting a bit off topic and into the theory of how you write tests, but if you wrote a better test it wouldn't be the same as the specification.

    def increment(x):
        return x + 1

    def test_increment():
        assert increment(0) == 1
        assert increment(1) == 2
        assert increment(42) == 43
        assert increment(-5) == -4
This is admittedly overkill, but the point is you really should only be testing inputs and outputs in a test, not doing calculations.

> Therefore, I think your point would be better demonstrated using a more complicated example.

The example is a simplification of a bug I came across last year:

    available_date = datetime.date.today() + datetime.timedelta(days=11)
I won't paste the full test here because this is part of a larger function, but rest assured I'm not duplicating the definition of the function in the test.

The intern who wrote the code didn't know how to use mocks to mock out `today` so they didn't write a test, resulting in this bug. I remember this case because I use it as an example to teach mocks.

I simplified my original example to make it clearer, but I think you'll see that this real-life bug suffers from the same difficulties if you try to verify it with a type system, but is (fairly) easy to test.

EDIT: Actually, the relevant bits of the test were fairly simple (from memory so please excuse errors):

    @unittest.mock.patch('datetime.date.today')
    def test_available_date_set_to_tomorrow(self, today):
        today.return_value = datetime.date(1984, 4, 20)

        [...]

        ticket_claim = claim_ticket(user, voucher)

        self.assertEqual(
            ticket_claim.available_date,
            datetime.date(1984, 4, 21),
        )
tome
Aha, yes, OK, I agree. Good example.
dllthomas
Ideally, it's like double-entry bookkeeping. If you try to express the same thing in two different places, ideally in two different ways, you're less likely to make the same mistake in both places than to make any mistake in the first place. This applies both to types and tests.
kerkeslager
Exactly!
seanmcdirmid
Type languages, especially static ones, are very inexpressive in the kinds of logical propositions they can express. You can try type hacking (coaxing an inexpressive type language into expressing more complex propositions), but the results are often not usable in real systems.
naasking
The type language's expressiveness is definitely more limited than the term language's, but "very inexpressive" is overstating the case. You end up grouping C, for which your claim is absolutely true, together with Agda, for which your claim is not really true.

Regardless, even with Java's limited expressiveness you can encode some powerful propositions, as the paper I linked shows.

seanmcdirmid
It may be that Coq and Agda have expressive type systems, but who is writing programs with them? They are not general purpose.

The paper you linked uses Java's type system for a mechanized proof, meaning...you probably don't want to be writing that out by hand.

naasking
> It may be that Coq and Agda have expressive type systems, but who is writing programs with them? They are not general purpose.

I think that's overstating it a little too. You don't have to use the dependent types; at their core, Coq and Agda are still functional languages, and you can just stick to algebraic sums and products and still enjoy type inference. Most people aren't using these languages for general purpose programming because of a) poor tooling, and b) the fact that they are explicitly marketed as research languages.

seanmcdirmid
More than that, they are explicitly meant to be proof assistants.
kerkeslager
Yeah, that paper is interesting, and I've definitely leveraged Java's types to verify some pretty powerful assertions.
kerkeslager
I think what you're talking about here is static types getting in your way, not just types getting in your way.

Strong/weak types and dynamic/static types are really two different spectrums. People conflate strong types with static types and weak types with dynamic types, but they aren't really the same thing.

Static/dynamic just has to do with whether the types are checked at compile time or run time. Examples: Static: C, Haskell. Dynamic: Javascript, Python.

Weak/strong has to do with what kinds of checks the type system does. A strong type system is capable of checking a lot of different things for you. Static types are often stronger, but not always: for example, C is statically typed but its type system checks hardly anything: int* + int is perfectly valid. A list of programming languages from weakest-typed to strongest-typed might look something like: JavaScript, C, C++, Common Lisp, Perl, Scheme, Ruby, Python, Java, C#, OCaml, Haskell.

Static types are nice for projects which will grow large and where bugs are a big problem, but I think for the average HN person, static types aren't really necessary.

Strong types, on the other hand, are extremely useful. Even in a dynamically typed language, they aid in debugging a lot, because type errors occur much closer to where they're caused. In Python, for example, `"foo" + 42` immediately fails. But in JavaScript, you don't get an error until much later, perhaps when your webpage is mysteriously displaying "foo42".

Of course, there's a small cost to strong types: in the case where I actually do want to append a number to a string, I have to write `"foo" + str(42)`. I think people tend to overstate this cost because it's visible, but if you look at the big picture, typing five extra characters takes a lot less time than debugging almost anything.
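
A tiny Python sketch of the point about error locality (the mistake fails on the line that caused it, and the intentional version costs only a few extra characters):

    # The mistake fails loudly right where it happens...
    try:
        label = "foo" + 42
    except TypeError as error:
        print(error)            # can only concatenate str (not "int") to str

    # ...while the intentional concatenation costs five extra characters.
    label = "foo" + str(42)     # "foo42"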

naasking
Strong/weak is generally not a useful distinction because it has no formal meaning. What you're probably after is "expressiveness" and "soundness". So C's type system is unsound and its types are inexpressive. Haskell's type system is sound and moderately expressive. Agda's type system is sound and expressive.
kerkeslager
> Strong/weak is generally not a useful distinction because it has no formal meaning.

"This doesn't have a formal meaning, therefore it's not useful" is quite a logical leap you've got there.

I've got over a decade of professional programming experience in which "strong types" is a useful enough concept to help me do my job. "Soundness" certainly gives stronger guarantees, but it's more than I've needed.

I'm not aware of "expressiveness" having a formal meaning, but my informal definition has functioned for me so far.

naasking
> "This doesn't have a formal meaning, therefore it's not useful" is quite a logical leap you've got there.

It means that everyone defines strong and weak in their own way, which has happened in every type system debate I've seen over the past 15 years.

> "Soundness" certainly gives stronger guarantees, but it's more than I've needed.

Soundness is exactly the metric you need. Either you can rely on your type system not to lie to you, or you can't.

Programming language expressiveness means "Felleisen expressiveness". It's a metric approximated by source code compression, which is why the Great Computer Language Shootout includes gzip metrics for source programs.

kerkeslager
> It means that everyone defines strong and weak in their own way, which has happened in every type system debate I've seen over the past 15 years.

Okay, that's fair. Part of the reason I gave examples and defined the spectrum I was talking about was to address this problem, because I know people don't necessarily know what I'm talking about.

klmr
Even static typing very rarely gets in the way (it makes a few programs harder to express; but I find this relevant much less frequently than people think; and even then I’d argue that expressing the types still adds value). What gets in the way, as the video argues, is excessive explicit typing. But languages can get around this with type inference.
nybble41
Also, among statically typed languages there is a significant difference between languages which make the programmer write out all the types and languages which offer type inference. I think most of the comments about types "getting in the way" come from experience with the former (C, C++, Java, C#). In a strongly- and statically-typed language with type inference (such as Haskell) even moderately complex programs can be written without any type annotations. The result looks a lot like a dynamically-typed program, but static types are still inferred and checked for consistency at compile-time. For example, this is a perfectly valid Haskell program:

    import Control.Monad
    import Control.Monad.Tardis
    import System.Random
    
    -- A solution for the Trapping Rain Water problem employing
    -- the bidirectional state ("Tardis") monad.
    -- https://www.geeksforgeeks.org/trapping-rain-water/
    volume hs = sum . flip evalTardis (0, 0) . flip traverse hs $ \h -> do
       x <- min <$> getPast <*> getFuture
       modifyForwards  (max h)
       modifyBackwards (max h)
       pure (max 0 (x - h)) 
       
    main = replicateM_ 20 $ do
       n  <- randomRIO (3, 10) 
       hs <- replicateM n $ randomRIO (0, 10 :: Integer)
       putStrLn $ show hs ++ " => " ++ show (volume hs) 
Note that the only explicit type in the entire program is the `:: Integer` annotation on the upper bound for randomRIO. Without that annotation the program would be underconstrained, since the input to `volume` can be a list of any type with Ord and Num instances.
iamjs
better audio: https://www.youtube.com/watch?v=yVuEPwNuCHw
TwoNineFive
OP link should be replaced with this one.
meuk
The video is an okay discussion of type systems and the pros and cons, but for me it didn't bring anything new to the table.
superlopuh
I think a lot of problems with types could be avoided if the package management system / linking system required typing of imported functions, even if the actual language doesn't.

Maybe there'll be a "safe" subset of the pip/npm registry that will only let typed APIs in, to promote the concept.

As a side note: is there data on how many packages depend on a framework on npm, on average? If it's anywhere past two, I'd say that the extra time spent writing the types is more than compensated by the time saved by the users of the API.
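
Python already has a rough approximation of this idea in the form of stub files (the PEP 561 / typeshed mechanism): a library's implementation can stay untyped while its public API is declared in a separate `.pyi` stub that a checker enforces at every import site. A hedged sketch with hypothetical names:

    # ratelib.pyi -- hypothetical stub shipped for an untyped module `ratelib`;
    # a checker such as mypy applies these signatures wherever it is imported.
    def convert(amount: float, currency: str) -> float: ...

    # caller.py -- ratelib's implementation stays untyped, but call sites are
    # still checked against the stub's signature.
    from ratelib import convert

    total = convert(10.0, "EUR")    # accepted
    # convert("10", 3)              # rejected by the checker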

TwoNineFive
FKING VOLUME WARNING !!!

This video will blow your fking eardrums out: at the 4:37 mark, the presenter switches on his microphone system and the audio level jumps to roughly 400% of what it was before.

The fact that nobody has mentioned this is a pretty strong indication that none of these people have actually watched this video.

nordsieck
This[0] slide from Rich Hickey's Effective Programs talk[1] has really got me thinking about the benefits and costs of typing.

  "The Problems of Programming"

  Domain Complexity
  Misconception

  10x

  Place Oriented Programming
  Weak support for information
  Brittleness/Coupling
  Language model complexity
  Parochialism/context
  weak support for Names
  Distribution
  Resource utilization
  Runtime intangibility
  Libraries
  Concurrency

  10x

  Inconsistency
  Typos
There are some type systems, for example Rust's, which integrate RAII into the language as a first-class concept and are able to address some of the upper-level concerns like concurrency and place-oriented programming. In general, however, there are many problems on this list where a more complex type system doesn't help, or even makes things worse.

[0] https://twitter.com/stuarthalloway/status/926065084652228609 [1] https://www.youtube.com/watch?v=2V1FtfBDsLU

dkarl
To be fair, there are languages where typos and inconsistency naturally fall in the middle tier unless programmers invest silly amounts of time writing and maintaining unit tests to push them down into the third tier where they belong. I can't blame Rich Hickey if he doesn't think much about those languages, but it's an important qualification for people to make when they read this slide in the context of their own work.
lewisl9029
I feel the context is important here.

This slide is not a comment on the frequency of these problems, because typos are definitely one of the most, if not the most, frequently occurring issues in the day-to-day work of most developers, and I'm sure Rich Hickey realizes that.

Rather, this slide is a comment on what he defines as the "severity" of these problems, and one of the important manifestations of severity is the cost of getting these wrong.

With the exception of certain business domains where getting things perfectly right the first time is paramount, such as financial transactions and safety/security-critical programs, the cost of typos and inconsistencies is generally minuscule because they're practically costless to debug and fix. In fact, they often cost so little that we end up instinctively fixing typos and inconsistencies countless times throughout our day-to-day development, often without ever making a conscious effort to identify them. Even if the occasional typo makes it past our tests and into production, these issues are almost always trivial to debug and fix (and if they're not, and you don't happen to be in one of those domains where getting things right the very first time is paramount, then oftentimes that is a smell of some deficiency in your deployment/monitoring setup, or a manifestation of some other more subtle but more severe problem listed on the slide, such as brittleness/coupling, place-oriented programming, or weak support for concurrency/names).

Of course, being able to outright eliminate these entire classes of errors is a legitimate benefit of a static type system, and Rich Hickey acknowledges this benefit later in the talk. If you're in one of those business domains where you absolutely cannot afford to let typos and inconsistencies sneak their way into your production systems, because they'd be disproportionately costly to fix or would result in consequences that you're not willing to accept, then static typing can be very useful for providing those safeguards.

However, I think he also brings up an important point: static type systems, and the act of flowing types around your system, are often a significant source of coupling, and coupling is a much more severe problem when it comes to maintaining a piece of software over the long term. So it's important for each team to assess that tradeoff carefully, in order to gauge whether they're willing to take on that additional coupling in exchange for eliminating the possibility of typos and inconsistencies reaching production.

thesz
You may decrease coupling with the use of interfaces or type classes.

This way you do not use concrete types in the construction; instead, you link things together with requirements on (and between) the type parameters.

Please take a look at the Expression Problem: https://en.wikipedia.org/wiki/Expression_problem

It is, in essence, the question of how to design a loosely coupled system when high coupling would initially be expected. There are solutions for most languages, probably including your statically typed language of choice.
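
As a minimal sketch of the decoupling point above (in Python, using a structural Protocol as the interface; the names are hypothetical), the report function depends only on the protocol, so concrete implementations can change or be added without touching the caller:

    from typing import List, Protocol

    class Priceable(Protocol):
        def price(self) -> int: ...

    class ConcertTicket:
        def __init__(self, cents: int) -> None:
            self.cents = cents

        def price(self) -> int:
            return self.cents

    # Depends only on the Priceable interface, never on ConcertTicket itself.
    def total(items: List[Priceable]) -> int:
        return sum(item.price() for item in items)

    print(total([ConcertTicket(2500), ConcertTicket(1800)]))  # 4300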

s6o
After watching that talk, I would have loved for somebody in the audience to point out incidents like these https://raygun.com/blog/10-costly-software-errors-history/ and ask for Hickey's comments.

Needless to say, I really don't agree with that list, as history shows that inconsistency, typos, the lack of basic units and of the ability to express them in a computation, etc., have been more catastrophic than he believes them to be. And these issues cannot really be fixed without a proper type system that will not allow stuff like that to get into production.

And a good type system cannot be optional (e.g. Clojure's spec). There are a number of studies showing that willpower alone is not enough; there needs to be an environment - a type system - that helps to avoid and catch mistakes.

sooheon
I see nothing in those 10 costly errors that fundamentally disagrees with Hickey's ranking, or that clearly advocates for the need for type systems.

I also question the value of putting too much stock in outliers. How many of us will be working on the next Mars orbiter?

flavio81
>After watching that talk, I would have loved for somebody in the audience to point out incidents like these https://raygun.com/blog/10-costly-software-errors-history/ and ask for Hickey's comments. Needless to say, I really don't agree with that list, as history shows that inconsistency, typos, the lack of basic units and of the ability to express them in a computation, etc., have been more catastrophic than he believes them to be.

From the list you cite, only three of the 10 errors are caused by "typos", "lack of basic units", or type errors.

And this is just a particular list.

Recalling Rich Hickey's list of the things that cause the most problems in code, from most to least damaging:

  Domain Complexity
  Misconception

  10x

  Place Oriented Programming
  Weak support for information
  Brittleness/Coupling
  Language model complexity
  Parochialism/context
  weak support for Names
  Distribution
  Resource utilization
  Runtime intangibility
  Libraries
  Concurrency

  10x

  Inconsistency
  Typos
I'm pretty sure that if we compiled a list of real-life cases where the failure causes Rich Hickey ranks as major have caused trouble, that list would be immense compared to the one for "typos".
vanilla_nut
I haven't written as much Rust as I'd like, but one of my favorite things about the language (there's a lot of competition for "favorite" things about Rust, to be fair) is the way that types are required in function signatures, but not within functions. So if you can see the declaration of something, you can always tell what it is -- it's right there. And if something is computed inside a function somewhere, or if you're just trying to get a sense of the types passed around in a library, everything you need is on the surface, in the signatures. I personally find the lack of declared types in Python/JavaScript absolutely debilitating on large projects, so that really appeals to me.
carlmr
You should check out F#! It has amazing type inference (even for functions in some simple cases, although I prefer my functions with explicit types). The type system is still strong and static, so it looks Pythonic and behaves C#-ic.

https://fsharpforfunandprofit.com/posts/type-inference/

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.