HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Maybe Not - Rich Hickey

ClojureTV · Youtube · 172 HN points · 16 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention ClojureTV's video "Maybe Not - Rich Hickey".

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
You may have seen it already, but Rich Hickey, the creator of Clojure, gave a great talk about this: https://youtu.be/YR5WdGrpoug
yakshaving_jgt
I’ve seen a few of his talks, but I do not accept his position on types more generally. I wrote a rebuttal to a similar talk he gave a few years ago.

https://jezenthomas.com/rich-hickey-doesnt-know-types/

amatecha
Huh, good post, thanks. What are you usually working with, Haskell? Or I saw Elm mentioned on your blog. I've been considering learning Clojure as something new to help broaden my skills (currently using TypeScript in my day-to-day and some noob-level knowledge of Haskell). Your post has me reconsidering, haha >_>
yakshaving_jgt
Yes, I mostly work with Haskell. I'm the CTO at Supercede and our project is currently ~100,000 lines of Haskell code.

I think every developer ought to at least learn Elm. Being forced to think so lucidly about the types and effects of your systems is unreasonably effective, and I think it shapes the way you then write code in other languages.

Fair enough, I thought it was snark so I responded with snark.

I think you should try both. Lisp as a language gives you a lot more freedom than Clojure, but Clojure has a more consistent and elegant language design. Clojure focuses on mostly-pure functional programming with immutable data structures, while Lisp lets you do pretty much anything anywhere, including hacking deep into the runtime itself (Clojure lets you do this to some extent too, but not as much as Lisp).

Some videos that are worth viewing, which together I think provide a good sense of how the languages differ:

- "Common Lisp for the Curious Clojurian" by Alan Diepert. A very lucid side-by-side comparison of Clojure and Common Lisp; assumes some but doesn't really require much knowledge of Clojure. https://www.youtube.com/watch?v=44Q9ew9JH_U

- "Maybe Not" by Rich Hickey. An unusual but thoughtful take on "types" in programming, which I think cuts to the heart of what Clojure is all about. https://www.youtube.com/watch?v=YR5WdGrpoug

- "Immutable Conversations - Common Lisp" interview with Alejandro Serrano and Michał "phoe" Herda. Demonstrates the deep programmability and flexibility of the Common Lisp runtime, including its unique "condition system". https://www.youtube.com/watch?v=pkqQq2Hwt5o

qPM9l3XJrF
Thanks!
Jul 29, 2021 · j-pb on How Dwarf Fortress is built
"Maybe Not" talks about these compositional ideas: https://www.youtube.com/watch?v=YR5WdGrpoug
Sep 13, 2020 · tantaman on From Rust to TypeScript
Please please please don't perpetuate the use of Either or Maybe types :(

They make software very hard to maintain and end up breaking anyone who depends on a library that uses them.

Rich Hickey (author of Clojure) has a great discussion of Either and Maybe here: https://www.youtube.com/watch?v=YR5WdGrpoug that dives into their unsoundness.

darksaints
The author of a dynamically typed language thinks strongly typed constructs are bad? You don't say...
mattnewton
Rich’s argument seems to be that we should have better tools in the language type system. If your language doesn’t give you those tools, these are still very useful types. In addition, the library maintenance issues he raises seem to me trivially solvable in languages that allow function polymorphism or that have refactoring tools.
Because you mentioned Result specifically: I've recently seen a talk by Rich Hickey that (for me) made a pretty strong case against Result/Maybe/etc. and for union types instead. Maybe you'll find it interesting: https://www.youtube.com/watch?v=YR5WdGrpoug
Jun 14, 2020 · 2 points, 0 comments · submitted by tosh
Rich Hickey's talk "Maybe Not" has some interesting thoughts on this: https://youtu.be/YR5WdGrpoug
I would say Clojure more than repays the invested time. Not just the language itself: the wider approach to software composition, feature accretion vs. backward compatibility, and schema'd dynamic typing seem beneficial no matter what development tools you use. Maybe check out the talks below and see if they don't enRich you as a software engineer.

https://www.youtube.com/watch?v=ROor6_NGIWU The Language of the System

https://www.youtube.com/watch?v=MCZ3YgeEUPg Design, Composition, and Performance

https://www.youtube.com/watch?v=oyLBGkS5ICk Spec-ulation

https://www.youtube.com/watch?v=YR5WdGrpoug Maybe Not

I'd be interested to hear your thoughts on this perspective: https://youtu.be/YR5WdGrpoug?t=1058
Feb 17, 2019 · 1 point, 0 comments · submitted by mromnia
I bounce back and forth between TypeScript and Clojure. I've been keeping a close eye on Clojure's upcoming spec as it seems to be a nice opt-in, run-time type-checking solution. I like the flexibility and quickness with which dynamic languages let me iterate, but I want the safety. I want at least some of the benefits of both.

So yes, they (dynamically typed languages) have some advantages. Sometimes when I'm iterating quickly, especially on new projects, I don't want to fight with the compiler. BUT, I vastly prefer TypeScript to JavaScript because JS has a lot of foot-guns. I think when we get the existential operator, it will have fewer foot-guns, but I will probably never again write a new project for work in JS from scratch.

I really love the ideas Rich Hickey put forth in his most recent Clojure conj talk: https://www.youtube.com/watch?v=YR5WdGrpoug

The ability to spec out the shape of an object, and then in the different contexts in which you need some of that data, specify what you need, is an awesome solution to optionality (Watch the talk!). I absolutely love how much thought RH has put into the design of Clojure, and I think the final release of spec will be a close to perfect balance of dynamism and opt-in safety.

Of course, TS is a superset of JS, so adopting it little by little sorta gets me that balance I want.

sime2009
> The ability to spec out the shape of an object, and then in the different contexts in which you need some of that data, specify what you need, is an awesome solution to optionality

You realise you can do the same thing in TS with its structural type system?

wvenable
> I don't want to fight with the compiler.

The only time you're ever fighting the compiler in a modern statically typed language with type inference is when you've made a mistake.

SmooL
Or you're trying to do something fancy that the compiler/type system doesn't support
farresito
Is it really the compiler/type system's fault, though? Sounds more like a language's design or specification flaw.
wvenable
If you're doing something so "fancy" that the compiler/type system doesn't support it, you should probably re-think it.
Nimelrian
Higher kinded types are currently not supported by the TS compiler and would be a highly appreciated feature.
nicoburns
Perhaps, but if that means doing it in a more cumbersome way, or with more boilerplate, then that would be an area where dynamic languages have an advantage.

IMO, one of the great things about TS is that you can opt out of the type system for specific functions if you really want to.

zukzuk
Nah, there are still some patterns, especially around metaprogramming, that TypeScript doesn't support. One example is the inability to refer to the current class inside a static method. This makes it really awkward to write things like factory methods, which in turn forces you to either resort to copy & paste coding or fight with the compiler. See https://github.com/Microsoft/TypeScript/issues/5863 for example.
"Maybe Not" by Rich Hickey: https://www.youtube.com/watch?v=YR5WdGrpoug

Another excellent talk by the creator of Clojure, and like the previous ones, relevant for all programmers.

maxhallinan
I thought this was one of the notably bad talks this year. The whole premise that a function of Maybe a should be a function of a without an API change is neither intuitive to me nor really justified by Hickey. Different things are different. It's sad to see someone build such a wall around himself when faced with something (type theory) that he doesn't understand.
gnuvince
The sad thing is that Rich Hickey had some very good videos when Clojure was a new thing back in 2008–2009. Unfortunately, I've disagreed vehemently with most of his talks since then. In this case, it's completely illogical that a function `Maybe a -> b` should be callable as if it were a function `a -> b`. Do you want to know how I know? Because it would be just as illogical to allow a function `Vec a -> b` to be called as `a -> b`. And Rich must agree because Clojure itself does not support that!

I've learned that videos of his talks are just not worth my time.

StreamBright
I think Rich does not like Some x | None because he does not much care for simple pattern matching. This is why Clojure does not have first-class pattern matching syntax (you can emulate it, there is a library, it's only X lines of code, etc., but still).

In this regard I really like OCaml:

    let get_something = function
      | Some x -> x
      | None   -> raise (Invalid_argument "Option.get")
This is very simple to understand, reason about, and read. I could not say the same about the examples Rich was trying to make in the video. He kind of lost me with transducers and the fact that Java interop with Java 8 features is rather poor.
CleanShirt
I was surprised it was a separate library in Clojure and doesn’t seem to be something that gets used much. Puts me off that it’s missing one of the most attractive features of functional languages.
owl57
> Because it would be just as illogical to allow a function `Vec a -> b` to be called as `a -> b`. And Rich must agree because Clojure itself does not support that!

Maybe Clojure's standard library just isn't that focused on vectors? Python stdlib doesn't support this as well, but NumPy does.

nadagast
Why is it illogical to say that a Maybe a -> b should be callable as if it were a -> b?

His point is that Maybe a should be composed of all the values of a, plus one more value, nil. A value of type a is a member of the set (nil + a). Why should having a more specific value reduce the things you can do with it? It breaks composition, fundamentally. It's like saying (+) works on integers, but not on 3. I'm saying this as someone who really enjoys type systems, including Haskell.
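
A minimal Haskell sketch of the point (with a made-up `describe` function):

    describe :: Maybe Int -> String
    describe Nothing  = "no value"
    describe (Just n) = show n

    -- describe (Just 3)  -- compiles
    -- describe 3         -- rejected: 3 :: Int is not a value of Maybe Int,
    --                    -- the caller has to wrap it as Just 3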

StreamBright
My problem is that I don't know when to expect nil from a call, because in Java null is part of every type and you can happily receive a null from anything; the compiler won't give you a warning. In OCaml I know what to expect because Some x | None is super simple to reason about. I can never receive a null, a nil, or other things that somehow satisfy the type requirements. Clojure is great for untyped programming (everything is an Object after all), but I would still like to see something reasonable like Erlang's {:ok, x} | {:error, error} or OCaml's Some x | None. It is not an accident that many languages that value reliability implemented it like that.
nadagast
Yes, the default of a lot of languages, (Java, C, etc) where nil is implicitly a member of every other type is a bad default. But that's a separate question.
joel_ms
> Why is it illogical to say that a Maybe a -> b should be callable as if it were a -> b?

Fundamentally because it would require you to conjure up a value of type b from nowhere when the Maybe a is Nothing. If we view the function type as implication this would not be valid logically without some way of introducing that value of type b.

You could imagine some function from Nothing -> b that could rescue us. But since it only handles one case of the Maybe type, it is partial (meaning it could give undefined as an answer). There are basically two total functions that we could change it to:

   * Maybe a -> b, in which case we are back where we started.
   * Unit -> b, which essentially is just b, which can be summed up as meaning we need some kind of default value to be available at all times.
So to be able to call Maybe a -> b as a -> b, you would need some default value available at all the call sites for a -> b.

Now this is only "illogical" because we don't postulate a value of type b to be used as this default.

> It's like saying (+) works on integers, but not on 3

No, it's like saying (+) must work on all integers AND a special value nil that is not like any other integer, but is somehow included in them and in all other data types. We can't do anything with this nil value since it doesn't carry any data, so in the case of (+) we would essentially have to treat it as an identity element.

This is good though, since (+) has 0 as an identity element, so we can just treat nil as a 0 when we encounter (+). However, when we want to define multiplication we still need to treat nil as an identity element (since it still doesn't carry any data), except the identity element for multiplication is 1. This would be repeated for every new function that deals with integers.

So by mashing together Maybe and Integer we have managed to get a Frankenstein data type with an extra element nil which sometimes means 0 and sometimes means 1.

Why not just decompose them into Maybe and Integer and supply the default argument with a simple conversion function like fromMaybe?
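
A small sketch of that decomposition (made-up function names), where the default lives at the use site instead of inside a nil baked into Integer:

    import Data.Maybe (fromMaybe)

    addMaybe :: Maybe Int -> Int -> Int
    addMaybe m x = fromMaybe 0 m + x   -- 0 is the identity element for (+)

    mulMaybe :: Maybe Int -> Int -> Int
    mulMaybe m x = fromMaybe 1 m * x   -- 1 is the identity element for (*)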

(FWIW, I actually agree with Hickey that using Maybe in api design is problematic and I've encountered what he's talking about. But while that might be an argument for where he wants to take Clojure, it's not an argument for dismissing type theory the way he does.)

owl57
You got it backwards. These problems arise when you want to use an (a -> b) function as (Maybe a -> b), not vice versa.
None
None
joel_ms
Yeah, you're right, I got confused when interpreting the parent comment. Thanks for pointing it out!

I guess I overlooked it because the other way is so logically trivial, since it basically boils down to A -> B => (A || Nothing) -> B, which is just an or-introduction. So if you wanna implement Maybe generically the "work" lies on the other side.

But since Hickey's argument sort of is that we shouldn't implement Maybe generically, I guess my argument here becomes circular. (Begging the question maybe?)

nadagast
> I guess I overlooked it because the other way is so logically trivial, since it basically boils down

Yeah, that's (part of) Hickey's point. That the "best" type systems fail this test, and require manual programmer work to solve this problem. Again, I'm saying this as someone who really appreciates Haskell.

dragonwriter
> His point is that Maybe a should be composed of all the values of a, plus one more value, nil

No, that's a simple union type. There are very good reasons for Maybe to be different than unions (Maybe can nest meaningfully, simple unions can't.)

Maybe (Maybe a) is a valid, and often useful, type.

Of course, if you have a function of type a -> b and find out you need a more general Maybe a -> b, instead of a breaking change, you just write a wrapper function that produces the correct result for Nothing and delegates to the existing function for Some(a) and you're done without breaking existing clients.

(Now, I suppose, if you had something like Scala implicits available, having an implicit a -> Maybe a conversion might sometimes be useful, though it does make code less clear.)

nadagast
I agree that there are reasons for Maybe a to be a different type from (a | nil) but there are also good reasons to prefer (a | nil). Like most things, it's a set of tradeoffs. What I appreciated about this talk was that he went into the benefits of thinking about types in this way. It's (relatively) common to see the benefits of Maybe a explained, but more rare to see the benefits of full union types explained.
agentultra
I agree.

Further his exposition on `Either a b` was built on a lack of understanding of BiFunctors.

The icing on the cake was his description of his own planned type theory. What he described was, as I could decipher from his completely ignorant ravings, a row-based polymorphic type system. However he passes off his insights as novel rather than acknowledging (or leveraging) the decades of research that have gone into type theory to describe the very system he is trying to build.

Worse, he continued to implore his audience to spread FUD about type theory, claiming several times, "it is wrong!"

amelius
Well, in his defense, type theory is supposed to make software engineering simpler, not more difficult. So if even he doesn't understand it, then how can we expect a random programmer to?
StreamBright
Well, software engineering is the only engineering field that continuously produces half-baked, unreliable, crappy solutions. Other fields cannot afford such an attitude towards their products. I think a simple (inferred) type system is pretty useful for increasing correctness and it helps you increase reliability (even though it does not prevent all of the mistakes you can make in software).
chubot
Are you sure that he doesn't understand it, or is it possible that you haven't worked with the same scale of systems he has?

Here's my response to his talk, which I found insightful:

https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe

Also, I think your comment suffers from the problem here, where you invoke "type theory" without any elaboration:

https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_ioeyob

Rich has taken the time to explain his thoughts very carefully, and he's clear about what his experience is and which domains he is talking about. Whereas I see a lot of vague objections without specifics that aren't backed up by experience.

agentultra
I haven't seen any of Rich's contributions to any major open source Haskell project. I can't really speak to his experiences with Haskell in proprietary code bases. Has he been a prolific Haskell contributor/hacker/user?

From his talk he hasn't convinced me that he understands Haskell's type system. Not only does he misunderstand the contract given by `Maybe a` but he conflates `Either a b` with logical disjunction which is definitely a false pretense. He builds the rest of his specious talk on these ideas and declares, "it [type theory] is wrong!"

He goes on to describe, in a haphazard and ignorant way, half of a type theory. As I understood it these additions to "spec" are basically a row-based polymorphic type system. Why does he refuse to acknowledge or leverage the decades of research on these type systems? Is he a language designer or a hack?

I can't even tell to be honest. He has some good ideas but I think this was one of his worst talks.

joshlemer
I think this comment is a bit too harsh, and I would rather we just discuss the points he makes, not his credibility. The man has decades of experience in software system development and architecture, and has built one of the most popular programming languages in the world, as well as Datomic. If that hasn't given him the right to give his opinion during the keynote of a conference for the language he built, then I don't know how much you want from him.

To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?

Can you actually say what he got wrong about Maybe/Either? Because, speaking as a Scala developer, it seems like he understands it perfectly well.

agentultra
> I think this comment is a bit too harsh, and I would rather we just discuss the points he makes, not his credibility.

I was trying to address the points he made but parent appealed to his authority which I haven't found convincing.

> To apply your own standards, have you been a prolific Clojure contributor/hacker/user? Have you contributed to any major open source Clojure projects?

I have been a user on a commercial project. It's a fine enough language. I wouldn't call myself an expert. And I haven't given a keynote address where I call out Clojure for getting things I don't understand completely wrong.

> Can you actually say what about Maybe/Either he got wrong because it seems like he understands it perfectly well, speaking as a Scala developer.

His point about Maybe was misguided at best. The function `a -> Maybe a` is a different function than `a -> a`. Despite his intuition that the latter provides a stronger guarantee and shouldn't break code, callers may be expecting the Functor instance that the `Maybe` provides, and therefore it is a breaking change.

His point about `Either a b` was perhaps further from the mark. It is not a data type that represents logical disjunction. That's what the logical connective, disjunction, is for. Haskell doesn't have union types to my knowledge. Either is not a connective. It's a BiFunctor. His point that it's not "associative" or "commutative" or what-have-you simply doesn't make sense. In fact he calls Either "malarkey" or, charitably, a "misnomer."

To his credit he says he's not bashing on type systems, only Maybe and Either. But he later contradicts himself in his conclusions. He goes on about reverse and how "the categoric definition of that is almost information free." And then, "you almost always want your return types to be dependent on their argument..." So basically, dependent types? But again, "you wouldn't need some icky category language to talk about that."

So again, I think he has some surface knowledge of type systems but I don't think he understands type systems. I'm only a beginner at this stuff and his errors were difficult to digest. I think if he wanted to put down Maybe and Either he should've come with a loaded weapon.

He's had better talks to be sure! I just don't think this one was very good. And in my estimation was one of the poorer talks this year.

ajss
> The function `a -> Maybe a` is a different function than `a -> a`. Despite his intuition that the latter provides a stronger guarantee and shouldn't break code, callers may be expecting the Functor instance that the `Maybe` provides and therefore is a breaking change.

I don't really follow that. How can it be a breaking change? Can you give an example?

agentultra
If you're building a parser, you may use: http://hackage.haskell.org/package/parsec-3.1.13.0/docs/Text...

Where your downstream parsers match on `Nothing` and assume the stream hasn't been consumed in order to try an alternative parser or provide a default.

If you change an equation to use `option` instead you have a completely different parser with different semantics.

I was thinking of a case where I use your function in a combinator that depends on the Functor and Monoid instances provided by the `Maybe` type. If you change your function to return only the `a` and it doesn't provide those instances then you've changed the contract and have broken my code. And I suspect it should be easy to prove the equations are not equivalent.
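
For instance, a caller might lean on that Functor instance like this (a hypothetical sketch; `lookupAge` stands in for a library function):

    -- hypothetical library function returning Maybe
    lookupAge :: String -> Maybe Int
    lookupAge name = if name == "alice" then Just 42 else Nothing

    -- this caller fmaps over the Maybe; if lookupAge is later "strengthened"
    -- to return a plain Int, this no longer type-checks
    nextAge :: String -> Maybe Int
    nextAge name = fmap (+ 1) (lookupAge name)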

joel_ms
> His point about `Either a b` was perhaps further from the mark. It is not a data type that represents logical disjunction. That's what the logical connective, disjunction, is for. Haskell doesn't have union types to my knowledge. Either is not a connective. It's a BiFunctor. His point that it's not "associative" or "commutative" or what-have-you simply doesn't make sense. In fact he calls Either "malarkey" or, charitably, a "misnomer."

I don't agree with Hickey, but isn't there a connection between Either as a basic sum type and logical disjunction via the curry-howard correspondence?

And wouldn't "forall a b. Either a b" be the bifunctor, since it has a product type/product category as it's domain, while the result "Either X Y" (where X and Y are concrete types, not type variables) has the semantics of logical disjunction ie. it represents a type that is of type X or type Y?

agentultra
Yes, there is a connection, which makes it all the more strange: Either does correspond as you say, which means it has the same properties as the connective. In the correspondence, the functor arrow is implication and a pair of functions ‘a -> b’ and ‘b -> a’ is logical equivalence. Using these we can trivially demonstrate that Either satisfies the associativity laws up to isomorphism.

That’s what makes his talk strange. He talks about types as sets and seems to expect Either to correspond to set union? If he understood type theory then he’d understand that we use isomorphism and not equality.

You can express type equality in set theory and that is useful and makes sense.

But it doesn’t make sense in his argument.

Malarky? Come on. Doesn’t have associativity? Weird.
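
For what it's worth, the associativity-up-to-isomorphism claim is a short exercise in Haskell:

    -- Either a (Either b c) and Either (Either a b) c are not equal types,
    -- but these two total functions witness an isomorphism between them
    assocL :: Either a (Either b c) -> Either (Either a b) c
    assocL (Left a)          = Left (Left a)
    assocL (Right (Left b))  = Left (Right b)
    assocL (Right (Right c)) = Right c

    assocR :: Either (Either a b) c -> Either a (Either b c)
    assocR (Left (Left a))   = Left a
    assocR (Left (Right b))  = Right (Left b)
    assocR (Right c)         = Right (Right c)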

joshuamorton
But protos and Haskell's type system solve different problems.

Perhaps, maybe, public apis like protos shouldn't encode requiredness of any piece of data (I actually fully agree with this).

But that says nothing about whether or not my private/internal method to act on a proto should care.

Another way of putting this is that requiredness shouldn't be defined across clients (as in a proto), but defining it within a context makes a lot of sense, and maybe/optional constructs can do that.

Or in other words, other people may use your schema in interesting and unexpected ways. Don't be overly controlling. Your code, on the other hand, is yours, and controlling it makes more sense. So the arguments that rely on proto stuff don't really convince me.

(I work at Google with protos that still have required members).

chubot
> But protos and Haskell's type system solve different problems.

That's pretty much my point. Hickey is very clear what domains he's talking about, which are similar to the domains that protos are used in -- long-lived, "open world" information systems with many pieces of code operating on the same data.

People saying "Rich Hickey doesn't understand type systems" are missing the point. He's making a specific argument backed up by experience, and they are making a vague one. I don't want to mischaracterize anyone, but often I see nothing more than "always use the strictest types possible because it catches bugs", which is naive.

I agree with your statement about private/internal methods. I would also say that is the "easy" problem. You can change those types whenever you want, so you don't really have to worry about modelling it incorrectly. What Hickey is talking about is situations where you're interfacing with systems you don't control, and you can't upgrade everything at once.

maxhallinan
You're right: there isn't much substance in my comment. I just meant to qualify the parent comment by saying that not everyone found this to be a "Best talk of 2018". A lot of good arguments for both sides were made in other places and it felt a bit obnoxious to reopen the argument here, where it's off topic. I should have simply stated that this is a controversial talk instead of adding my two cents.

Here's my understanding of one of your points: required fields in a data serialization format place an onerous burden on consumers. So in proto3, every field is optional, and this permits each consumer to define what's required for its own context.

Unfortunately, I can't find any connection between the dilemma at Google and the suitability of the Maybe type. You say this:

>The issue is that the shape/schema of data is an idea that can be reused across multiple contexts, while optional/required is context-specific. They are two separate things conflated by type systems when you use constructs like Maybe.

I agree - the value of a field might be a hard dependency for one consumer and irrelevant to a second consumer. But Maybe has nothing to do with this. If the next version of protobuf adds a Maybe type, it would not obligate consumers to require fields that they treat as optional. It would just be another way to encode the optionality, not optionality as a dependency but optionality of existence. A required input could still be encoded as a Maybe because the system can't guarantee its existence. So Maybe is simply an encoding for a value that isn't guaranteed to exist. And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.

A second point that stuck out to me:

>I didn’t understand “the map is not the territory” until I had been programming for awhile. Type systems are a map of runtime behavior. They are useful up to that point. Runtime behavior is the territory; it’s how you make something happen in the world, and it’s what you ultimately care about. A lot of the arguments I see below in this thread seemingly forget that.

Your worldview here is very different from my own, and perhaps while this difference exists, there won't be much mutual understanding. I don't find any relationship between types and anything I understand as "runtime behavior". Types are logical propositions. The relationship between programs and types is that programs are proofs of those propositions. Runtime does not enter into the picture. That's why constraint solvers work without running the program.

chubot
If that was your intention, simply saying "I didn't find this talk useful" would suffice. It's not necessary to say that "Rich Hickey doesn't understand type theory".

I would say that "X doesn't understand type theory" is becoming a common form of "middlebrow dismissal" [1], which is discouraged on HN.

> And that's exactly the scenario you described in proto3 - now every field could be encoded as a Maybe.

No, in protobufs, the presence of fields is checked at runtime, not compile time. So it's closer to a Clojure map (where every field is optional) than a Haskell Maybe.

This is true even though Google is using statically typed languages (C++ and Java). It would be true even if Google were using OCaml or Haskell, because you can't recompile and deploy every binary in your service at once (think dozens or even hundreds of different server binaries/batch jobs, some of which haven't been redeployed in 6-18 months.) This is an extreme case of what Hickey is talking about, but it demonstrates its truth.

> I don't find any relationship between types and anything I understand as "runtime behavior".

Look up the concept of "type erasure", which occurs in many languages, including Haskell as I understand it. Or to be more concrete, compare how C++ does downcasting with how Java does it. Or look at how Java implements generics / parameterized types.

[1] http://www.byrnehobart.com/blog/why-are-middlebrow-dismissal...

joshlemer
>that a function of Maybe a should be a function of a without an API change is neither intuitive to me nor really justified by Hickey

He spent many minutes motivating it. If you provide some functionality to a client (as a library, or a service, or more abstractly), and then later want to relax the constraints that you require from those clients, this should be an acceptable change that they shouldn't have to worry about / be broken by. Like if all of a sudden some REST API no longer requires a specific HTTP header to be present in requests, then this change shouldn't break clients that have been including that formerly-required header all along.

Similarly, if you provide something to clients, and you add functionality to provide a response in all circumstances, not just some, then this should not break clients either.

This clearly is not true of `Maybe` / `Option[T]` and I think it's a pretty fair critique. Maybe we should be using Union types in these situations instead.

patrec
His argument for spurious API breakage is strictly logically correct, but seems practically dubious to me. When have you ever had to unnecessarily break API compatibility because something you thought was an Option[T] result turned out to be really a T, always? Maybe I'm wrong and someone will post some convincing examples, but I currently don't see this as a problem that needs solving.

Union Types don't compose straightforwardly because they flatten; maybe this is desirable in some cases but the rationale in Hickey's talk leaves me unconvinced. The only practical use case for union types I'm aware of is smoother interfacing to dynamically typed APIs; any nice examples for something beyond that?

IshKebab
I think you're kind of both agreeing. It would be nice if Maybe did work like he suggested, but in practice it's not that big a deal.
gavinpc
> When have you ever had to unnecessarily break API compatibility because something you thought was an Option[T] result turned out to be really a T, always?

That would be a breaking change. And should be, if you're into that sort of thing.

The objection is to the opposite case: What was a T is now an Option[T]. I don't know Scala specifically, but that's a breaking change in every typechecked language I know. Rich is arguing that it shouldn't be. But it could be possible even in typed languages through union types. For example, you can do this in TypeScript by changing T to T | undefined, which is a superset of T.

patrec
Nope, it's not the opposite case, I was just too lazy to spell it out. Which way around it is depends on whether it's a return value or a parameter. Covariant vs contravariant. If it's a parameter, an API change from T to Option[T] shouldn't break (you require less), whereas with a return type it's from Option[T] to T (you promise more).
gr__or
To be fair the "result"-part in "something you thought was an Option[T] result turned out to be really a T" makes it sound like you were speaking of the return-type to me as well. I appreciate the elboration though!
gavinpc
Yes, that was my fault, I overlooked "result," and I also appreciate the clarification.

I think the point remains that, while this is not a breaking change from a contractual viewpoint, most type systems would deem it incompatible.

bfung
After having code reviewed a lot of Haskell code and managing library dependencies, his talk makes a TON of sense.

Some refactorings of code are basically just relaxations of requirements or tightenings of return values - and Maybe is littered everywhere / changed everywhere. It just makes the code hard to read and creates a lot of busy work, to no real value.

This is the same in Java code. Too many Optional<MyActualClass> / Nullable everywhere. Unit tests littered everywhere to deal with Optionals. But no real functionality change or new information for future maintainers. Just extra cruft.

spec seems to be work picked up where Optional / Maybe has left off.

None
None
noxecanexx
I actually went ahead and learned Haskell after one of his recent talks (my motivation was the vehement rejection from the Haskell community... on Reddit mostly). I haven't turned back since. Either he doesn't really understand how types are used, or he's intentionally "bad-mouthing" types (which I don't think he's doing).
AlexCoventry
The second half, about his work on spec, is great.
geokon
Are the things he illustrates already implemented in Clojure? I'm kinda new to Clojure and I see there is deps.edn and there is clojure.spec, but have they come together into the system he describes? Or is that still "in the works"?

I was left a little vague on how it'd work in the end. I guess your program would specify what spec/input-output it expects from a library's API and then the library git history is traversed till you hit the last "version"/hash that matches the spec you require. Then we can just get rid of version numbers entirely and I guess you would just get a warning when a library made a breaking change and your dependency is stuck on an old version/hash in the git tree

Am I understanding that right? I get that the new way of development he describes would prevent breaking changes entirely (though it honestly sounds messy with lots of legacy stuff floating around)

olodus
Was about to suggest this myself. I see a few that didn't like this and I can see why, but I thought it was interesting to listen to. Always interesting to hear someone knowledgeable argue for something you don't think you agree with. It is also interesting since Rich is in the quite unique position of handling a functional language with dynamic types (a dying breed in the functional space imo). I don't think Rich fully convinced me to change my mind, but he made me think about when and why Maybe is needed.
sifoobar
Dying breed based on what? They're different compromises, there is nothing inherently superior about static types. It's more popular right now, because we're very much into rules at the moment; but I have a feeling that's peaking.

Typically, you need to move back and forth a number of times before it clicks. So you have all these people on the net fighting over which end of the spectrum is the Right one, and next month they're on opposite sides.

They're tools, hammers, screw drivers whatever; it doesn't make any kind of sense to identify with them. Unless your goal is to be a tool, I guess.

olodus
Wow wow, chill out. It wasn't meant as a negative comment. I was just making the observation that most new languages are statically typed and that a lot of what you hear in the functional space is about types (type theory, dependent types, categories...), though this might just be what I've had in front of me recently. I retract my statement. I love the dynamic functional languages. Erlang and Clojure are amazing (now that I think about it, Elixir is a lot in the news, so as I said my statement probably was wrong). And as I said in my previous comment, I think Rich's talk made some really interesting points.
olodus
And I don't consider clojure dying at all btw. In the field of data science especially it is thriving. I was tired ok... So... There...
nickpsecurity
There is one advantage of formal specs in general that static types inherit: they can enforce a correctness policy on a design that works for all inputs with no runtime overhead. SPARK Ada eliminating entire classes of errors with mostly-automated analysis is one example. Rust blocking temporal and some concurrency errors without GC is another. I'll note they're both hard to eliminate with testing, too.

With that, comes the next benefit: less debugging and lower maintenance costs. It has to be balanced against annotation costs. Most projects claimed to come out ahead if it was just safety rather than full correctness. You can also generate tests and runtime checks from specs to get those benefits.

So, for high reliability with lower maintenance, there's definitely an advantage to knocking out entire classes of error. That leads to the last advantage I'll mention: the ability to warranty (market) and certify (regulators) your code as free of those defects [if the checker worked]. The checkers are also tiny on some systems. They rarely fail. That inspires extra confidence.

sifoobar
Sure, in return for bending code over backwards to fit rules you get guarantees; but pretending that is anything but a different compromise isn't helping. If Ada was all that, everyone would be using it; and that's not going to happen, because it's not a realistic approach to programming.

The academic approach doesn't look very constructive to me, never did. Once you have a language nailed down so hard that errors aren't possible, the complexity of dealing with it will be more or less the same as dealing with the errors in the first place.

I would much prefer a focus on more powerful tools that fit into the current "unsafe" ecosystem while offering a more gradual and flexible path to improved safety.

nickpsecurity
"If Ada was all that, everyone would be using it"

That's an argument that popularity = superiority. By the same logic, we should've stuck with COBOL for important apps since all the big businesses were using it. I have a feeling you don't write new apps in COBOL.

"the complexity of dealing with it will be more or less the same as dealing with the errors in the first place."

The best empirical comparison of C and Ada showed the opposite. All studies showed the safer languages had fewer defects, usually with more productivity due to less rework later on. The evidence is against your claim so far.

"I would much prefer a focus on more powerful tools that fit into the current "unsafe" ecosystem while offering a more gradual and flexible path to improved safety."

Me too. Rust and Nim are taking that approach. People are finding both useful in production so far.

fulafel
I think dynamic FP programming is actually doing quite well these days. Apart from Clojure & ClojureScript, there's Erlang and Elixir, and the Racket community, and in addition it's become popular to write FP code in Javascript using Ramda, immutable.js & etc - not to mention the FP nature of React.
slifin
Fundamentally changed how I think about modelling data
reikonomusha
I don’t like Rich’s appeal to human intuition and the physical world as justification for not using—and even disparaging—mathematical structures. His quote from memory was something like “There’s no such thing as a Maybe Sheep,” and that was aggravating to listen to. He then gives the simplest possible examples, and fails to convince me that his approach scales to the real world. At some point he starts talking about what a “person” is and what an “address” is and, somewhat in jest, he says something like “I know I know addresses are difficult this is just a simple example.” It’s precisely the difficult case that I want to see his “intuitive” view of programming actually work out.

Rich is a good speaker but I don’t find the line he toes to be agreeable.

zengid
He's a genius but I think he's getting bitter about how people are happily sticking with static strongly typed languages. At the end of the day he's trying to sell people on using Clojure, so of course he's going to rip on other languages.
Rich Hickey's "Maybe Not" should be watched by anyone who thinks nulls/nils/undefineds are okay. It should also be watched by anyone who think that Optional/Maybe/Nullable and good enough:

https://www.youtube.com/watch?v=YR5WdGrpoug

Yet `Maybe<T> | Option<T> | ...` is not an option (pun intended), as Rich Hickey explains here: https://www.youtube.com/watch?v=YR5WdGrpoug.

In effect, his argument is: 1) You have `public X Do(Y y)` changed into `public X Do(Option<Y> y)`. This will break your API. 2) You have `public X Do(Y y)` changed into `Option<X> Do(Y y)`. This will break your API.

Thus, do not use Option<T> or equivalent in your API's. Only use a language-supported construct such as C#8's upcoming `string?` and `string`.

bunderbunder
This is a spot where I've got to respectfully disagree with Mr. Hickey.

Changing a public API call that used to guarantee that it returned a value so that it might now return nothing is a breaking change, and, as an API consumer, I want my APIs to broadcast that change loudly. Compiler errors are a good (but not the only) way to do that.

Changing a public API member so that its arguments are now `Maybe[T]` is just silly. There's no need to introduce a breaking change there. Just overload it so that you now have versions that do and do not take the argument and get on with life.

If there's an argument to be made here, it's that statically and dynamically typed languages require different ways of doing things. In a statically typed language, I expect the compiler to keep an eye on a lot of these things, and I'm used to leaning on the compiler to catch things like a function's return value changing. In a dynamic language, I'm not.

I'm also, when working in a dynamic language, used to having to deal with the possibility that, at all times, any variable could contain data of literally any type. Removing nullability there changes the set of possible "this reference does not refer to what I expected" situations from (excuse the hand waving) a set with infinite cardinality to a set whose cardinality is infinity minus 1. If you think of NULL as effectively being a special type with a single value (call it "void"), then eliminating it reduces the number of classes of errors I have to worry about in a dynamic language by 0. I'm hard pressed to see any value there.

tatut
This is backwards. Rich did not advocate for changes that break promises.

The point in the talk is that "strengthening a promise" should not be a breaking change. Changing return type from "T or NULL" to always returning T. The case where you previously couldn't guarantee a result, but now you can.

The other case "relaxing a requirement" also should not be a problem. The case where you previously had to give me a value, but now I don't need it and can do my calculation without it.

bunderbunder
TBH, I'm happy with that being a breaking change, too. Just keep returning a T? that happens to always have a value until the next major version # increment (or whatever), and then make the breaking change, and then I get a clear signal that I can delete some lines of code.

The alternative seems like a path that, in any decently complex software project, ultimately leads to an accumulation of useless cruft that'll probably continue to grow over time as people keep copy/pasting code that contained the now-useless null-handling logic.

Maybe Not - Rich Hickey (Clojure), Nov 29, 2018

https://www.youtube.com/watch?v=YR5WdGrpoug

https://dotty.epfl.ch/docs/reference/intersection-types.html

kybernetikos
Yes, 'maybe not' is very relevant to this discussion, but few people seem to agree with my understanding of what he says about the right solution:

Optionality doesn't fit in the type system / schema, because it's context dependent. For some functions, one subset of the data is needed, for others a different subset. Trying to mash it into the type system / schema is just fundamentally misguided.

gambler
That's exactly what he says. People might disagree that this is the right approach, but I'm not sure what other ways anyone could interpret that talk.
fmjrey
Yes, he's rather explicit in saying Maybe is a poor tool. I'll have to watch the talk a second time to be sure, but I'm not sure he proposes any solution at the level of type systems. Not using Maybe or using Union is not what he is advocating. For him (and me too) types are the wrong thing to put data in because, among other things, it forces you back into PLOP. His point is to remove entirely the need to fill slots with nothing. Obviously the talk is more about specs than types. While tactfully avoiding the debate around types, he's still starting the talk with types to help those that are only there to decomplect their thinking.
fmjrey
Yes, I was going to mention that video, in which Rich makes an important point in my opinion: database tables, or objects, or structs, still live within the Place Oriented Programming (PLOP) mindset. That mindset was born in a time when disk and RAM were the expensive resources, so update-in-place was the default. I insist on the "in-place": you need to know where something is so you can update it. The downside of PLOP is that if you have no value for one of the slots in your generic form (be it a table, object, or struct), then what can you put there?

The alternative is to use data shapes that do not require something to be in a certain place. Hence the use of maps as the most basic data shape: you either have an entry in it, or you don't; there is no need for a null entry. Expand that thinking to databases, and you realise the table is not the right aggregate; instead you need to go one level down to something that Datomic calls a datom, or RDF calls a fact.

To summarise, PLOP forms that package together a set of slots to be filled magnify the issue of NULL/null/nil. Instead, make the slot your primary unit of composition, and use an aggregation of slots that does not force you to fill a slot with null when there is no value in the first place.

Conj always makes me so envious. So many good talks.

This one, the Rich Hickey one on Maybe, the Stuart Halloway one on REBL.

Any other must watches?

Hickey: https://youtu.be/YR5WdGrpoug Halloway: https://youtu.be/c52QhiXsmyI

kinleyd
Indeed. And it seems like ages since the last one, so I'm really enjoying the current gush of talks.
Nov 30, 2018 · 5 points, 0 comments · submitted by juliangamble
Nov 30, 2018 · 4 points, 0 comments · submitted by tosh
Nov 30, 2018 · slifin on Clojure REBL [video]
While we're talking about this conference, Rich Hickey just changed how I think about domain modelling forever with this talk, released about an hour ago: https://m.youtube.com/watch?v=YR5WdGrpoug

Strongly recommend a watch particularly if you're modelling something in a typed language

ledgerdev
Agreed, just finished watching, and that's an amazing talk.
cwhy
I was thinking of the exact same thing when writing my new project in Python, and implemented a poor man's version using typing.NamedTuple. But my idea came mainly from the extensible records in Elm. Although Elm also has Maybe, the idiom of "making impossible states unrepresentable" eliminates a lot of unwanted usage.
Nov 30, 2018 · 160 points, 42 comments · submitted by kgwxd
dwohnitmok
Hickey's `Maybe` example feels misguided (about breaking existing callers). If Maybe is an input just keep the original function around and provide a new one that wraps it.

    originalF :: x -> y

    f :: Maybe x -> y
    f Nothing  = newDefaultBehavior
    f (Just x) = originalF x
If `Maybe` is an output then existing callers SHOULD be broken, because they must now handle a new possibility of failure they didn't before. The fact that this doesn't happen in Clojure is actually a source of pain for me.

It's also rather interesting that Hickey comes to the same conclusion that optionality doesn't belong in a data structure, but for slightly different reasons: https://news.ycombinator.com/item?id=17906171

jtmarmon
> If Maybe is an input just keep the original function around and provide a new one that wraps it.

The problem with that is now you have to maintain two functions that do the same thing just to satisfy the type system.

> If `Maybe` is an output then existing callers SHOULD be broken,

He agrees with you on this, according to the talk

dwohnitmok
1. They don't do the same thing, though, i.e. it's not just a wrapper. Dumb wrappers are a problem in a lot of typed languages in other cases.

2. Ugh this is what I get for watching a video while coding on the side. You're right.

JBiserkov
>If `Maybe` is an output then existing callers SHOULD be broken, because they must now handle a new possibility of failure they didn't before.

Look at the example again. The output was Maybe y "yesterday", "today" the output is y. Existing callers shouldn't have to change - there is no new possibility they need to handle - they no longer need to deal with the possibility of Nothing.

kybernetikos
It's an interesting one - they shouldn't be broken, but they should update their code when they can since they have a bunch of code to deal with a case that can't happen any more.
matt-noonan
Even without wrapping, I'm not sure that "it breaks existing callers" is such an awful thing. Yes, it is slightly annoying. And if it was a language like Python, making a function suddenly start returning None is going to silently break existing callers, which is pretty awful. But in a language like Haskell, the compiler will find every broken callsite immediately, and the cleanup needed on the caller side is very straightforward.

All that aside, in my experience the case of parameters becoming optional or return values losing nullability actually does not happen that much in practice. It's a bit of a strawman.

remontoire
This only works if you own the codebase. If a library maintainer changes the signature from X to Maybe X, then it breaks all of the users of the library's code.

The problem with breaking changes is that the person who breaks things doesn't feel the pain.

matt-noonan
No, I'm arguing that this kind of breaking change is fine-ish, because the compiler helps the end users upgrade their code to the new API in a very straightforward way.

In other words, I'm arguing that this kind of breaking change is (1) rare, and (2) ok anyway if you have a strong enough type system that the compiler will just tell you what callsites need modification, and the modification is simple (as in this case).

robto
I think "don't break anyone's code" > "break code but tell user about" > "silently break code".

Rich Hickey has very strong feelings about breaking other people's code, and I tend to agree with him: don't do it. Breaking other people's code is mean and unnecessary.

matt-noonan
I don't disagree, except to say that "don't break anyone's code" is a bit better than "break in a way that the compiler catches and helps you fix", which is a GREAT deal better than "silently break code".

Paranoia about breaking code makes sense in languages that have few mechanisms for aiding in refactoring. It makes less sense when the language is acting as your teammate.

visibletrap
I'm an outsider to the Elm community, but from my perspective, the breaking changes in Elm 0.19 are still a pretty big deal that lots of people complain about.

Talking about refactoring, I think it's much easier when you are working with a codebase that is built by composing lots of small pure functions together. The place where it gets tricky is the part with side effects, where you have to carefully test it, either manually or with automation, anyway. Type errors will likely be caught during that process. With the Clojure REPL, we can test that out pretty quickly.

Skeime
Elm’s 0.19 update was not super well handled because it had quite a few breaks that are not easily dealt with (removing all custom operators, which makes certain code structures different; disallowing Debug in packages, where the new (arguably correct) way requires restructuring; etc.) Just introducing Maybes in function parameters or removing them from return types would have been quite painless, I think.
boogiewoogie
Maybe someone figured out they would get half the response the first time and the rest the second time.
yen223
If x represented a set with some values

  x = {RED, GREEN, BLUE}
It would be really swell if the optional type `Maybe x` were a strict superset:

  Maybe x = {RED, GREEN, BLUE, None}
This means functions that map `Maybe x -> y` can accept values of type `x` without issue.

But in Haskell (and OCaml) it's not. What we actually have is something more like

  Maybe x = {Some(RED), Some(GREEN), Some(BLUE), None}
which is a totally different set from `x`, which is why functions which shouldn't break, ultimately do.
dwohnitmok
Yep that's why as Hickey says people are excited about union types. But in the context of a refactor or a change to an API where you care about breaking existing users, there's a perfectly valid way forward.

Basically I don't think Option types are as unwieldy as Hickey is making them out to be. Indeed Dotty (one of the examples in the talk) will be keeping Option even in the presence of union types (mainly because union types fall down with nested Options and with polymorphism).

That being said Haskell (and Scala) is annoying when it comes to trying to combine different kinds of errors together, if you want an error type more expressive than Option.

OCaml here (and Purescript) have a much better story with polymorphic variants.

ernst_klim
In OCaml you have polymorphic sum and product types, so the function

    val f : [`Red | `Green | `Blue | `None ] -> y
Would accept a value of type [`Red | `Green | `Blue ] quite fine.

Though I'd still prefer a Haskell false positive to a Clojure false negative. Also, if something is considered a string, I expect it to be any string but NULL.

tel
The downside of that (which can be provided by union types) is that `Maybe (Maybe X) == Maybe X` which may or may not be what you're after. The particular weakness is the interplay with generics where `forall t. Maybe t` depends upon the `Maybe` and the `t` not interacting. If you've got `forall t. t | null` then this type may secretly be the same as `t`---and if you handle nulls then it might conflate meanings unintentionally.
ndh2
So you're saying that the one Nothing has a different meaning than the other Nothing, and that this is something that anybody would want? Or that we have a Nothing and a Just Nothing? I'm sorry, but I don't get how that would be considered good design in any context. Do you have any example where this would be considered sensible?
tel
I am.

In any concrete context, Maybe (Maybe A) could _probably_ be simplified to just Maybe A as we expect. Alternatively, we could be in a situation where there are two notions of failure (represented here as Nothing and Just Nothing), in which case we'd be better off simplifying to Either Error A, where Error covers your multiplicity of error cases.

But while these are all obvious in a concrete context, what we often are doing is instead compositional. If I want to write code which works over (Maybe a) for some unknown, library-user-declared type `a` then I may very well end up with a Maybe (Maybe Int) that I can't (and mustn't) collapse.

As a concrete example, consider the type of, say, a JSON parser

    ParserOf a = Json -> Maybe a
We hold the notion of failure internal to this type so that we can write

    fallback : ParserOf a -> ParserOf a -> ParserOf a
which tries the first parser and falls back to the second if the first results in error. We might also want to capture these errors "in user land" with a combinator like

    catch : ParserOf a -> ParserOf (Maybe a)
If we unwrap the return type of `catch` we get a Maybe (Maybe a) out of a compositional context (we can't collapse it). Additionally, the two forms of failure are distinct: one is a "handled" failure and the other is an unhandled failure.
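
A minimal, self-contained Haskell sketch of these combinators, with `Json` reduced to a stand-in newtype (a sketch under those assumptions, not code from any actual parsing library):

    -- A stand-in for a real JSON representation.
    newtype Json = Json String

    newtype ParserOf a = ParserOf { runParser :: Json -> Maybe a }

    -- Try the first parser; if it fails, try the second.
    fallback :: ParserOf a -> ParserOf a -> ParserOf a
    fallback (ParserOf p) (ParserOf q) = ParserOf go
      where
        go json = case p json of
          Just a  -> Just a
          Nothing -> q json

    -- Surface the inner parser's failure to the caller. Running `catch p`
    -- yields Maybe (Maybe a): Just Nothing ("p failed and we caught it")
    -- is distinct from Just (Just a) ("p succeeded"), so the nesting
    -- cannot be collapsed.
    catch :: ParserOf a -> ParserOf (Maybe a)
    catch (ParserOf p) = ParserOf (\json -> Just (p json))
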
JadeNB
> In any concrete context, Maybe (Maybe A) could _probably_ be simplified to just Maybe A as we expect.

And doing so is just what `join :: m (m a) -> m a` specialises to for `m = Maybe`!

tel
That's just one way to do it. Another might be

    collect :: Maybe (Maybe a) -> Either[Bool, a]
JadeNB
But you mentioned simplifying `Maybe (Maybe a)` to `Maybe a`, which your `collect` doesn't do (at least not directly).

(Also, shouldn't `Either[Bool, a]` (which I don't know how to make sense of) be `Either Bool a`? Even with this signature, I'm not sure what the implementation would be, and Hoogle doesn't turn up any obviously correct results ( https://www.haskell.org/hoogle/?hoogle=collect https://www.haskell.org/hoogle/?hoogle=Maybe+%28Maybe+a%29+-... ), but that's probably my fault.)

dllthomas
I think the obvious implementation is `maybe (Left False) (maybe (Left True) Right)`

If we have a value then we clearly have both layers. If we don't have a value then we need to distinguish Nothing from Just Nothing by way of the Bool.
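
Spelled out as a complete definition, with the `Either Bool a` signature discussed above (a minimal sketch):

    collect :: Maybe (Maybe a) -> Either Bool a
    collect = maybe (Left False) (maybe (Left True) Right)

    -- collect Nothing          == Left False   (outer layer missing)
    -- collect (Just Nothing)   == Left True    (inner layer missing)
    -- collect (Just (Just x))  == Right x      (both layers present)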

JadeNB
Oh, I see; we're thinking of `Maybe a` as `a + ()`, and then identifying `(a + ()) + ()` with `a + (() + ())` and then `() + ()` with `Bool`. Thanks!
dllthomas
Right, exactly!
dllthomas
Similarly, a polymorphic function may want to put values of an unknown type in a Map or MVar. From the outside it may be completely reasonable to call that function on Maybe Foo, which would mean a Maybe (Maybe Foo) somewhere internally and I would struggle to call that bad design.
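
A minimal sketch of that situation, using an invented `Slot` wrapper around an MVar (the names below are not from the original comment):

    import Control.Concurrent.MVar (MVar, tryReadMVar)

    -- A generic single-slot container over an unknown type a.
    newtype Slot a = Slot (MVar a)

    -- tryReadMVar returns Nothing when the slot is empty.
    peek :: Slot a -> IO (Maybe a)
    peek (Slot v) = tryReadMVar v

    -- A caller is free to pick a = Maybe Foo, so peek then yields
    -- IO (Maybe (Maybe Foo)):
    --   Nothing       -> the slot is empty
    --   Just Nothing  -> the slot holds the user's Nothing
    --   Just (Just x) -> the slot holds a real value
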
lostmsu
Imagine you're working on a compiler. You need to represent the compile-time computed value of an expression of type Maybe Int (e.g. you are precomputing nullable integers).

You see 1 + null. So you have add: Maybe Int -> Maybe Int -> Maybe Int, which takes two precomputed values and returns a new precomputed value for the result of the operation.

However, you can't precompute Console.readInt().

For a given expression, you can either compute its value at compile time, or not.

What is the output type of compileTimeCompute: Expr -> ???

ndh2
I don't understand your example. What does compile-time computed stuff have to do with readInt()?

I get that it might be possible to do that, use a Maybe (Maybe T). But it's like an optional<bool> in C++. It can be done, it's just not a good idea. So if you design your system not to allow that in the first place, nothing of value was lost.

If you have specific error cases that you want to communicate, like "what was read from the console didn't parse as an int" as opposed to "the computation didn't find a result", then using the two values "Nothing" and "Just Nothing" as the two distinct values to encode that is not a sound design. Either you have meaning, or you have Nothing. Nothing shouldn't have any meaning attached to it.

lostmsu
> "what was read from the console didn't parse as an int"

I meant that what you read from the console cannot be computed at compile time.
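
A hedged sketch of the compiler example above, with `Expr` and its constructors invented purely for illustration:

    data Expr
      = Lit (Maybe Int)  -- a nullable integer literal (1, null, ...)
      | Add Expr Expr    -- addition over nullable integers
      | ReadInt          -- Console.readInt(): unknown until runtime

    -- Adding two precomputed nullable values: null propagates.
    addNullable :: Maybe Int -> Maybe Int -> Maybe Int
    addNullable a b = (+) <$> a <*> b

    -- Outer Maybe: could the expression be evaluated at compile time at all?
    -- Inner Maybe: the nullable value it evaluated to.
    compileTimeCompute :: Expr -> Maybe (Maybe Int)
    compileTimeCompute (Lit v)   = Just v
    compileTimeCompute (Add l r) = addNullable <$> compileTimeCompute l
                                               <*> compileTimeCompute r
    compileTimeCompute ReadInt   = Nothing
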

Skeime
Having a situation where one handles both `Nothing` and `Just Nothing` in the same context should be rare.

But you might be writing some functions on an abstract data structure with a type parameter `a` (say it's a graph and users can tag the vertices with values of type `a`). And (maybe just internally) there are situations in your algorithms where a value might be absent, so you use `Maybe a`. If `Nothing == Just Nothing`, your users can't use a Maybe type for `a` anymore, because your algorithm wouldn't be able to distinguish its `Nothing`s from the user's.
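
A minimal sketch of that graph scenario; `Graph` and `lookupTag` are invented names, backed here by a plain Map:

    import qualified Data.Map as Map

    -- Vertices are identified by Int and tagged with values of type a.
    newtype Graph a = Graph (Map.Map Int a)

    -- The library's own "tag might be absent" layer.
    lookupTag :: Int -> Graph a -> Maybe a
    lookupTag v (Graph tags) = Map.lookup v tags

    -- With a = Maybe String, the two layers stay distinct:
    --   lookupTag 3 userGraph == Nothing              (no tag at all)
    --   lookupTag 1 userGraph == Just Nothing         (tagged with the user's Nothing)
    --   lookupTag 2 userGraph == Just (Just "label")  (tagged with an actual string)
    userGraph :: Graph (Maybe String)
    userGraph = Graph (Map.fromList [(1, Nothing), (2, Just "label")])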

Avi-D-coder
Could the title be changed to include "- Rich Hickey"? I know I am certainly more likely to click the link when it's one of his talks.
ndh2
Thanks dang
kgwxd
It did originally. It is the full title and, without it, it's very vague. Maybe if I had remembered to put [video] in there, the mods wouldn't have touched it. Or maybe not.
oneeyedpigeon
It's pretty meaningless currently ("Maybe Not [video]"). It would be nice to know, at the very least, that it's a video of a talk, but info. about the actual content of the talk would be preferable.
openfuture
Great talk. I'm really excited to see how he is going to solve imports and reification for Clojure [so far only Rust and Haskell have this figured out].

I'm just so glad that Clojure exists. It is doing all the good design things to make programming pleasant, but it has a different approach from the usual suspects, which is incredibly valuable!

Being dynamic and interactive is great for UI/UX design, since that especially benefits from the tight feedback loop. I'm becoming more and more bullish on making the type system separate like this and thinking of it as synonymous with tests; it makes a lot of sense (separation of concerns...).

Also, coming from mathematics, I can see an analogy between the preference for working in open sets (so that you can always take a point arbitrarily close to the edge and still make an open neighborhood around it) and what he is calling an open system. This preference for open sets over dense ones seems counterintuitive at first sight but is the basis on which analysis is built. Type systems feel a bit 'dense' in the sense that they force you to write programs that are overspecified.

Anyway, I just look forward to seeing experience accumulate with these different technologies so that we may learn more about how we ought to design systems!

boogiewoogie
"I got six maybe sheep in my truck."
logistark
I have this strange feeling that the first half hour was totally unnecessary for what the second half is. I mean, it feels like he always has to rant about something. Like: OK, these ideas are wrong, this is my idea, it's totally new (not really), and it's the right thing to do. Nowadays ranting about Java is not so cool, so he has to rant about types. Only to finish by saying that he is going to add to Spec something that has already been implemented in Clojure: https://github.com/plumatic/schema. So at the end of the talk, what's the point?
boogiewoogie
Punchline of the talk starts at 38:00
karmakaze
I'm really excited about the way this is progressing. One way to think of it (and not to demote it in any way) is that it's like PropTypes in React, done with reusability and less verbosity. The reference to GraphQL really makes it clear that we want deep, separated specification.
chubot
Great talk! I’ve watched a lot of Hickey’s talks over the years, and thought he might have "run out of steam" by now. But there are some new insights here that I enjoyed.

One of his main points is that Maybe is context-dependent, and other aspects of schemas are as well. This is a great and underappreciated point.

It’s related to a subtle but important point that Google learned the hard way over many years with respect to protocol buffers. There’s an internal wiki page that says it better, but this argument has spilled out onto the Internet:

https://stackoverflow.com/questions/31801257/why-required-an...

In proto3, they removed the ability to specify whether a field is optional. People say that the reason is “backward compatibility”, which is true, but I think Hickey’s analysis actually gets to the core of the issue.

The issue is that the shape/schema of data is an idea that can be reused across multiple contexts, while optional/required is context-specific. They are two separate things conflated by type systems when you use constructs like Maybe.

When you are threading protocol buffers through long chains of servers written and deployed at different times (and there are many of these core distributed types at Google), then you'll start to appreciate why this is an important issue.

(copy of lobste.rs comment)
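
One way to make the shape-versus-requiredness point concrete, as a hedged sketch (the `Contact` record and per-context validators below are invented for illustration, not how protocol buffers actually work):

    import Data.Maybe (isJust)

    -- The record declares only the shape; each context decides what it requires.
    data Contact = Contact
      { name  :: Maybe String
      , email :: Maybe String
      , phone :: Maybe String
      }

    -- Context 1: signing up requires name and email.
    validForSignup :: Contact -> Bool
    validForSignup c = isJust (name c) && isJust (email c)

    -- Context 2: sending an SMS only requires a phone number.
    validForSms :: Contact -> Bool
    validForSms c = isJust (phone c)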

anon1253
I really like this idea of composing selectable attributes as schemas (à la RDF). I've been fairly reluctant to adopt specs in my own code, mostly because in a lot of cases I just didn't see the benefit (not for the things I typically do; I'm not saying I wasn't interested in generative testing and the like ... those still seem very promising but also hard to get right). Especially when dealing with "distributed" things (like the world wide web, where everything is a maybe and only some things are useful). And RDF got this surprisingly right. I was sort of half expecting him to also introduce Prolog into it, but maybe next time. The idea of collections of subject-predicate-object statements forming an isomorphism with graphs, with sets of these statements forming concrete "aggregates", plays extremely well with the idea of unification through search.

Somewhat tangentially, it also keeps reminding me of Alfred North Whitehead's concept of reality as process flows https://en.wikipedia.org/wiki/Alfred_North_Whitehead#Whitehe...

"In summary, Whitehead rejects the idea of separate and unchanging bits of matter as the most basic building blocks of reality, in favor of the idea of reality as interrelated events in process. He conceives of reality as composed of processes of dynamic "becoming" rather than static "being", emphasizing that all physical things change and evolve, and that changeless "essences" such as matter are mere abstractions from the interrelated events that are the final real things that make up the world."

ISO-morphism
Here's Rich Hickey giving a talk in 2009 guided by Whitehead quotes [1]

[1] https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hi...

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.