HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Deconstructing Functional Programming

Gilad Bracha · InfoQ · 116 HN points · 12 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Gilad Bracha's video "Deconstructing Functional Programming".
Watch on InfoQ
InfoQ Summary
Gilad Bracha explains how to distinguish FP hype from reality and to apply key ideas of FP in non-FP languages, separating the good parts of FP from its unnecessary cultural baggage.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
May 20, 2020 · discreteevent on Welcome to C# 9.0
> these features are grafts from functional programming

Except that Smalltalk - for which the term "object oriented" was invented - had them.

https://www.infoq.com/presentations/functional-pros-cons/

Yeah, this is mostly OO code, and good OO code tends to be not all that dissimilar to good FP code.

You might find Gilad Bracha's talk interesting:

https://www.infoq.com/presentations/functional-pros-cons/

Dec 12, 2019 · 2 points, 0 comments · submitted by mpweiher
How do you deal with state in a language that would prefer to be stateless?

Encapsulation? No one gets direct access to the state. Instead, there are methods or functions for dealing with the state indirectly, crafted to protect those outside.

Answer: wrap the entire external universe and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (IO monad)
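That "pass the universe along" idea can be sketched in a few lines of Haskell. This is a toy model with made-up names (`World`, `Action`, `andThen`, `send`), not GHC's actual IO machinery; the "world" here is just a log of network writes:

```haskell
-- Toy model: an action transforms the world and yields a result.
-- (Illustrative only; real IO hides the world value from the program.)
type World = [String]

newtype Action a = Action { runAction :: World -> (a, World) }

-- Chain actions: run the first, feed the updated world to the next.
-- This plays the role of the monad's bind.
andThen :: Action a -> (a -> Action b) -> Action b
andThen (Action f) k = Action $ \w ->
  let (x, w') = f w
  in runAction (k x) w'

-- "Write some bytes to the network": the result is (), the effect
-- shows up as a change to the world.
send :: String -> Action ()
send bytes = Action $ \w -> ((), w ++ [bytes])
```

Running `send "ping"` chained with `send "pong"` against an empty world returns exactly "universe, but with some bytes written to the network": `((), ["ping","pong"])`.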

Sounds like "Outside the Asylum" from the Hitchhiker's Guide to the Galaxy universe. Basically, someone decided the world had gone mad, so he constructed an inside-out house to be an asylum for it.

http://outside-the-asylum.net/

"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you." How is a Monad anything different than a particular kind of object wrapper?

https://www.infoq.com/presentations/functional-pros-cons/

james_s_tayler
>How is a Monad anything different than a particular kind of object wrapper?

It _is_ a particular type of wrapper object. That's where the whole "in the category of endofunctors" comes in. An endofunctor is a functor from a category to itself.

You have IEnumerable<SomeObject> and that lets you do SelectMany to flatmap some internal nested set of objects down to IEnumerable<SomeOtherObject>.

The shape of what you get back doesn't change. You get back an IEnumerable of something. That has a specific contract on which you can do specific operations regardless of the object it is wrapping.
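The same contract exists on Haskell lists, where `>>=` plays SelectMany's role; a small sketch:

```haskell
-- >>= on lists maps and flattens one level; the result is still a list.
flattened :: [Int]
flattened = [[1, 2], [3, 4]] >>= id        -- [1,2,3,4]

-- Mapping to a different element type keeps the same shape: a list.
lengths :: [Int]
lengths = [["a", "bb"], ["ccc"]] >>= map length   -- [1,2,3]
```

Whatever the element type, you get back a list with the same operations available on it.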

The other piece of the puzzle is that it is monoidal. A monoid is just a collection of objects + a way of combining them + a 'no-op'. This is usually worded something like "a set of objects with an associative binary operator and an identity".

The classic definition "a monad is just a monoid in the category of endofunctors" is worth picking apart piece by piece. But it's also utterly useless because you have to spend quite awhile picking it apart and then looking at concretions of every one of the individual abstractions to understand what the hell each part individually looks like and then put it back together in your mind.

That definition is classically used as a joke because it's so terse you have no hope of understanding it without a lot of study, yet at the same time it's so precise it's all the information you need!

charmonium
What exactly do you mean by stateless? Encapsulation in the OOP sense is not stateless, because two identical method-calls may not return the same value. Example: if you have an ArrayList, calling `size` at one point in time might have a different result than calling `size` now. That's why I say the list 'remembers' its 'state'.

An object wrapper has to have the object somewhere in memory, you just can't touch it. With a monad, the object might not even exist yet (example: Promise) or there might be more than one (example: Collections).

Quekid5
The person you're replying to is talking about handling mutable state in a "stateless" way. That's a big distinction.
stcredzero
The person you're replying to is talking about handling mutable state in a "stateless" way. That's a big distinction.

How so? Why is it such a big distinction? Why isn't that just encapsulation? "Handling mutable state in a 'stateless' way" is basically just Smalltalk style Object Oriented programming. (As opposed to Java or C++, which has some differences.)

diegoperini
It is encapsulation with the mandatory law that an encapsulation of an encapsulation must be no deeper than a single encapsulation.

Note that this informal statement doesn't necessarily mean you have to be encapsulating data. A behavior, a contract, an assertion, compositions of all of these etc can also be encapsulated.

Fun ideas (not necessarily true but fun to think about):

* Monads kinda convert nesting to concatenations.

* A monad is the leakiest of the abstractions. You are always a single level of indirection away from the deepest entity encapsulated within.

* What's common among brackets, curly braces and parentheses is that they're monadic if you remove them from their final construction while keeping the commas or semicolons.

Very important note: You should have already stopped reading by now if you got angry due to these false analogies.

atombender
Any references that explain the "converts nesting to concatenation" idea? I find it fascinating, in particular because I write a lot of code that works on deeply nested data structures -- structs of values (which can be structs) or lists of values. The distinction between struct, list and value and the need to treat them differently in code is interesting and annoying, and goes beyond merely working with functors and applicatives. I don't understand lenses at all, but I understand monads.
diegoperini
Do ctrl-f for "Nested Operator Expressions" in this piece: https://martinfowler.com/articles/collection-pipeline/

Some other references helped me along the way:

* http://www.lihaoyi.com/post/WhatsFunctionalProgrammingAllAbo...

* http://learnyouahaskell.com/chapters

atombender
Reading a little more about it, I think the concatenation idea more properly fits with the join operator, which acts like a flatten function.
charmonium
A few observations which might also be false.

We can have arbitrarily nested monads:

  monadicObj.bind((T value) => monad.wrap(monad.wrap(value)));
Remember, `bind` only unwraps one layer. If it didn't unwrap that one layer, programs would keep accumulating large stacks of monads in monads.

I would also point out that it only collapses abstractions of the same kind; Maybe's bind only unwraps one layer of Maybe's. If you have a Promise<Maybe<Foo>> where Foo contains potentially more monads as instance-variables, those don't all get collapsed.
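Both points can be checked directly with Haskell's `join`, the one-layer collapse; a small sketch:

```haskell
import Control.Monad (join)

-- join collapses exactly one layer of the same monad.
oneLess :: Maybe (Maybe Int)
oneLess = join (Just (Just (Just 3)))   -- Just (Just 3): still nested

flat :: Maybe Int
flat = join (Just (Just 3))             -- Just 3

-- It only collapses layers of the same kind: the inner Maybes here
-- are untouched when the outer list layer is joined.
outer :: [Maybe Int]
outer = join [[Just 1], [Nothing]]      -- [Just 1, Nothing]
```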

I like the 'converting nesting to concatenating' observation.

Sometimes we do need parentheses though, because some operations are not associative: 5 - 2 - 1 is not the same as 5 - (2 - 1). Basically, minus does not form a monoid, so the parens matter.
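That contrast is easy to check directly; a small sketch:

```haskell
-- Addition is associative (it forms a monoid with identity 0),
-- so grouping doesn't matter:
addAssoc :: Bool
addAssoc = (5 + 2) + 1 == 5 + (2 + 1)   -- True

-- Subtraction is not associative, so the parens carry information:
subAssoc :: Bool
subAssoc = (5 - 2) - 1 == 5 - (2 - 1)   -- False: 2 vs 4
```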

Double_Cast
The value of monads is that they fold side-effects into the return values.

  dirty languages: 

  input -> function_a -> output/input -> function_b -> output 
               ^                             ^
               |                             |
          side_effect_a                 side_effect_b
               |                             |
               v                             v
          lovecraftian_primordial_soup_of_global_state

  pure languages: 

  input -> function_a -> output/input -> function_b -> output 
               ^                             ^
               |                             |
          side_effect_a                 side_effect_b ------>
               |
               +-------------------------------------------->
If C++ were pure, the type-signatures would look like

  (output, side_effect_a) function_a(input);

  (output, side_effect_b) function_b(input);
The drawback is that the type-signature of function_b(function_a()) becomes complex. Now, function_b needs to accept and pass-on the upstream side-effects. To compose function_a and function_b, we need to convert the type-signature of function_b to

  (output, side_effect_b, side_effect_a) function_b(input, side_effect_a); 
Fortunately, ">>=" converts function_b under the hood. Which allows us to write

  function_a() >>= function_b() >>= function_c >>= function_d
and pretend that each ">>=" is just a ";" without wrestling with compound inputs and compound returns.
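Here's a hand-rolled Haskell sketch of that idea, with side effects folded into return values as a log. `Effectful`, `andThenE` and the function names are made up for illustration, standing in for the Writer monad and `>>=`:

```haskell
-- Each function returns its output plus the effects it produced.
type Effectful a = (a, [String])

functionA :: Int -> Effectful Int
functionA x = (x + 1, ["a wrote to the network"])

functionB :: Int -> Effectful Int
functionB x = (x * 2, ["b wrote to the network"])

-- The analogue of >>= : thread the value through, concatenate the
-- effects, so callers never wrestle with compound returns by hand.
andThenE :: Effectful a -> (a -> Effectful b) -> Effectful b
andThenE (x, es) f = let (y, es') = f x in (y, es ++ es')

pipeline :: Int -> Effectful Int
pipeline x = functionA x `andThenE` functionB
```

`pipeline 3` yields `(8, ["a wrote to the network", "b wrote to the network"])`: the output of the composition plus both upstream side effects, without function_b's signature ever mentioning them.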
rhizome
>A monad is a type that wraps an object of another type.

So, the Adapter Pattern, but for types?

Double_Cast
Monadic is a type-class. Like how Equatable is a type-class. Adapters essentially add a specific type to the type-class the client is looking for.
mrkeen
1) Adding getters and setters does not make a program stateless. Your race-conditions and side-effects just take more steps.

2) "A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you." Your objection to this quote is right, because the quote is wrong.

I have not come across objects in FP

Are there things which effectively encapsulate a value?

https://www.infoq.com/presentations/functional-pros-cons/

> At the end, it's all Category Theory.

Hmm. I thought it was all binary machine instructions. Or was it all NAND gates? Or everything is an object? Or everything is a list? Everything is a file?

Reductionist revelations are fun, and they also have some utility. But we have to be careful, because we tend to latch on to one of them and think that this is the one to rule them all - with different instances of "this" depending on person and time.

In reality it is all much more muddled:

https://www.infoq.com/presentations/functional-pros-cons/

> Whatever we put on top of the math to help us reason

The math is another one of those magic sprinkles you put on top of the NAND gates. If that's the one that gets you going, good for you! And it certainly has its uses. But don't get carried away.

> half-assed essays everyday ... "I learned OO and now I think of everything as objects"

Funny, I don't see any of those these days, maybe you're thinking of the 90s? Instead, I see a ton of blog posts claiming that "everything is category theory". ¯\_(ツ)_/¯. Very often, they then take an interesting problem, claim that it's really this other problem solved elegantly by FP. By stripping away all the properties of the problem that made it interesting and useful. But at least now it typechecks. Sigh.

Anyway...

> Love me some abstraction levels -- as long as they actually abstract things.

Yes. This is important. Really, really important. In my current thinking, there are three levels of this: just indirection, compression, and actual abstraction.

A lot of people think they've contributed abstraction when they're really just adding indirection. When you find an abstraction, it really is magic, but those are fairly rare. So you typically want to keep it as direct as possible, while trying to compress (trading off with directness as appropriate). The compression can point you towards an abstraction:

Refactoring Towards Language

https://blog.metaobject.com/2018/11/refactoring-towards-lang...

Another point is that the point of OO is not the program, it is the system created by the program. That is at the same time its greatest strength and its greatest weakness, because it means that in order to build those systems, the program that constructs the system has to be based on "side effects". But that's really a limitation of our call/return architectural pattern:

Thesis: Architecture Oriented Programming is what Object Oriented Programming wanted to be. It talks about instances and systems rather than classes and programs. It also lays a solid foundation for meta programming (connectors). -- https://twitter.com/mpweiher/status/1100283251531411457

So the systems are great (or can be, if you have a great systems builder), but the way we are forced to build them is pretty bad. FP tends to recognise the latter but not the former. See also:

Why Architecture Oriented Programming Matters

https://blog.metaobject.com/2019/02/why-architecture-oriente...

Nov 25, 2017 · 3 points, 0 comments · submitted by mpweiher
Feb 09, 2017 · 2 points, 0 comments · submitted by mpweiher
Deconstructing Functional Programming by Gilad Bracha:

https://www.infoq.com/presentations/functional-pros-cons

hans
this is always a top one for me, he presents so well and kind of nails down the concepts.

but even more so, he sounds like the architect in the matrix having dialogue with some critics.

Aug 27, 2016 · joostdevries on Typelevel Scala
That Elm talk is really interesting. Tx.

When it comes to Monad et al I'm reminded of this talk by Gilad Bracha. He points out that the function names and type names that stem from category theory are not very good at communicating their intent in programming (my words). Monad could be named Flatmappable, for instance; 'unit' and 'bind' could have more intuitive names as well.

At the moment I get the impression that as Scala is becoming a safe and productive and widely used tool there are developers moving to more Haskell like pastures within Scala. Perhaps because that kind of programming is required for some specific problems. But quite likely as well because they hope that it will make them stand out as smarter programmers.

Some people may remember the so-called "Peter principle". Informally it states that people get promoted until they've reached their level of incompetence.

Similarly, some smart developers tend to increase the complexity of their code to keep themselves amused until they're struggling to understand it themselves. Joost's principle. :-)

Edit: the link to the talk https://www.infoq.com/presentations/functional-pros-cons

akst
> He points out that the function names and type names that stem from category theory are not very good at communicating their intent in programming

This was my point exactly, also thanks for the link, I'll have to check out that talk :-)

akst
I actually remember seeing it at some point, this guy is a pretty great presenter, glad I watched it, thanks again for the link. Now I remember I need to go and check out Smalltalk to see what he's going on about.
> and you wouldn't use it in your code

Speak for yourself. I don't see any value in not using a tool when it's available and seems to do the job:

http://science.raphael.poss.name/haskell-for-ocaml-programme...

Personally I don't believe in 100% pure FP, it's both a pain in the ass and doesn't really give you more than 85% FP already would. Actually, most of the FP world seems to agree: out of many Lisps, Scala, OCaml, SML, F#, Erlang/Elixir etc. only Haskell is that hell-bent on purity. Even things like PureScript are more pragmatic than Haskell in this regard.

Here's a great talk on getting what's best from FP without also taking all the problems: http://www.infoq.com/presentations/functional-pros-cons

tome
> Even things like PureScript are more pragmatic than Haskell in this regard

Interesting. In what way?

TazeTSchnitzel
> Personally I don't believe in 100% pure FP, it's both a pain in the ass and doesn't really give you more than 85% FP already would.

How so? Having a pure language has some significant benefits, like laziness. Data structures as control flow is awesome.

Apr 01, 2015 · 1 points, 0 comments · submitted by mpweiher
I think most monad tutorials would be quite ok if they didn't use words like "monad", "bind", "return" and so on. It's just stupid to insist on using names which mean nothing, or which, when they do mean something (return), mean the opposite of what they're supposed to do here. And no, there are no upsides to this: you math-inclined people may think so because you were trained to think that way, but it's simply not true.

I suspected this for a long time, but recently I saw a talk from Gilad Bracha - http://www.infoq.com/presentations/functional-pros-cons - which convinced me that I was right. Scroll to ~22 minutes into the talk for relevant section.

Ok, I finally said it. I feel better now, thanks ;)

saryant
I don't know if it was intentional, but most of the intro literature around Scala seems to avoid terms like monad, just introducing the reader to Option, Future, etc., and letting the reader notice the common patterns (map, flatMap).

Eventually, the reader learns about monads without ever explicitly learning about monads.

dllthomas
Watched the video. It was interesting. Gilad Bracha certainly knows his stuff. However, his grasp of the stuff he's criticizing is less thorough.

He does seem to have a reasonable grasp of the what, but is shaky enough on the why that he's presented a few straw men.

Regarding Hindley-Milner type inference, I believe that there have even been advocates that have gone from "you don't need to specify a type anywhere" to "you shouldn't specify a type anywhere". That is not how practitioners use these languages these days. That you need redundancy for error correction (an accurate and important point) does not mean that your particular level of redundancy is best. Inference allows you to specify types where they will make things clearer, and lets the compiler fill in the gaps. It also lets you ask questions of your interpreter about what shape of thing will fit in a given hole. Even before GHC added TypedHoles, you could open GHCi and say :t (\ x -> some expression involving x) and get back the type of x as the first argument.
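A small Haskell sketch of that workflow: annotate where a signature documents intent, and let inference fill in the rest (the function names here are illustrative):

```haskell
-- Annotated at the top level, where the signature documents intent:
pairwiseSums :: [Int] -> [Int]
pairwiseSums xs = zipWith (+) xs (drop 1 xs)

-- The local helper carries no annotation; the compiler infers its
-- type, yet the whole program stays fully statically typed.
sumOfPairs :: [Int] -> Int
sumOfPairs xs = total (pairwiseSums xs)
  where total = foldr (+) 0
```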

There's not enough room in this margin to fit the rest...

ky3
Yes, 'return' is egregious. I believe it was co-designed with Haskell's do-notation, so that you could write purely declarative code that was a spitting image of the imperative equivalent.

E.g.

    x <- foo 2      -- x = foo(2)
    y <- bar 4      -- y = bar(4)
    return (x+y)    -- return (x+y)
But something that's always missing in such discussions is that haskell allows you to define

    a_better_word = return
and then you can freely forget about 'return' in your own code, except when you roll your own monad instances, which is rarer than a blue moon.
dllthomas
With AMP, it should always be possible to use pure in place of return.

https://www.haskell.org/haskellwiki/Functor-Applicative-Mona...

dllthomas
While I generally disagree with you here, I actually do agree about "return". Hopefully we can deprecate it in favor of pure now that all Monads must be Applicatives.

My broader disagreement is that these names do mean something - they mean the same thing they do in math. Using the same name makes it easier for me to find more relevant things, and it can be tremendously useful to be able pull over results and intuitions from other contexts. There's no good reason to demand an additional translation step when we're talking about the same things.

bad_user
I believe Monad is fine as a term, because you cannot describe it based on resemblance with real-world objects, as more conventional OOP design-patterns are doing. And I think that the name of a design pattern should be first of all unique and easy to remember and not necessarily descriptive, because the whole point of coming up with a design pattern is to grow the vocabulary of those using it and naming clashes are bad.

I do prefer "flatMap" to "bind", as practiced in say Scala, because it's a direct reference to this property: "m.flatMap(f) == m.map(f).flatten"
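That property is straightforward to check in Haskell, where `join` plays flatten's role and `>>=` plays flatMap's; a small sketch with an illustrative `f`:

```haskell
import Control.Monad (join)

-- flatMap (>>=) is "map, then flatten (join)".
f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 10) else Nothing

lawHolds :: Maybe Int -> Bool
lawHolds m = (m >>= f) == join (fmap f m)
```

`lawHolds` comes out True for `Just 5`, `Just (-1)` and `Nothing` alike.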

dllthomas
I don't know enough Scala... "flatten" seems to be equivalent to "join" from Haskell/math - is "flatten" specialized to lists? For that matter, is "map"?
bad_user
What do you mean by specialized?

Don't know the equivalent of flatten in Haskell, but if you want a signature then it is something like:

   def flatten(m: M[M[A]]): M[A]
Or another way of thinking about it ...

   flatten(m) == flatMap(m)(x => x)
So basically, in order to define a monad, implementing flatMap is equivalent to implementing flatten. And I think that "flatten" as a verb is very intuitive, although intuitiveness is subjective since it's tightly related to familiarity - but flatten has been used in many languages as an operation available for collections and it works for monads in general just as well.
dllthomas
Specialized in the sense of a more restricted type. From your answer, it seems like "no" - though my Scala is superficial and rusty.

The Haskell equivalent is join :: Monad m => m (m a) -> m a

We also have the equivalent specialized to lists: concat :: [[a]] -> [a]

And as you say, join = (>>= id).

Incidentally, in math that function is the "mu" in the triple representing a monad.

abathologist
I find that talk really painful. I do not recommend it. It is mean spirited, full of misleading characterizations, and - I think - it misses the core point of functional programming. Gary Verhaegen's comment "this was a waste of time" addresses the many problems effectively. Brian Craft's "ugh" addresses others. Either of those comments is more worthwhile than the video itself.
klibertp
> I find that talk really painful.

I suspect it may be painful in some cases, it's somewhat like a clash of different cultures. The guy is firmly rooted in Smalltalk and Lisp schools of thinking and everything he says is rather obvious for those who share the same background. On the other hand it may be obviously false for people following ML tradition and even more so for recent converts.

Watching this talk calmed me down. It turns out that I'm not an idiot, that I know how to do FP (because I quite like it!) and that there's no need to change my ways just because some vocal minority proclaims me a heretic for preferring meaningful names or no-nonsense IO handling. On the whole I think it was an hour well-spent. YMMV ofc.

abathologist
That makes sense. Your points are well taken. Also, in retrospect, I realize that my reply wasn't helpful for the discussion at hand.
dragonwriter
None of those names "mean nothing"; they all have very specific meanings. It's true that in the case of return, the meaning in the context of Haskell monads is somewhat special, but since it's part of the actual syntax of Haskell do-notation, it's particularly important to address it in Haskell-oriented monad tutorials, whether or not it was the best choice from a syntax design perspective.
klibertp
No. You desperately insist on giving them meaning. And sometimes, which is worse, the meaning you want to give them is completely unrelated or even opposed to everything they are or do.

Aside: You realize that the word "monad" (and "dyad") already has a very specific meaning and that you assaulted that meaning and now are trying to beat the poor word into submission? If I had to choose I'd say that J programmers have much more of a right to use and define this word: a single-argument function ("mono") makes more sense to be called a "monad" than an instance of "flat-mappable" interface.

> since its part of the actual syntax of Haskell do-notation

It's not, which makes using it even dumber - as it's no problem at all to change it.

F# uses monads extensively, as one of the selling points of the language, yet the m-word is rarely if ever mentioned in the docs. That's because the designers of that language were rational enough to realize that "computation expression" is a name that both conveys some basic intuition about the thing in question and is concise enough to allow talking about the thing abstractly. From my perspective I don't see any arguments at all for persistent use of the M-word by the community of some programming language. Other than trying to be different, or something.

Believe it or not, names do matter. A good name speeds up learning/understanding considerably; a bad name slows it down. It's sometimes - rarely! - worth it to invent a completely new name. That's when the thing you want to name really is dissimilar to anything else and when re-using some name would introduce false intuitions about the object. But monads ARE NOT SUCH THINGS (sorry for shouting). "You could have invented monads", and quite possibly you did a few times already, and that probably wouldn't be the case if the idea was that ground-breaking, that unprecedented.

Ok, nevermind - I'm getting angry for no reason, I should stop it already.

dragonwriter
> You desperately insist on giving them meaning.

They have meaning in the same sense that any words have meaning -- that is, they communicate meaning between an existing group of people.

> And sometimes, which is worse, the meaning you want to give them is completely unrelated or even opposed to everything they are or do.

Like many words, their meaning in a specific technical context differs from their meaning in other contexts. This is not particularly unusual.

> Aside: You realize that the word "monad" (and "dyad") already has a very specific meaning

"Monad" has several distinct well-established meanings in different domains besides its use is Category Theory (and thus Haskell). So what?

> and that you assaulted that meaning and now are trying to beat the poor word into submission?

The meaning of "monad" in Haskell comes directly from its meaning in Category Theory, which is inspired by (though not the same as) its earlier use in mathematics stretching back to its use in metaphysics (particularly, through Leibniz's Monadology), which has its roots in its use in Classical philosophy, from whence also come all its other uses.

> If I had to choose I'd say that J programmers have much more of a right to use and define this word

Well, you don't have to or even get to choose that; words can and do have uses in different contexts and you don't get to choose one and make it the exclusive use of the word because you like it more.

> a single-argument function ("mono") makes more sense to be called a "monad" than an instance of "flat-mappable" interface.

Not really. Sure, a single argument function might have a better claim to the title "mono-argument function" but "-ad" isn't generally a suffix that means "-argument function".

klibertp
Ok, so I was rather rude unnecessarily, sorry about that. I agree with what you wrote about words:

> words can and do have uses in different contexts and you don't get to choose one and make it the exclusive use of the word because you like it more.

it's just that I think the same applies to concepts. You shouldn't be able to "own" the concept and to insist that it should be called as you want in every context. I see comments and posts about how "X or Y is just a monad!" to be exactly this: people trying to "own" a concept.

Just as both musicians, mathematicians and programmers can all use a single word for different things, different programmers should be able to use the same concept without using the same word for it.

In short (directed at random commenters on the Internet, not to you personally): stop correcting me when I say "computation expression builder"; don't say that what I "really mean" is a monad. It's not. What I "really mean" is an abstract concept, and we both know its properties, and you have no right to oppress me because I chose a different word for it.

chriswarbo
You may already be aware of this, but for those who aren't: `return` isn't actually part of do-notation syntax, it's a function name. It's true that do-notation must result in a monadic value, but that's just because of the type. `return` can be used to construct that monadic value, but there are many other ways to do so:

    func1 :: Maybe Int
    func1 = do x <- foo
               y <- bar
               return (x + y)  -- Using return

    func2 :: Maybe Int
    func2 = do x <- foo
               y <- bar
               Just (x + y)  -- Building a monadic value explicitly

    func3 :: Maybe Int
    func3 = do x <- foo
               y <- bar
               plusJust x y  -- Using some other function

    plusJust :: Int -> Int -> Maybe Int
    plusJust x y = Just (x + y)
Also, since `return` is a function, we can use it outside do-notation:

    wrapAndApply :: a -> (a -> b) -> [b]
    wrapAndApply x f = fmap f (return x)
What makes `return` "special" is that it's a method of the `Monad` typeclass. In other words, the name `return` is overloaded to work with any instance of `Monad` (`IO`, `Maybe`, `List`, etc.), depending on the type that's required of it.

Loads of other functions are overloaded like this, eg. `+` has implementations for `Int`, `Float`, etc. so it's not monad-specific.
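The overloading is visible by giving the same `return` call different result types; a small sketch:

```haskell
-- One overloaded function, three monads, resolved by the type asked for:
asMaybe :: Maybe Int
asMaybe = return 3        -- Just 3

asList :: [Int]
asList = return 3         -- [3]

asEither :: Either String Int
asEither = return 3       -- Right 3
```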

The part which is monad-specific is that Haskell's do-notation is hard-coded to the built-in Monad typeclass. We're completely free to make our own monad implementation, separate to Haskell's built-in one, but we won't be able to use do-notation with them unless we implement the built-in `Monad` typeclass as well. That could be as simple as mapping one name to the other:

    instance MyMonad a => Monad a where
        return = myReturn
        (>>=)  = myBind
dllthomas
"The part which is monad-specific is that Haskell's do-notation is hard-coded to the built-in Monad typeclass."

That's not quite true, at least in GHC. It's hard-coded to use (>>=) and fail, but the only thing that prevents you from defining your own is the name collision. With -XNoImplicitPrelude, you can (and they can even be locally scoped - I did some cute things with this once that I'd never want to see in production code...).

bunderbunder
See here: http://fsharpforfunandprofit.com/posts/recipe-part2/

I suspect that one could get through the entire thing, handily grasp all of it, and be ready to apply what was learned in practice without ever realizing this had anything to do with the dreaded M-word.

On a somewhat related note, I'm impressed by how Microsoft has their entire developer community comfortably using a few basic monads without even realizing it. I've even had some success with hijacking it. The terms they chose are explicitly coupled to the idea of querying, but I've been able to produce some implementations of .NET's version of the pattern that still produce good and readable results when you use the LINQ query syntax with them. I wouldn't say it's a perfect renaming of all the concepts, but if the goal is to make the concept intuitive enough for most users to tackle without fears then damn if it isn't an impressive 90% solution.

Aug 26, 2014 · 2 points, 0 comments · submitted by tosh
Jan 21, 2014 · auvrw on Dots and Perl
just b/c it's not a language feature doesn't mean that you can't make your own sort of thing. for instance, the play framework has a TODO result object that is indeed quite handy for stubbing out api endpoints and the like.

like the typed/untyped debate, the amount of primitives a language ought to have seems to be a topic of disagreement even among experts (touched on here http://www.infoq.com/presentations/functional-pros-cons although not the main topic). also, like type systems, primitives seem like something that, while important to consider, are easy to get hung up on, to take a stance and apply it to all languages without the consideration that different languages are good for different types of problems. my own snap judgements of perl were that its notions of context and pragmas make it easy to express one or two ideas both quickly and intuitively, but that its type system makes it difficult to reason about larger programs.

so to cap off this ramble, my question to the perl people: what problems are you solving with perl? to me, perl seems to mainly fit two use cases: sys-admin stuff (where it's basically used like bash++, although i realize it's much more powerful than that) and an embedded language for c programs.

Mithaldu
> my question to the perl people: what problems are you solving with perl?

Any problem, there are even 3d opengl games implemented in Perl.

lsiebert
Perl is used heavily in biology and bioinformatics. It's also really really good for messing with text in complicated ways. And it's still in use as a web language. So that's three more use cases.

I think you can think of it this way. ANSI C doesn't have a built in linked list, or hash/dictionary, or line count when reading a file, or map or grep or a bunch of things... and you can implement them (though map/grep are a bit tricky) but you have to worry about the implementation, and matching other people's implementation. By having the many useful primitives Perl does, it provides a broader standard interface for doing things, and that makes hacking easier.

Perl tries to give you lots of tools to express yourself. If you are a wood carver, you know that you can work wood with very limited tools, but having the right tool in your shop makes things a lot easier and simpler for the expert. So it is with Perl.

auvrw
yeah, i'm aware that perl is used by biologists, but the question is more like, "should perl be used by biologists?" no question that c is not a great choice, but what about python? or, hell, javascript, even? for the kind of programs biologists would want to write (i.e. programs that need to describe a system that performs several tasks rather than programs that need to perform one or two specific tasks), it'd be helpful to have a language that not only has data structures like arrays/lists and hashes/dictionaries but also a data model that includes things like classes as a primitive. python in particular has numpy/scipy.

perl's syntax for regexp matching definitely makes it a good choice for programs that have to do a lot of string-munging, but that's what i meant by "sys-admin-type stuff."

Mithaldu
In Perl you can do what numpy/scipy do with PDL, the Perl Data Language.

Perl is not just a string mangling language, it's a full-fledged generic programming language.

lsiebert
Classes are not primitives in either language... primitives contain a value instead of a reference. I think you mean that classes are first order, but I don't want to put words in your mouth, so perhaps you can clarify what difference you are pointing to.

I think, given that Perl is being used, the question might be, what features make javascript or Python a good reason to switch? I'd want to avoid losing features.

Perl is stable. Perl written two decades or more ago still runs fine in modern perl. That's important for data analysis, especially in the sciences.

I'll have to read up on Python ... I'd want a tie equivalent to be able to treat large files as if they were arrays/hashes (lists/dicts) without loading them into memory as such.
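
(For what it's worth, a rough Python analogue of Perl's tie-a-hash-to-a-file is the standard library's `shelve` module: it gives a dict-like object whose entries live on disk rather than in memory. A minimal, hedged sketch; the keys and values here are just illustrative:)

```python
# shelve: a dict-like object backed by a dbm file on disk,
# loosely analogous to Perl's tie for hashes.
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data")

# Values are pickled to disk on assignment, not held in memory.
with shelve.open(path) as db:
    db["gene1"] = [1, 2, 3]
    db["gene2"] = [4, 5, 6]

# Reopen later: the whole file is never loaded at once.
with shelve.open(path) as db:
    print(db["gene1"])  # [1, 2, 3]
```

(It's not a perfect equivalent: `shelve` keys must be strings, and there's no array/list variant in the stdlib.)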

auvrw
> I think you mean that classes are first order, but I don't want to put words in your mouth

please do. i'm a language user, not a designer.

> what features make javascript or Python a good reason to switch?

or, for a mostly-disinterested (in programming, i mean), entrenched group of users like biologists, this question becomes more like, "what features would necessitate a switch?" same question for plan 9 vs. linux or any technology that's trying to do a better job of filling a need that's already being met. ...and, of course, the answer is that there are none. perl has notions of object-oriented-ness. it's possible to build in your own type-checking. the backwards compatibility thing is definitely helpful for the sciences, and it's free, so, like the fortran routines that parts of numpy ultimately wrap, their code can live on.

i mentioned python and javascript not because i think they're such great languages but because i feel that they're in that position relative to perl. of course, unlike plan 9, python and javascript got a lot of users.

> treat large files as if they were arrays/hashes (lists/dicts) without loading them into memory as such.

not totally sure what you mean here. isn't that what databases are for?

anyway, for a new code base that's going to evolve a lot and possibly get scrapped entirely (i.e. most hn readers' situation), python is worth checking out.

Disagree about the idea that those who are unfamiliar with type theory prefer dynamic typing.

Typing preferences are usually due to trends in language usage having little to do with knowledge.

Plenty of java programmers use static typing without ever having to understand type theory.

But looking at the history of language designers/implementers:

Dan Friedman

Gilad Bracha http://www.infoq.com/presentations/functional-pros-cons

Guy Steele

Rich Hickey

All of these guys have worked on static languages, have a keener understanding of type theory than most, and yet they seem to promote dynamic languages at least when it comes to their pet languages.

coolsunglasses
Gilad Bracha has no idea what the hell he's talking about.

Rich hasn't worked on static languages and I'm not familiar with him having done anything in type theory. He wanted a nicer, practical Lisp first and foremost. A helpful compiler wasn't high on his list of priorities.

Guy Steele's most recent work has involved functional, statically typed programming languages: http://en.wikipedia.org/wiki/Fortress_(programming_language)

One of Friedman's most recent books http://www.ccs.neu.edu/home/matthias/BTML/ was on ML which is a statically typed, functional programming language.

The smart people that weren't using static types back in the 70s and 80s weren't using them because the statically typed languages available back then were fuckin' awful except for ML and Miranda.

We can do a lot better as programmers these days. Stop giving yourself an excuse to not learn new things.

kd0amg
FWIW, I don't think the diagram is claiming that people who are unfamiliar with type theory tend to prefer dynamic typing, rather that people who prefer dynamic typing tend to be unfamiliar with type theory.
rtfeldman
The most useful takeaway from that graph is the insight into how many type theory aficionados look at the world.
the_af
I'm not disagreeing with you (that a lot of people go with what's trendy or what they already know), but I wouldn't put Gilad Bracha in the list of knowledgeable people. I've seen the talk you are linking to and it's not very impressive... he sounds mostly whiny to me. In his own blog, when he writes about functional programming or type theory, he gets called out by the people who really know about it.
codygman
I would agree, I don't see why Gilad Bracha is on that list.
Dec 20, 2013 · 106 points, 109 comments · submitted by newgame
mafribe
I found Bracha's talk poor. That guy really has a chip on his shoulder vis-a-vis functional programming. A lot of things he said were not well thought out. Here are some examples.

- He claimed that tail recursion could be seen as the essence of functional programming. How so?

- He complained that tail recursion has problems with debugging. Well, tail recursion throws away stack information, so it should not be a surprise. You don't get better debug information in while loops either. And you can use a 'debug' flag to get the compiler to retain the debug information (at the cost of slower execution).

- His remarks about Hindley-Milner being bad are bizarre. Exactly what is his argument?

- His claims about pattern-matching are equally poor. Yes, pattern matching does some dynamic checks, and in some sense are similar to reflection. But the types constrain what you can do, removing large classes of error possibilities. Moreover, typing of patterns can give you compile-time exhaustiveness checks. Pattern matching has various other advantages, such as locally scoped names for subcomponents of the thing you are matching against, and compile-time optimisation of matching strategies.

- He also repeatedly made fun of Milner's "well-typed programs do not go wrong", implying that the statement is obvious nonsense. Had he studied Milner's "A Theory of Type Polymorphism in Programming", where the statement originated, Bracha would have learned that Milner uses a particular, technical notion of "going wrong", one that does not mean the complete absence of any errors whatsoever. In Milner's sense, well-typed programs do indeed not go wrong.

- He also criticises patterns for not being first-class citizens. Of course first-class patterns are nice, and some languages have them, but there are performance implications of having them.

- His critique of monads was focussed on something superficial, how they are named in Haskell. But the interesting question is: are monads a good abstraction to provide in a programming language? Most languages provide special cases: C has the state monad, Java has the state and exception monad etc. There are good reasons for that.

- And yes, normal programmers could have invented monads. But they didn't. Maybe there's a message in this failure?

qu1j0t3
Recursion and iteration are equivalent in expressive power, so it's not much of a stretch to call tail recursion a core concept (for more, see SICP and the lambda calculus). Recursion is the only form of iteration available in pure functional programming.

Monads are not only a good abstraction, they are essential* if we are to move away from haphazard construction. Normal programmers have invented them many times; as the truism goes, you may have even "invented" them yourself.

Remember when function pointers seemed tricky and unnecessary? Remember when closures seemed tricky and unnecessary? Yeah. One day you're going to see monads in the same way.

* - Functional programming => programming with pure functions.
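
(The "you may have invented them yourself" point can be made concrete. Repeated None-checking is the Maybe/Option monad in disguise; a hedged Python sketch, where `bind` plays the role of Haskell's `>>=` and the `users` data is invented for illustration:)

```python
# bind: apply f to value, unless value is None (short-circuit).
# This is the Maybe monad's bind, reinvented the way "normal
# programmers" do when tired of nested if-not-None checks.
def bind(value, f):
    return None if value is None else f(value)

users = {"alice": {"address": {"city": "Paris"}}}

def get_city(name):
    # Equivalent to user?.address?.city in languages with ?. syntax.
    return bind(bind(users.get(name), lambda u: u.get("address")),
                lambda a: a.get("city"))

print(get_city("alice"))  # Paris
print(get_city("bob"))    # None
```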

the_af
Indeed, I found his talk pretty poor as well. A lot of it comes down to not wanting to learn new terminology, and forgetting that a lot of "common sense" terminology from, say, Java, is also learned. I don't get more insight from "FlatMappable" than from "Monad"; in both cases I must learn about them first, and neither is intuitive without prior knowledge.

It is instructive to read Bracha's blog too, mostly for the comments where readers refute a lot of what he claims.

His argument against Hindley-Milner seems to be that "he hates it", and that type errors are sometimes hard to understand. It is true IMO that they are hard to understand (even though, like everything in programming, you get better with practice), but what is the alternative? Debugging runtime errors while on production?

He also presents Scala as a successful marriage between OOP and FP, but in reality this is a controversial issue. Some of the resistance to Scala (witnessed here in Hacker News, for example) is due to it trying to be a jack of all trades and master of none. Scala's syntax is arguably _harder to read_ than that of other FP languages.

Some of his "funny" remarks sounded mean-spirited to me. Nobody in his right mind claims that FP invented map or reduce, for example.

The only point of his talk I somewhat agree with is that language evangelists are annoying. Oh, and that "return" is poorly named.

newgame
> His argument against Hindley-Milner seems to be that "he hates it", and that type errors are sometimes hard to understand. It is true IMO that they are hard to understand (even though, like everything in programming, you get better with practice), but what is the alternative? Debugging runtime errors while on production?

He pointed out that a more nominal type system is a solution. Because when you give meaningful names to your types the error messages will become clearer and not full of long, inferred types that reveal potentially confusing or unimportant implementation details.

mafribe
Most programming languages with Damas-Hindley-Milner do not prevent you from using explicit type annotations and inventing semantically meaningful type names.

More importantly, I think the reason error messages are sparse and not meaningful in languages with Damas-Hindley-Milner is that nobody has bothered to improve the situation. And the reason nobody bothers is that it's simply not a problem in practice. Any even moderately experienced programmer can easily detect and fix typing errors as they are given in Haskell, OCaml, F#, Scala, etc.

taeric
First, thanks for all involved in getting this posted!

I'm somewhat curious on why the industry has such an aversion to simulating things in our mind. Especially when this seems to be one of the arguments employed against monads in this speech. That it basically couches something known in an odd name that is not known. Isn't this just stating that it is bad because it confuses the simulator that is the reader?

That said, the live coding aspect is something that I am just now learning from lisp with emacs. Being able to evaluate a function inline is rather nice. It is somewhat sad, as I still wish I could get a better vote in for literate programming. (Betraying my appeal to the human factor moreso than the mechanical one.)

catnaroek
Monads have nothing to do with simulating anything. They are just a commonly recurring pattern of computational contexts (more precisely, functors) that also provide two basic operations:

1. entering the context (pure :: a -> m a)
2. collapsing nested contexts into one (join :: m (m a) -> m a)

Together with some coherence laws ensuring that these operations do exactly that, no more and no less: enter the context, and collapse nested instances of it.
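
(A hedged sketch of those two operations for the list "context", in Python, with a couple of the coherence laws checked by hand; the function names mirror the Haskell ones above:)

```python
# pure :: a -> m a        put a value into a singleton list
# join :: m (m a) -> m a  flatten one level of nesting
def pure(x):
    return [x]

def join(xss):
    return [x for xs in xss for x in xs]

# Coherence law: join . pure == id
# (entering the context, then collapsing it, is a no-op)
xs = [1, 2, 3]
assert join(pure(xs)) == xs

# Coherence law: join . map pure == id
assert join([pure(x) for x in xs]) == xs

print(join([[1, 2], [3]]))  # [1, 2, 3]
```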

taeric
Did you watch the video? I'm not referring to monads simulating something. I'm referring to the observation that when reading code you are simulating its execution. My understanding of the video's complaint against monads is that the signature of monads is actually quite simple and well understood in different contexts by different names.

The video goes on to display an environment where you do not have to simulate the code in your head.

This progression seems somewhat interesting to me. As does the desire to not have to simulate code in your head.

asdasf
>This progression seems somewhat interesting to me. As does the desire to not have to simulate code in your head.

But none of that has anything to do with monads.

taeric
Ok... I think I'm getting trolled at this point.

I am taking issue with the video's critique of monads. Wherein it is claimed that monads manage to take a common and understandable behavior and make it laughably impossible to explain to people by giving it a weird name. Essentially, the problem with monads is one of it being difficult to "simulate" under the name "monad" for many individuals.

This part, I actually feel makes sense and resonates well. Simply follow the progression in the video and see how "FlatMappable" becomes less and less intuitive as it is given worse and worse names.

The part that is interesting to me, is how this then progresses into a point on how programmers should not have to simulate the code in their head. Now, I realize there is a big difference between "should not have to" and "is difficult to intuitively do so". Still seems an odd progression, though.

asdasf
>Ok... I think I'm getting trolled at this point.

If you don't want to discuss something, then don't post. You are not making any sense, and calling people trolls does not help at all.

taeric
I should have put a smiley on that, then. While feeling trolled, I highly suspect this is just a rather amusing case of poor communication.

At no point was I trying to describe or discuss monads. That is something a response to me thought I was trying to do. When referring to "simulating" a system, I was referring to where the video refers to the process of reading "dead code" in a text editor. There is a large rant on monads in the video where the argument appears to be that the problem is strictly with the name. The reason given that it takes something understood, and hides it behind non-obvious names. I extrapolated this to be that it makes the program and the idea "hard to simulate" for the coder reading the code.

bunderbunder
Great talk. Particularly the bit on the value of naming things - I rather wish he'd flogged that a bit harder.

As time goes on I'm finding it more and more frustrating to try to maintain code that relies entirely on anonymous and structural constructs without any nominal component. Yes, I do feel super-powerful when I can bang out a bunch of code really quickly by just stacking a bunch of more-or-less purely mathematical constructs on top of each other... but as the story of the Mars Climate Orbiter should teach us rather poignantly, when you're trying to engineer larger, more complex systems it turns out that meta-information is actually really useful stuff.

the_af
I'd say static typing and purity as advocated by FP are some of the tools one wants when trying to engineer larger, more complex systems.

I wasn't familiar with the Mars Climate Orbiter case, but a cursory reading suggests one of the causes was a type error (confusing newtons with pound-force).

bunderbunder
As advocated widely in the FP blogosphere... not necessarily as commonly practiced in FP programming culture, or supported by many FP languages.

For example, I strongly prefer F# to its cousin OCaml largely because F# uses nominal typing and OCaml uses structural typing. I've also got some misgivings about being overly reliant on type inference. Both structural typing and advanced type inference are admittedly incredibly convenient. What worries me is that they also seem to be incredibly convenient as ways to obfuscate the programmer's intent w/r/t types and their semantics.

the_af
I'd say not so much as advocated by the blogosphere (which can be annoying, as fans of almost anything often are), but by the people actually designing and using FP languages.

In any case, there is certainly valid criticism of FP, but Bracha's just isn't it. My impression is that the guy -- as clever as he may be in other areas -- barely understands FP, and makes disparaging remarks about things he isn't familiar with. Read his blog; every assertion he makes is shown to be incorrect or misleading by people who do understand FP, like Tony Morris or (very politely) Philip Wadler himself.

vitd
I'm just learning functional programming with Haskell, and it was great to hear him explain that learning Haskell is really hard because of the terminology. I feel a little (just a little) less stupid.

That said, he's a terrible presenter. His smarmy style was really off-putting, and his motives a little sketchy. He spends a good portion of the talk slamming just about every language in existence except for the two he works on (Dart and Newspeak). It seemed very disingenuous and I don't need another ranting nerd spouting venom about why something's not very good in that holier-than-thou tone. I would have rather had a straightforward talk showing the strengths and weaknesses than the bitter tone this had.

agentultra
This is a brilliant talk. It's getting far too easy to annoy the FP cult(ure).

As an aside, Scala is not unique in marrying a FP approach with an OO system. CL has had CLOS, IMO one of the better implementations of "OO" outside of Smalltalk, for much longer than Scala.

Definitely watch this!

asdasf
CLOS and scala have very little in common, both in the functional side and the OO side. Ocaml and F# are better examples. Can I ask what you think made this a brilliant talk? It seemed like the standard "I don't want to have to learn so I will pretend there's no reason to learn" nonsense we hear all the time.
agentultra
wrt. CLOS/Scala, indeed very little in common and I didn't intend to suggest they were similar. In recent articles that mention this idea Scala is often mentioned in the same breath as if it has exclusive domain over it. I simply meant to debunk that claim if it exists.

I thought it was brilliant because Gilad provides a humble deconstruction of common myths and claims of the FP culture. He is skeptical and I didn't find any of his conclusions to be dismissive: he walks through the reasoning behind his opinions. I certainly didn't find any point where I thought he was ignorant of the subject of which he was speaking. And if you listen to his opening remarks about "deconstruction," and his conclusion do note that he points out some FP concepts that are useful and should be exploited more. He was there to break through the hype and I think he was successful.

asdasf
>Gilad provides a humble deconstruction of common myths and claims of the FP culture

He argued with a joke from a comic and lost. Even he would laugh in your face at the notion that there was anything humble about his talk.

>He is skeptical and I didn't find any of his conclusions to be dismissive

That is precisely the opposite of reality. He doesn't even understand functional programming, he is thus not skeptical, he is dismissive.

>He was there to break through the hype and I think he was successful.

The fact that both he and you believe there is "hype" is indicative of the problem. "Hey, you should learn things and improve your skills" is not hype.

Most of what he says is outright wrong. He talks about smalltalk inventing all of this FP stuff that was in ML before smalltalk-76 "invented" them. He pretends smalltalk predates FP, except again, ML predates smalltalk-76. and smalltalk-72 didn't have the stuff he is talking about. He talks about things "FP languages can't do", but that I do all the time in haskell with no issues. He repeats the oldest most worn out fallacious arguments that have been debunked over and over, and pretends that since nobody is allowed to interrupt the talk to correct him, his arguments are correct. Everything about his talk is an example of the exact opposite of what you suggest it is. If you want someone to convincingly lie to you about how FP isn't all that, look to Erik Meijer. Gilad sucks at it.

catnaroek
Scala and Common Lisp are not particularly functional languages. Functional programming in Scala is doable, although it takes a nontrivial amount of effort (see: scalaz), and it is outright impractical in Common Lisp.

As an aside, CLOS multimethods resemble Haskell's multiparameter type classes (except CLOS is dumber: you cannot provide any guarantee that the same types will provide two or more common operations) more than they resemble anything else also called "object-oriented".

Peaker
Multimethods are not quite as powerful as type-classes. Type-classes can dispatch on any part of the type signature, whether it is an argument, result type, parameter to a type, etc.
catnaroek
Agreed there. But give me a little break, I only said "resemble", not "are the same as". :-)
agentultra
It is a common mistake I've heard from many CL newbies that believe CL is a "FP" language.

The best descriptor I can find to date (of CL) is, "programmable programming language," which allows it to encompass almost every desired feature one may need; including many that fall under the FP umbrella which may be where the confusion stems from.

However, one of the opening points of the talk was that "FP" is not a rigorously defined term and is subject to interpretation. Which leads to bikeshedding over language features and a lot of hype.

I believe it also leads to a lot of misplaced faith in the purity and completeness of mathematics (it's almost as if the popular notion of FP is being reborn as a modern Principia Mathematica).

CL obviously cannot be called an "FP" language since its inception seems to predate the popular notion of the term. Scala may suffer in the same way due to its reliance on the JVM and the expression semantics it has carried over from Java. However, many of the features one tends to associate with modern FP languages (though not all) are present in both languages.

As for your aside, how so? Perhaps a discussion we can have over email if you're interested. You sound smart. However I don't understand your statement and would like to know more.

catnaroek
> As for your aside, how so?

CLOS multimethods do not "belong" to an object or even to a class declaration. Particular implementations of generic methods are declared globally, just like Haskell type class instances. Although, as Peaker noted, type classes can dispatch on any part of the type signature. It is impossible to make a CLOS multimethod with signature:

    (SomeClass a b) => String -> (a, b)
> Perhaps a discussion we can have over email if you're interested.

Sorry, I never check email. But I am almost always on Freenode. My nick is pyon.

> You sound smart.

Not really. The regulars in #haskell - now they are frigging smart.

agentultra
I think the comparison to type classes is specious and ends there. They look similar but they tackle very different problems. You've actually explained why rather well.

> Not really. The regulars in #haskell - now they are frigging smart.

Don't sell yourself short.

Peaker
It seems to me type classes have a superset of the features of CL multimethods. Why not compare them?
jstratr
Interesting talk! Bracha has some good arguments against features that I generally enjoy in programming languages, like Damas–Hindley–Milner type inference and pattern matching.

Regarding Haskell: The points he makes against obtuse names based in category theory are valid, but then again, Haskell has its roots in research programming languages. Math-based terminology makes more sense for an academic audience.

asdasf
>The points he makes against obtuse names based in category theory are valid

No, they aren't. When you have a class of "things" that doesn't have a name most people are familiar with, you are left with two options. Either choose a name people are familiar with, but which is wrong and misleading. Or choose the correct name and people have to learn a name. Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?

catnaroek
> Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?

We are even more pathetic than that. If the underlying concepts are misleading but evoke a warm and fuzzy sense of familiarity (objects), we will accept them wholeheartedly. If the underlying concepts are mathematical, we will reject them as disconnected with our everyday needs.

gohrt
Most forum debates about computer science can be replaced by pointers to Edsger Dijkstra's writings.

http://en.wikipedia.org/wiki/On_the_Cruelty_of_Really_Teachi...

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EW...

example:

"""My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. ..

I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions."""

jstratr
>Are we seriously so pathetic as an industry that learning 3 new technical terms is a problem?

For most people, yeah, I think monads are a big hurdle. They look intimidating to outsiders.

I still have to admit that I like the approach Haskell has taken. Sure, it's harder to grasp the concepts if you don't have a background in math, but it's not like monads, monoids, arrows, and functors were thrown in there just to be pretentious. There's a whole lot of useful theory surrounding those concepts that can be used to the programmer's advantage.

sethev
Part of his critique is that they do use terms that people are familiar with but which are misleading, like "return".
Peaker
This is a very valid critique, but I don't remember ever hearing that (except from Haskellers!)
mafribe
It's a fairly superficial matter, not worthy of a lengthy diatribe. One gets used to names.

After all compilers don't compile, they translate.

catnaroek
So call it "pure". I agree that "return" is not the most fortunate term.
thinkpad20
To an extent, I think it's a valid criticism. There are two main problems with the mathy names that many concepts in Haskell have.

The first is that they hide the meaning. For example, "Monoid" is a really scary term, and explaining it further as "something with an identity and an associative operation" really doesn't help much either. Calling it instead "Addable" or "Joinable", and explaining it instead as "things with a default 'zero' version, and which have a way to add two of them together", while perhaps not a perfect definition, would be much more intuitive for the majority of people.
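
(The "things with a 'zero' version and a way to add two of them together" definition can be sketched in a few lines of Python; a hedged illustration, and note the last two cases are exactly why "Addable" isn't quite right either, since monoids aren't only about numbers:)

```python
# A monoid is a set of values with an associative operation `op`
# and an identity element. Folding any number of items is
# well-defined precisely because of those two properties.
from functools import reduce

def mconcat(op, identity, items):
    return reduce(op, items, identity)

assert mconcat(lambda a, b: a + b, 0, [1, 2, 3]) == 6             # sum
assert mconcat(lambda a, b: a * b, 1, [1, 2, 3]) == 6             # product
assert mconcat(lambda a, b: a + b, "", ["a", "b", "c"]) == "abc"  # strings
assert mconcat(lambda a, b: a + b, [], [[1], [2, 3]]) == [1, 2, 3]  # lists
```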

That brings me to the second problem I see, which is that the esoteric terminology in Haskell creates a barrier between those who understand it, and those who don't, and contribute to a sense of Haskell culture being exclusionary and cult-like, which discourages cross-talk.

Criticizing Hindley-Milner, on the other hand, I'm confused by. It's such a useful and powerful system. I suppose it can make compiler errors more obscure at times, but you get used to reading them and they aren't so bad. Hindley-Milner isn't just a type inference system; it's a typing system which allows the most general type to always be used, so that the functions one writes are as general as possible, encouraging modularity and code reuse.

charlieflowers
I hate to say this sort of, because it is going to sound snobby, and I do not believe in being snobby. But the undeniable truth is, there's a fraction of the world's population of programmers who simply do not have the particular mental traits that would allow them to ever be completely comfortable and confident with a concept as abstract as monad.

Some people will call me Satan now, and others will jump on what I said and say, "Hell yeah, the world is full of dumb blub programmers." But both those groups are misunderstanding me.

I think a programmer who cannot understand this level of abstraction can still do plenty of valuable things as a programmer. I would not call them dumb. They may be -- and many are -- fabulously creative, driven, capable and highly productive.

There's a certain type of programmer who is more comfortable with abstraction and whose brain is more wired to deal with these amorphous, unnamed concepts. The same kind of brain wiring is needed to go far with mathematics.

But as FP becomes more prominent, this is going to become a dividing issue. Some will not make the transition, or will do so only partially. I think it's great to try to communicate better where possible, but even the best communication is not going to completely erase the issue.

jejones3141
Isaac Asimov, in one of his essays, gave an analogy for those unfamiliar with, or perhaps frightened by the scary name of, the "complex" numbers: street addresses. Should programming languages get rid of that scary name and refer to complex numbers as "addressable"?
judk
Hence why Monads have been named "Warm Fuzzy Things" in some Haskell papers about outreach.
pacala
> explaining it further as "something with an identity and an associative operation" really doesn't help much either

My 3rd grade daughter learns about associativity and identity. Is it too much to expect adults to not get all defensive over 3rd grade terminology?

catnaroek
What is so scary about monoid? A classical monoid is precisely a category with one object, hence the name. "Addable" and "Joinable" do not quite cut it - not all monoids are defined on numbers (or generalizations of them such as vectors or matrices) or sets (or generalizations of them such as categories or topological spaces).
thinkpad20
Like I said, it's a scary term, because hearing the word "monoid" conveys exactly zilch about what it is, and it sounds strange and abstract. And like I said, the definition I gave is not a precise one, but it's an intuitive one. Once you have an intuitive understanding as a starting point, you can abstract to other things.

This is just my opinion, of course.

groovy2shoes
Hearing the word "dog" conveys exactly zilch about what it is, and it sounds strange and abstract. Maybe we should call these animals "barkables".
thinkpad20
Except that it doesn't sound strange and abstract, because it's a word that everyone is familiar with. My point is about accessibility, not theoretical correctness.
groovy2shoes
My point is that a monoid is a monoid. It's an abstraction that's so basic that it cannot be broken down. It's a concept that you learn, like how you learn what a dog is or what integers, loops, functions, sets, hashmaps, etc are.

A very large part of our job is to apply abstractions. I don't often hear lawyers complaining about how accessible the name of some law is, or doctors complaining about the name of some disease. I've never heard an American football player say "We should call the pistol formation something else. Calling it pistol is potentially confusing". They just learn what a pistol formation is and carry on.

As programmers, abstraction is a very large part of our job. We owe it to ourselves to learn the basics and to improve our abilities with respect to our craft, even though sometimes it's hard.

catnaroek
Welcome to engineering. We use specialized jargon to talk about concepts that laymen might not find obvious, but are indispensable for us to get our work done.
gnuvince
Most of what you learn in CS is a bunch of words whose meanings you have no idea of until someone gives you a precise definition. Deterministic finite automata, regular expressions, static typing, serialization, compilation, the singleton pattern, etc. are all terms we use every day in our profession, but it's not self-evident what they mean. We read the definition, forget it, someone reminds us, we forget it again, we implement it, and then we remember. Same with monoids, functors, or monads: we need to take some time to learn what they mean, and then we can include them in our vocabulary.
Dewie
I think some people should at least try to get over their math-phobia. I am trying to myself, and math is an uphill battle for me even without that kind of fear. The sloppy, opaque, inconsistent and often overloaded notation is an impediment, the 'let the reader infer most of this' proofs are an impediment... but if people are turned off to such a degree over a few names, they wouldn't last long in something which is such a mathy language anyway, relatable names or not.
asdasf
You are using the exact reasoning I was talking about. Monoids are monoids. That is what they are. 99% of programmers are not familiar with them. If you call them "addable" or "joinable" or "appendable", you are just making people think that one subset of monoids is the definition of monoids, when it isn't. They still don't know what monoids are, and now they also don't know what they are called. You are literally giving them an incorrect and misleading name. All that does is confuse people. You have to learn what monoids actually are even if you call them "addable"s. Rather than learning a misleading name for them, it is quite simple to learn a new term like "monoid". Considering there are really only three that people need to learn (monoid, functor, monad), this is not an overwhelming burden.
gohrt
Check yourself before you try to say that "appendable" is an inappropriate name for Monoid.

http://hackage.haskell.org/package/base-4.6.0.1/docs/Data-Mo...

    Methods
      mempty 
      mappend
      mconcat

Haskell people like their mathy terms. Mathy terms aren't universally unambiguous ("group"? "ring"? "field"?), but they are mostly unambiguous within math. Haskell people tend to pretend Haskell is the same as math, ignoring the programming part of its heritage.
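[Editor's note: a minimal sketch of those methods in action, not taken from the thread. For lists (likely the instance that inspired the naming), "append" reads naturally; for the `Sum` wrapper from `Data.Monoid`, the same operation is addition, where "append" reads oddly.]

```haskell
import Data.Monoid (Sum (..))

-- For lists, the Monoid methods behave as the "append" names suggest:
listMempty :: [Int]
listMempty = mempty

appended :: [Int]
appended = [1, 2] `mappend` [3]

concatenated :: [Int]
concatenated = mconcat [[1], [2], [3]]

-- The same operation on the Sum wrapper is addition:
-- "appending" 5 to 3 gives 8.
eight :: Int
eight = getSum (Sum 5 `mappend` Sum 3)
```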
platz
Probably because 'mappend' was defined with lists in mind, but it is hardly appropriate for the general case: 'appending' 5 to 3 gives 8. It could be better.
judk
The choices are to either abuse an existing word or make up a new one (possibly a homograph).

What is '+'? Addition? Modular addition? Logical OR? Concatenation? Sometimes, any of these.

There will always be more concepts than distinct labels, since the space of concepts is an exponential combination (power set) of words.

platz
I didn't say it had to be plus.

I don't have the perfect answer, but 'append' certainly seems like choosing a specific concept rather than trying to come up with a more general name.

asdasf
I am fully aware of what the typeclass defines. It is an inappropriate name. I don't think there is anyone who likes those names; everyone uses <> instead of mappend. 'Append' is inappropriate because many monoids, Product and Sum for example, have nothing to do with appending.
tel
In practical Haskell, people rarely use mappend, eschewing it for the more generic (<>) operator. Personally, I think it's exactly for the reason stated above: monoid is far more general than "appending".

In particular, it's easy to define a reverse monoid for any (non-commutative) monoid such that append becomes prepend. It's easy to construct monoids with different spatial properties, like Diagrams' "stacking" monoid (they have many others, too; see this entire paper http://www.cis.upenn.edu/~byorgey/pub/monoid-pearl.pdf). It's also easy to construct monoids that don't have any spatial sense at all, like set union.
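[Editor's note: the "reverse monoid" mentioned here exists in the standard library as `Dual` in `Data.Monoid`; a short sketch of append turning into prepend:]

```haskell
import Data.Monoid (Dual (..))

-- Strings under (<>) concatenate left to right:
forward :: String
forward = "ab" <> "cd"          -- appends

-- Dual wraps any monoid and flips the operation's arguments,
-- so "append" becomes "prepend":
backward :: String
backward = getDual (Dual "ab" <> Dual "cd")
```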

Peaker
Everyone hates the name mappend, but it's still more commonly used than <>, which IIRC is relatively recent.
tel
For the record, I agree. There's a lot of older code where `mappend` is used commonly. More accurately, I should have said that (<>) has taken modern coding style by storm.
Peaker
"Addable" will not actually be more informative than "Monoid", to someone who doesn't know "Monoid".

"Monoid" will be very informative to anyone who learned it from mathematics.

A "Monoid" is a type which supports an associative operation (`m -> m -> m`) and a neutral element (`m`) that serves as its identity.

"Addable" suggests it is "addition". Does this mean it is commutative? For the sake of precision, I'd hope so! (But monoids need not be commutative.) Does this mean it has a negation? No. So it is not "addition"; why use a misleading name for the sake of a false sense of "intuition"?

The actual explanation of what a Monoid is, precisely, is so short and simple that it makes no sense to appeal to inaccurate intuitions.

thinkpad20
That's a completely valid point of view. You're not wrong at all. I'm guessing, though, that you had learned it before from mathematics. My point is one of pragmatic, not theoretical, distinction. To those without a mathematical background (most people are not going to learn monoid unless they've studied abstract algebra), or who are less interested in mathematics in general, an obscure term like that is discouraging. I know that the Haskell community is heavily mathematical, and have little interest in "dumbing down" the language for the sake of those who are put off by theory, but it is a real tradeoff and one of the things that is likely to impede the introduction of Haskell into the mainstream.
Peaker
I've learned Monoid in Haskell, not maths. It's just so simple and easy that there's really no dumbing down necessary.

Monad is simple and hard, but Monoid is simple and easy.

thinkpad20
With respect to monoid, you're right. It's really quite simple when you get down to it. I don't have any arguments there. In fact, the fact that monoids are really so simple is kind of my point. In almost any other language, were such a thing to exist, monoids would not be called monoids but by some descriptive term which conveyed an intuitive sense of their meaning and use; it would be the purview of the mathematically inclined to write articles explaining how "actually, what we call the Joinable type class is known in abstract algebra as a Monoid, and its use extends beyond just joining things; for example..."

My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions. Like I've said a few times, this isn't incorrect at all. Nor is it surprising given Haskell's origins, nor is it without purpose since it deepens your understanding of what's going on in the language. It's just a simple fact that the mathematical jargon is a turn-off to newcomers and those who don't feel they want to be forced to learn math while they're programming, or might think they're incapable of doing so.

As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.

Peaker
We must have different ideas about what "practicality" is.
asdasf
>In almost any other language, were such a thing to exist, monoids would not be called monoids but by some descriptive term which conveyed an intuitive sense of their meaning and use

There is no such term, that is the point. Offering up misleading terms that do not convey a sense of their meaning is much worse than a word that is unfamiliar.

>My point isn't really specifically about monoids; they're just an example of what often goes on in Haskell, which is that people put theory before practicality and mathematical (and hence often esoteric) definitions before practical, real-world definitions.

But it isn't an example of that. It is quite bizarre to see people insist that this goes on, giving examples that do not support the claim, while remaining fully convinced by their own proof.

>As it turns out, I'm not one of those people; I love the mathematical side of Haskell and I love that I've learned what a Monoid is and developed an interest in type theory, category theory and all kinds of other things. But not everyone is like that, and that's the point I'm making.

You don't need to be like that, that is the point we're making. I am not a math person. I am not a CS person. I am a high school drop out who taught himself to code in PHP and C. I learned haskell just fine. I learned monoids and functors and monads just fine. I am no more mathematically inclined now than I was before. I know nothing of category theory, and care nothing of it. They are very general abstractions that do not reflect a narrow, specific use case, and thus do not benefit from a word that describes some narrow, specific use case.

jejones3141
Well, yeah, but... the term "monoid" already exists, and has a definite meaning. A different name might give people an intuition for it--but it will be a wrong one that they'll have to unlearn later, like the infamous burrito (not that you or anyone has suggested that monads be renamed burritos, I am happy to say!).
paulkoer
I recommend: http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...

But in the end I think for most people 'understanding' the concept of monads is just something that is not to be had within a couple of hours. It takes a little bit of patience thinking about them and using them for a while.

lmm
FWIW I'll post mine: http://m50d.github.io/2013/01/16/generic-contexts.html
charlieflowers
The core problem seems to be that we just don't (yet) have good words for this particular space of abstractions. Many programmers have thought about or solved the same problems monads address, but they haven't mentally labelled the concept, and we certainly haven't agreed as an industry on labels for it. So we're all struggling to talk about things we don't have terms for.
qu1j0t3
"The core problem seems to be that we just don't (yet) have good words for this particular space of abstractions"

How about "functor," "monad," "applicative" etc.?

agumonkey
My first non-Haskell monad tutorial was this: dorophone.blogspot.com/2011/04/deep-emacs-part-1.html

Very 'operational', zero magic, good for getting your hands dirty.

Dan's tutorial that you linked is one of my favorites too.

chongli
You've probably read too many monad tutorials that make terrible analogies. Try reading this instead:

http://dev.stephendiehl.com/hask/

chongli
Why isn't it helpful to you?
carterschonwald
yes, i can comfortably say "any doc by stephen diehl is worth reading" :)
pramalin
Have you read "Monads are Elephants"? That helped me a great deal.

http://james-iry.blogspot.com/2007/09/monads-are-elephants-p...

thinkpad20
As someone who went through the same thing, my best advice is, don't read monad tutorials; just write monadic code. Reading too much can just be confusing and might make you feel like an idiot for not understanding it yet. In the beginning it might be weird, and you'll no doubt spend a great deal of time puzzling over obscure type errors, but it will eventually become intuitive, if you're actually writing code and working through it. Haskell is a theory-heavy language, but it's still a programming language, which is actually meant to do things. There's no substitute for experience.

Perhaps try going through "Write Yourself a Scheme" which uses monads from the outset, or look at "Monad Transformers Step-by-Step" (be warned though, it starts off mostly simple but then makes a sudden and somewhat jarring leap forward). Try to implement a stack with "push" and "pop" monadic operations (use this as a starting point: http://brandon.si/code/the-state-monad-a-tutorial-for-the-co... but keep in mind it's much more important to WRITE the code, and play with it, than to try to understand how it's all working just via explanations).

For what it's worth, here's my ten cent explanation of monads:

A monad is an interface for containers. A type that implements this interface must provide two methods: `return`, which inserts an object into a container, and `bind`, which says what should happen when we use the value in one container to create a new container.
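[Editor's note: a minimal sketch of that ten-cent definition using Maybe; `halve` is a made-up helper used only for illustration.]

```haskell
-- halve succeeds only on even numbers, so it naturally lives in Maybe.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- return inserts a value into the container; (>>=) ("bind") uses the
-- contained value to build the next container.
chained :: Maybe Int
chained = return 12 >>= halve >>= halve          -- Just 3

-- A failure anywhere in the chain short-circuits the rest:
stuck :: Maybe Int
stuck = return 12 >>= halve >>= halve >>= halve  -- Nothing, since 3 is odd
```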

platz
The container analogy does break down in some instances. For example, using monads to model a workflow. I've come to accept 'Computational Context' as the best descriptor so far.
charlieflowers
I know it's a running joke how many monad tutorials and analogies there are. And I get the humor and very much enjoy the joke.

However, I was able to gain a solid understanding of monads by working through a number of those tutorials. Over that period of time, I came to suspect, and then eventually confirm, that I had previously created my own monad for a particular purpose in C# (my case was checking the value of something in an XML tree, an attribute of an element of an element of an element, where the attribute might be missing, or its parent element might be missing, or its parent, and so on. So it was much like Bracha's ".?" sugar example).

So, monad tutorials actually (eventually) made the concept clear to me. It's fun to make fun of them, and they deserve to take a little heat, but we have to give them credit where credit is due too.

namelezz
In his talk, on the subject of currying, he mentioned that relying on the type system is not a good thing. Does anyone know the reasons behind his view?
latk
Currying can obfuscate what is applied to what. Consider in any ML language "a b c d" – we can see that "a" is a function, but we have no idea of its arity. Uncurried, it could be: "a(b, c, d)", "a(b, c)(d)", "a(b)(c, d)", "a(b)(c)(d)" (oh, that's the curried form again). Especially when function definitions are implied through pattern matching, it is hard to understand the contract of a function at a glance.

Since a reader of that code cannot easily tell whether the number and types of the arguments are correct, one has to trust the type checker that everything will work out.

However, this is more of a criticism of ML syntax than of currying – all things are good in moderation.
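[Editor's note: a contrived pair of definitions (mine) illustrating the point: the two functions read identically at the call site, and only the types reveal how the arguments are consumed.]

```haskell
-- A "two-argument" function in the usual sugar:
f :: Int -> Int -> Int
f x y = x + y

-- Written as a one-argument function returning a function,
-- yet its type and call syntax are indistinguishable from f's:
g :: Int -> Int -> Int
g x = \y -> x * y

useF, useG :: Int
useF = f 2 3   -- 5
useG = g 2 3   -- 6
```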

mafribe
I'm not following you here. In ML-like languages, a b c d is clear-cut: it means (((a b) c) d). No ambiguity whatsoever.

Bracha's critique, as usual, is missing the point.

alipang
Agreed. a b c d is a function applied to three arguments that returns a value. Functions are values, that's the whole point.

For some reason, it seems the parent post would like to specifically indicate the case where a function application results in a value that is not itself a function? Seems quite strange to me.

jhaywood
That's not true. At least in SML, every function takes only one argument. If a function has an arity higher than 1, it is because it takes a single tuple as an argument. But you can't use the sugar for currying and tuple arguments interchangeably.
bunderbunder
It's a problem with ML syntax, but one that can easily be overcome with parentheses. Sort of like how a circumspect C programmer uses parentheses in complex mathematical expressions rather than relying on everyone being able to correctly remember complex order of operation rules.

OTOH, pipeline operators make a good case for currying. There really is something nice about being able to write

  sliceOfBread
  |> smearWith peanut-butter
  |> smearWith jelly
  |> topWith sliceOfBread
  |> cutInHalf
  |> eat
instead of

  eat(cutInHalf(topWith(sliceOfBread, smearWith(jelly, smearWith(peanut-butter, sliceOfBread)))))
groovy2shoes
I just want to point out that the pipeline operator (or, more accurately, the forward application operator), is not provided by many ML implementations, but it's trivial to define it yourself. Here it is in Standard ML:

    infix |>;
    fun x |> f = f x;
The definition is also similar in Haskell:

    x |> f = f x
And in OCaml:

    let (|>) x f = f x;;
F# and Elm provide this operator out of the box.
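[Editor's note: the one-line Haskell definition above, exercised on ordinary list functions since the sandwich functions in the earlier example are hypothetical; a fixity declaration is added so the pipeline chains left to right.]

```haskell
-- Forward application, as defined in the comment above:
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- A small pipeline using standard list functions:
piped :: [Int]
piped = [3, 1, 2]
  |> map (* 10)     -- [30, 10, 20]
  |> filter (> 15)  -- [30, 20]
  |> reverse        -- [20, 30]
```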
platz
Why isn't it used more in Haskell?
vilhelm_s
For one thing, nobody could agree what it should be named...

http://thread.gmane.org/gmane.comp.lang.haskell.libraries/18...

munificent
> more of a criticism of ML syntax than of currying – all things are good in moderation.

I don't follow this. My understanding is that currying is pure syntactic sugar: it's a cheap way to express partial application.

What am I missing?

catnaroek
It is not syntactic sugar. It is one of two demonstrably equivalent ways to emulate functions of two arguments. The demonstrable equivalence comes from the equational theory of cartesian closed categories. The need to emulate functions of more than one variable comes from the fact that only functions of one variable are a native concept.
Peaker
The word is encode, not emulate. Native support isn't any more concise, in fact it is more verbose when partial application is involved.

So why complicate the language with native support for a feature whose encoding on top of one argument functions is concise, elegant, and works well?

catnaroek
> Native support isn't any more concise, in fact it is more verbose when partial application is involved.

I only know too well. Everytime I write stuff like

    using namespace std::placeholders;
    std::bind(foo, bar, _1, _2, baz, _3, _4);
I wish I were using an applicative language instead.
chongli
This is not a problem in Haskell, as it makes a distinction between the types of all these different functions. The uncurried forms take tuples (a distinct type) as arguments whereas the curried form does not.
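[Editor's note: a short sketch of that distinction; the Prelude's `curry` and `uncurry` convert between the two forms, which have visibly different types.]

```haskell
-- Curried form: one argument at a time.
addCurried :: Int -> Int -> Int
addCurried x y = x + y

-- Uncurried form: a single tuple argument, a distinct type.
addTupled :: (Int, Int) -> Int
addTupled (x, y) = x + y

-- The standard converters relate the two:
viaUncurry :: Int
viaUncurry = uncurry addCurried (2, 3)

viaCurry :: Int
viaCurry = curry addTupled 2 3
```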
catnaroek
In ML, they are all different functions as well.
codygman
> all things are good in moderation

What about poison? Rabies? Rabies in moderation actually sounds quite appealing.

latk
Two words: medicines and vaccines. Of everything there is a “too little” and a “too much”, which is especially true with programming paradigms. Some people write procedural code where OOP should be used, others abuse OOP for something that should have been done in a functional manner, and sometimes functional programs should rather be expressed with procedural code.
namelezz
Thank you for your explanation.
thinkpad20
It's actually simpler in some ways, because we know that "a" must have arity 1. What we know is that "a" should be a function which takes a "b", that "a b" should be a function which takes a "c", and "a b c" should take a "d".

As a practical consideration, this rarely if ever becomes an issue, and if it does, the type checker will tell you straight away.

Type annotations can make clear what isn't intuitively clear with a function's signature, and since the correctness of the type checker is rigorously proven, I don't see anything particularly wrong with "relying" on the type checker.

latk
The argument that every function has arity 1 is technically true (that's the whole point of currying), but it is not useful when definitions like "let a b c = ..." suggest other semantics. It's possible you've had a different experience with this, but I tend to get confused when the semantic argument list isn't delimited.

There is nothing wrong with relying on the type checker, except that it tends to add cognitive overhead.

mafribe
Every function having arity 1 is reducing complexity. It's extremely uniform, and let a b c = ... is merely syntactic sugar for let a = (lambda b. (lambda c. ...)).

It's very natural, and as Lisp/Scheme/Racket shows, it's perfectly fine in a dynamically typed context as well.

Peaker
"let a b c = ..." doesn't suggest other semantics. It's saying: "a applied to b, and then applied to c, equals ...". Note Haskell makes an effort to have the LHS of definitions imitate the exact syntax of function application. Patterns use the same syntax as data constructor applications.
thinkpad20
In my experience, the more you use currying, the more intuitive it becomes (surprise, surprise). In any case, you very quickly develop an understanding that `let foo bar baz = qux` is just syntactic sugar for `let foo = \bar -> \baz -> qux`. Of course, if you want to simulate higher-arity functions, you could just use tuples. It's perfectly acceptable to write `let foo(bar, baz) = qux`.
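[Editor's note: the desugaring described here, spelled out; the lambda form is what makes partial application come for free.]

```haskell
-- The usual sugar:
add3 :: Int -> Int -> Int -> Int
add3 a b c = a + b + c

-- ...is shorthand for nested one-argument lambdas:
add3' :: Int -> Int -> Int -> Int
add3' = \a -> \b -> \c -> a + b + c

-- Supplying fewer arguments simply stops partway down the chain:
addFirstTwo :: Int -> Int
addFirstTwo = add3 1 2

seven :: Int
seven = addFirstTwo 4
```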
delinka
Can we get an [audio] indicator?
kaeluka
YES, I've been waiting for this! Thanks so much! :)
DanWaterworth
TL;DR FP hater talks about FP.
mafribe
I found Bracha's talk poor. That guy really has a chip on his shoulder vis-a-vis functional programming. A lot of things he said were not well thought out. Here are some examples.

- He claimed that tail recursion could be seen as the essence of functional programming. How so?

- He complained that tail recursion has problems with debugging. Well, tail recursion throws away stack information, so it should not be a surprise. You don't get better debug information in while loops either. And you can use a 'debug' flag to get the compiler to retain the debug information (at the cost of slower execution).

- His remarks about Hindley-Milner being bad are bizarre. Exactly what is his argument?

- His claims about pattern matching are equally poor. Yes, pattern matching does some dynamic checks, and in some sense it is similar to reflection. But the types constrain what you can do, removing large classes of possible errors. Moreover, typing of patterns can give you compile-time exhaustiveness checks. Pattern matching has various other advantages, such as locally scoped names for subcomponents of the thing you are matching against, and compile-time optimisation of matching strategies.

- He also repeatedly made fun of Milner's "well-typed programs do not go wrong", implying that Milner's statement is obviously nonsense. Had he studied Milner's "A Theory of Type Polymorphism in Programming", where the statement originated, Bracha would have learned that Milner uses a particular, technical sense of "going wrong" that does not mean the complete absence of any errors whatsoever; in that sense, well-typed programs do indeed not go wrong.

- He also criticises patterns for not being first-class citizens. Of course first-class patterns are nice, and some languages have them, but there are performance implications of having them.

- His critique of monads focused on something superficial: how they are named in Haskell. But the interesting question is: are monads a good abstraction to provide in a programming language? Most languages provide special cases: C has the state monad, Java has the state and exception monads, etc. There are good reasons for that.

- And yes, normal programmers could have invented monads. But they didn't. Maybe there's a message in this failure?

RyanZAG
I'm going to save these HN comments for 5 years time when the hype on functional programming has died down a bit. Will be very humorous to read this again then.
badman_ting
You're silly.
jonsen
No chance. The hype will recurse forever. Even on stackoverflow.
platz
Deploy the canaries!
Peaker
Gilad Bracha sounds like he hasn't used a typed language long enough to stop struggling with basic type errors.

As such, it is of a position of extreme ignorance that he speaks of the uselessness of type checking and inference.

Claiming Smalltalk has the best closure syntax shows he doesn't understand call by need. Haskell defines easier to use control structures than Smalltalk.

Claiming patterns don't give exhaustiveness, ignoring their extra safety shows Gilad doesn't understand patterns.

Claiming monads are about particular instances having the two monad methods, when they are about abstracting over the interface, shows Gilad doesn't understand monads.

Claiming single argument functions have the inflexibility of identical Lego bricks shows he doesn't understand the richness of function types and combinators.

In short, Gilad sounds to me very much like a charlatan who'd benefit greatly from going through LYAH ("Learn You a Haskell").

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.