HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Why Isn't Functional Programming the Norm? – Richard Feldman

Metosin · Youtube · 320 HN points · 16 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Metosin's video "Why Isn't Functional Programming the Norm? – Richard Feldman".
Youtube Summary
Richard is a member of the Elm core team, the author of Elm in Action from Manning Publications, and the instructor for the Intro to Elm and Advanced Elm courses on Frontend Masters. He's been writing Elm since 2014, and is the maintainer of several open-source Elm packages, including elm-test and elm-css.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
That doesn't mean the imperative languages are "better", though, just more popular. I recommend the following video titled "Why isn't Functional Programming the Norm?": https://www.youtube.com/watch?v=QyJZzq0v7Z4
goatlover
Sure, I'm skeptical that any paradigm or PL is superior in general, it probably all just depends on various factors.
lupire
If you want to say that popular isn't better, you have to say that a lot of people are either making bad choices or lack agency.

Neither of which makes sense when you look at something like Git or Linux, where someone decided to make a whole new thing without dependencies, and it displaced the previous thing, and the people who use it don't need to care what language it was written in.

senand
I want to say popular isn't _necessarily_ better. Your argument would only be a counterexample if I had said (which I didn't) that more popular is _always_ worse.

In the world of programming (and I guess elsewhere, too), there are simply many more factors at play when "choosing" a programming language than purely how good the language itself is. The easiest example is JavaScript, which is widespread simply because it was the only language available in the browser. I recommend again that you watch the video :-).

Maybe it ought to be but it definitely isn't if you look at success as industry adoption.

The author seems to have good intentions and covers all the talking points a new convert will discover on their own.

However I'm afraid an article like this will do more harm than good in the end. There are too many network effects in play that go against a new paradigm supplanting the mainstream as it is. And the benefits of functional programming pointed out in this article haven't been convincing over the last... many decades. Without large, industry success stories to back it up I'm afraid any amount of evangelism, however good the intention of the author, is going to fall before skeptical minds.

It doesn't help that of the few empirical studies done none have shown any impressive results that hold up these claims. Granted those studies are few and far between and inconclusive at best but that won't stop skeptics from using them as ammunition.

For me the real power of functional programming is that I can use mathematical reasoning on my programs and get results. It's just the way my brain works. I don't think it's superior to procedural, imperative programming. And heck, there are some problem domains where I can't get away from thinking in a non-functional way.

I think the leap to structured programming was an event that is probably only going to happen once in our industry. Aside from advances in multi-core programming, which we've barely seen in the last couple of decades, I wouldn't hold out for functional programming to be the future of the mainstream. What does seem to be happening is that developments in pure functional programming are making their way to the entrenched, imperative, procedural programming languages of the world.

A good talk, Why Isn't Functional Programming the Norm?

https://www.youtube.com/watch?v=QyJZzq0v7Z4

Sep 07, 2022 · 7 points, 1 comments · submitted by snikolaev
john_the_writer
Because it solves nothing standard C++ style OO doesn't, and misses out on some of the good stuff that OO does.

Passing a map around in FP and modifying it as it moves is virtually identical to adjusting a member variable, except now you can't protect the changes with a setter.

After years in C, then C++, then Rails, and now Elixir, I honestly don't see that there is a benefit to functional programming. My unit tests are the same, and the code is just as buggy regardless of approach.

It's never OO vs. FP that makes the difference.

There's a great talk -- https://www.youtube.com/watch?v=QyJZzq0v7Z4 -- Why Isn't Functional Programming the Norm? – Richard Feldman -- that goes into this.

It's a discussion of "why" programming languages succeed.

Mar 02, 2022 · 3 points, 0 comments · submitted by fagnerbrack
Richard Feldman (Elm) analyzes why programming languages become popular and argues Java is unique: it became popular because Sun spent hundreds of millions of dollars promoting it, even after the dot-com crash. Others, like Ruby, had a killer app.

His thesis is that object-oriented programming wasn't inevitable, nor is it the best way to program; it was an accident of history that OOP became so popular.

Why Isn't Functional Programming the Norm? – Richard Feldman

https://www.youtube.com/watch?v=QyJZzq0v7Z4

Another Java tidbit: Miguel de Icaza (GNOME, Mono) really wanted an open-source, higher-level language on Linux, so he approached Sun about Java. They wouldn't work with him, so he settled on C#, and he is probably the man most responsible for the move to cross-platform C# and .NET (.NET Core, v5).

https://tirania.org/blog/archive/2006/May-11.html

I recently watched this talk-

Why Isn't Functional Programming the Norm? (https://youtu.be/QyJZzq0v7Z4)

Here's the HN thread- https://news.ycombinator.com/item?id=21280429

I learned in that talk, among other things, that Sun spent $500 million to promote and market Java.

- https://www.theregister.com/2003/06/09/sun_preps_500m_java_b...

- https://www.wsj.com/articles/SB105510454649518400

- https://www.techspot.com/community/topics/sun-preps-500m-jav...

shaklee3
C++ and C have spent $0 on marketing and are more popular, so I don't think this is a good indicator of success.
kaba0
Never heard of any conference starting with Cpp or the like, nor does it have a website I guess..
cjfd
That you have never heard of things does not mean they do not exist.... https://cppcon.org/
kaba0
It was sarcasm.
shaklee3
The sarcasm didn't make sense. The conference was started way, way later, by community organizers from all companies. This completely disproves that it was a major money push from one company.
Jensson
Those exist because the language is popular; they weren't created to market the language before it was popular.
kaba0
My point is that OP's comparison is useless, because Java most definitely has marketing costs now, mostly associated with its 8+ million Java developers, the same as other popular languages.
Jensson
How is this relevant when the topic was that Sun literally spent $500 million marketing Java? The community of anything popular markets it, yes, but that is a very different thing from having an actual marketing budget to push it to popularity.
pjmlp
This is what made C++ popular,

Apple:

https://en.wikipedia.org/wiki/Macintosh_Programmer%27s_Works...

https://en.wikipedia.org/wiki/PowerPlant

Borland:

https://en.wikipedia.org/wiki/Turbo_Vision

https://en.wikipedia.org/wiki/Object_Windows_Library

IBM:

https://en.wikipedia.org/wiki/IBM_Open_Class

Microsoft:

https://en.wikipedia.org/wiki/Microsoft_Foundation_Class_Lib...

https://en.wikipedia.org/wiki/Active_Template_Library

Jensson
The fact that you linked so many different companies is evidence that this wasn't just some push by a single company. Those things happened in parallel because the language was popular, and they are evidence of a vibrant community more than anything else. Being popular means many will want to build things with it, yes; saying it got popular because many people built things with it doesn't make sense.
pjmlp
Yes, because as I mentioned in the other comment you ignored: thanks to UNIX and C being born at AT&T, and to the $0 cost of UNIX tooling up to the mid-80s, source code included.

Had C++ been born somewhere else, as Objective-C was, its popularity wouldn't exist.

Jensson
> Yes, because as I mentioned on the other comment you ignored

I am not the other person you responded to.

Anyway, don't you think the fact that so many others decided to copy the language and implement their own versions of it is a testament to its popularity and not just that it got pushed by a single company?

pjmlp
It was pushed by UNIX popularity.
shaklee3
It was, which still proves that a single company marketing it with $500M didn't happen. Are you also going to say the same about Python, or can we end this discussion?
pjmlp
UNIX was just pushed by AT&T, Sun, HP, IBM, Compaq, DEC.

I'll let you sum up how much money they invested into selling their UNIX workstations.

As for Python, Zope backed it in the early days, plus the research labs and companies that employed Guido. If you want a list:

- DARPA funding in 1999

- Zope in 2000

- Google in 2005

- Dropbox in 2013

- Microsoft in 2020

You can sum up Guido's salary as per those corporations.

shaklee3
Exactly! Thanks for proving the point. It was a collective effort of people and companies pushing a good programming language, rather than a single company (Sun) doing it. Not to mention that they license it in certain cases.
pjmlp
Whatever.
pjmlp
Indeed, C came for $0 with UNIX, which AT&T wasn't allowed to sell; they provided source tapes for a symbolic price, at a time when commercial systems cost serious money.

C++ came in the same package as the C compilers; in some of them it was just a compiler switch away.

Both were picked up by OS vendors building on top of UNIX clones.

Yep, zero marketing.

deepsun
I thought Java was once the most popular language before Oracle bought it from Sun Microsystems.
Nursie
They bought the whole of Sun!

And yes, it was, though in the intervening years they've improved the language a lot, IMHO.

platz
This is also a good talk in a similar vein; although aimed at Haskell programmers, it's really about any technology looking to grow into the mainstream:

Gabriel Gonzalez – How to market Haskell to a mainstream programmer - https://youtu.be/fNpsgTIpODA

endymi0n
It's way simpler, actually.

Functional programming isn't the norm because, while it's extremely good at describing what things do and how actions compose, it's weak at describing what things are and how they relate to each other. Imperative programming has exactly the opposite balance.

I find the latter to be more valuable and applicable in 80% of real-world business cases, as well as easier to reason about.

Entity Relationship Diagrams, for example, are an extremely unnatural match for FP in my eyes, and they're my prime tool for requirements engineering. Code in FP isn't structured around entities; it's structured in terms of flow. That's both a bug and a feature, depending on what you're working on.

Most of the external, real world out there is impure. External services, internal services, time. Same thing for anything that naturally has side effects.

If I ask an imperative programmer to turn on three LEDs one after another, they're like: Sure, boss!

for led in leds: led.turn_on(); time.sleep(1); led.turn_off()

If I ask an FP guy to turn me on three LEDs after each other, first they question whether that's a good idea in the first place and then they're like... "oh, because time is external to our pure little world, first we need a monad." Whoa, get me outta here!

Obviously with a healthy dose of sarcasm.

Don't get me wrong, for the cases where it makes sense, I use a purely functional language every day: it's called SQL and it's awesome despite looking like FORTRAN 77. I also really like my occasional functional constructs in manipulating sequences and streams.

But for the heavy lifting? Sure, give me something that's as impure and practical as the rest of the world out there. I'll be done before the FP connoisseur has managed to adapt her elegant one-liner to that dirty, dirty world out there.

chii
> If I ask an FP guy to turn me on three LEDs after each other, first they question whether that's a good idea in the first place

a proper FP engineer would model the problem of turning on LEDs one after another as a set of states. A simple way would be a list of bit patterns, one per frame, where each character is an LED's on/off state, like ['000', '100', '110', '111'].

Then the problem decomposes into two simpler problems: 1) how to create the above representation, and 2) how to turn that representation into a set of instructions to pipe into hardware (e.g., send signals down a serial cable).

The latter problem is imperative by nature, but the former, the representation of states, is pure by design! So the FP model solves a bigger, more general problem of turning LEDs into patterns, and this particular sequence is just one instance of it.

So if your boss asks you in the future to switch the bit patterns to odd/even (like flashing christmas lights), you can do it in 1 second, whereas the imperative version will struggle to encode that in a for-loop.
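A minimal Python sketch of that decomposition (the `send_frame` hardware hook here is a hypothetical stand-in for whatever actually drives the pins):

```python
import time

def ramp_states(n):
    """Pure: the cumulative on-pattern ['000', '100', '110', '111'] for n LEDs."""
    return ["1" * i + "0" * (n - i) for i in range(n + 1)]

def alternating_states(n, frames):
    """Pure: odd/even 'christmas lights' patterns, e.g. '101', '010', ..."""
    return ["".join("1" if (i + f) % 2 == 0 else "0" for i in range(n))
            for f in range(frames)]

def run(states, send_frame, delay=1.0):
    """Imperative shell: push each precomputed frame to the hardware."""
    for frame in states:
        send_frame(frame)
        time.sleep(delay)
```

Switching to the christmas-lights pattern is then just `run(alternating_states(3, 10), send_frame)`; only the pure part changes.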

scoutt
> So if your boss asks you in the future to switch the bit patterns to be odd/even (like flashing christmas lights), you can do it in 1 second, where as the imperative version will struggle to encode that in a for-loop.

I guess you are talking about embedded, so I'll concentrate on the LED example. In embedded, code size and performance matter, so you try to be as straightforward as you can. And I think applying "your boss might ask you in the future" to every piece of code is what drives some development far past the point of necessary complexity.

Should I spend a week creating a super-complex infrastructure for turning some LEDs on and off, just in case my boss asks me to change the pattern? Should I spend a week thinking up the right code pattern, or trying to "solve a bigger, more general problem"? It's just 3 LEDs blinking... just write the damn for-loop!

At the end of the day, my microcontroller only "digests" sequential instructions. So the simplest thing (for embedded) is to think and feed the microcontroller with sequential instructions. All the rest is just ergonomics for the sake of programmer's comfort or taste.

I'll do the sequence. If my boss asks me to change the sequence, I'll change the sequence. It's not a big deal.

I don't know that one would "struggle" to modify this particular for-loop. And I can think of at least three five-minute solutions in C that don't require FP to structure a program so the pattern can be changed quickly if required.

iainmerrick
I promise you, a "proper" C programmer will do the same thing, faster. And I say this as an FP fan!

A bad C programmer will write horrible spaghetti code, but it will probably be enough to do the job. A bad Haskell programmer will get absolutely nowhere.

If you think pure FP is great for this stuff, I think you need to explain why imperative languages are regularly used to win the ICFP programming contest (https://en.wikipedia.org/wiki/ICFP_Programming_Contest#Prize...).

chii
> A bad Haskell programmer will get absolutely nowhere.

that's a feature, not a bug in my books! Maintaining code written by other bad programmers is the bane of my life (despite getting paid to do it, so I can't complain).

iainmerrick
Heh, there’s something in that.

Now I’m wondering what the worst mainstream language is for maintaining somebody else’s legacy code. C can definitely be pretty bad... but I’m thinking maybe Perl?

chii
you don't maintain perl - it's a write-once language. Every time you need to change it, you rewrite a new perl program to do exactly what you need ;D

the title of course, goes to javascript imho.

stepbeek
I'm at a very similar place to you at this point. It makes sense for FP to be good at "describing relationships of actions", since its base unit of reasoning is a function, i.e. an action.

The beauty of modern programming is that we don't have to stick to a pure example of either paradigm. We can use FP techniques where it makes sense and turn to imperative otherwise.

In your example, we could have a nice, purely functional model of an LED that enforces the invariants that make sense. We could then "dispatch" the updated LED entity to an imperative shell that actually takes the action. All without using the M-word!

I'm probably - unfairly - treating your example more seriously than you intended it, but I think I'm leading to the same conclusion as you at a slightly different place. I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side-effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.
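That "functional core, imperative shell" split can be sketched in Python; the `Led` model and `hardware_write` callback here are illustrative names, not any real API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Led:
    """Functional core: an immutable LED; invariants live in pure code."""
    pin: int
    on: bool = False

def turned_on(led: Led) -> Led:
    """Pure update: returns a new Led instead of mutating the old one."""
    return replace(led, on=True)

def dispatch(led: Led, hardware_write) -> None:
    """Imperative shell: the only place a side effect actually happens."""
    hardware_write(led.pin, led.on)
```

The core is trivially unit-testable with no mocks; only `dispatch` ever touches the outside world.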

> I use a purely functional language every day: it's called SQL

This is my favourite way to annoy FP advocates (despite probably being one myself). Every one is a closet mathematician in FP-land but no one wants to admit how beautiful relational algebras are.

WastingMyTime89
> I want to have a purely functional domain that I wrap in an imperative shell.

Like you, I too think the ML side of functional programming got it right. Sadly, their most popular language committed the unforgivable sin of not being written by Americans and is therefore condemned to never be as popular as Haskell. I console myself by using F# when I can.

stepbeek
My FP experience has mainly been with Scala (with haskell on side projects).

Is OCaml the ML language du jour?

WastingMyTime89
Yes, it's the most actively developed and the most feature-rich.

F# is another interesting ML. It has fewer of the features that make OCaml interesting, but it runs on .NET, so you have access to a ton of libraries.

SML seems more niche. I don't think it sees much use outside of academia.

gpderetta
Interestingly, a lot of very popular languages are not authored by Americans:

- Guido, of Python fame, is Dutch

- Stroustrup (C++) is Danish

- Lerdorf (PHP) is Danish

- Anders Hejlsberg (author of both C# and TypeScript) is also Danish

- Ruby is not as popular as it used to be, but Matz is Japanese.

- Java's Gosling is Canadian, but I'm not sure if that is the kind of American you had in mind

That covers a big chunk of Tiobe top 10. If anything Denmark is over-represented!

edit:

- Wirth (too many languages to list) is Swiss

Qem
I'd also add Roberto Ierusalimschy (Lua) and José Valim (Elixir) to this list, both from Brazil. But as a fellow commenter points out, place of birth is less important, when compared to how well the author is integrated into the anglophone old boys network of computer science.
WastingMyTime89
It's not about the nationality of the author; it's about where they worked from and with whom. Except for Ruby, which failed, all the languages you are talking about were developed in the USA.

Van Rossum moved to the USA in the 90s, got funds from DARPA, and went to work at Google quite quickly. Stroustrup developed C++ while at Bell Labs in New Jersey. Lerdorf moved to Canada as a teenager before going to work in the USA. Hejlsberg made C# and TypeScript at Microsoft in Seattle. Yukihiro Matsumoto could be an exception, but as you rightfully pointed out, Ruby always remained somewhat niche even after its move to Heroku in San Francisco. James Gosling is Canadian but did his PhD in the USA before developing Java at Sun. Wirth did his PhD at Berkeley before moving to Stanford, where he did most of the work on ALGOL W and what would become Pascal, and did multiple sabbaticals at Xerox PARC.

Tainnor
> Yukihiro Matsumoto could be an exception but as you rightfully pointed Ruby always remained somewhat niche even after its move to Heroku in San Francisco.

I'm not sure what you mean in terms of "its move to Heroku in San Francisco". Also, Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as discourse). However, I would argue that Ruby remained relatively niche outside Japan until it was discovered by DHH and used for the Ruby on Rails framework (to this date, it's somewhat hard to find work in Ruby outside of RoR). DHH lived in Denmark at the time but moved to the US shortly thereafter.

WastingMyTime89
When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.

> Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as discourse).

Ruby definitely is a niche language. I have never seen it used outside of the web, and it's pretty much always mentioned alongside RoR. That doesn't preclude success stories developed with Ruby from existing.

It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to Ocaml which was the "failure" I was mentioning initially despite being a nice language itself and seeing interesting development right now.

Tainnor
> When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.

I didn't actually know that, so fair enough. Still, I think DHH probably had a larger impact in popularising Ruby in the US (and, by extension, other parts of the world).

> Ruby definitely is a niche language. I have never seen used outside of the web

That's only if you consider the web to be "niche" and if you do that, then JavaScript is "niche" too.

It's true that Ruby outside of Ruby on Rails is somewhat rare, but several other successful technologies are also written in Ruby, for example:

- Homebrew (macOS package manager)

- Chef (server provisioning software)

- Vagrant (VM provisioning software)

- Cocoapods (iOS package manager)

> It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to Ocaml [...]

I think you're way off base.

Yes, Python is extremely popular and Ruby can't compare overall - although I have a feeling that Ruby still overtakes Python when it comes to web dev, but obviously Python is huge in other areas and is also not exactly niche in web either.

But Ruby is #13 on TIOBE, while OCaml doesn't even feature in the top 50. Github and Discourse are only examples, we could also mention Airbnb, Shopify, Kickstarter, Travis CI and many others. I've personally worked at several Ruby companies, in fact I maintain a small Ruby codebase even now at my current company (although it's not our main language), etc.

Ruby had huge momentum in the 2000s and even early 2010s. It didn't catch on in the enterprises much, true, but it was the cool thing back when everyone was annoyed at the complexity of Java EE or the mess that was PHP back then. Ruby was also the language Twitter was originally written in before they migrated to Scala. It lost a significant amount of momentum since then and basically all of the hype (people migrated to Node, then later to Elixir, Clojure and co. and some like me jumped back to statically typed languages once they became more ergonomic), but it's still maintained by quite a sizeable number of companies.

More than that, RoR had an outsized influence on the state of current backend frameworks to the point where I claim that even one of the most heavily used frameworks today, Spring Boot, takes a lot of inspiration from it (while, of course, being also very different in many areas). I would also argue that Ruby inspired Groovy, which in turn inspired Kotlin, and that internal DSLs such as RSpec also were emulated by a number other languages later.

gpderetta
On that I agree. As discussed elsewhere in the thread, this suggests that having a platform or sponsor is a strong contributor to a language's success.
TuringTest
> I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side-effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.

Functional Reactive programming is a very good way to create that mix. Web front-end developers have realized that, and that's the reason most modern frameworks have been slowly veering towards this model, with Promises and Observers everywhere.

When you represent state as an asynchronous stream that you process with pure functions, you get a straightforward model with the best of both paradigms.
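As a loose illustration (plain Python generators rather than a real FRP library), state-as-a-stream boils down to a pure fold over events:

```python
def scan(events, step, initial):
    """Fold over an event stream, yielding each successive state."""
    state = initial
    for event in events:
        state = step(state, event)
        yield state

# A click counter: each event maps the old state to a new one, purely.
clicks = iter([1, 1, 1])
states = list(scan(clicks, lambda total, n: total + n, 0))
```

Real FRP libraries add asynchrony and subscriptions, but the state transitions stay this pure.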

stepbeek
I like FRP, but prefer to imitate it in a synchronous manner now - I mainly work on the JVM and I've personally found debugging to be too painful when working asynchronously. If I need async then FRP is definitely the first tool in the toolchest that I'd reach for.

Elixir's pipe operator is a brilliant tool that I wish every language had. I mainly use kotlin day-to-day and definitely abuse the `let` keyword to try to get closer.

TuringTest
True, FRP doesn't need to be asynchronous; it's just a very good paradigm for supporting multi-process computation and module composition.

As I said above, it also happens to be very good at handling state without fear of side effects.

grumpyprole
People need to be familiar with both approaches. And there's silliness on both sides. I've seen influential imperative OOP programmers on Stack Overflow model a bank account using a single mutable variable, even when they surely know accountants use immutable ledgers.

Most imperative languages since Fortran contain declarative elements; otherwise we'd be adding numbers with side effects. Similarly, most FP languages offer imperative programming. But the real power of FP comes from its restrictions, and yes, query languages are one such (excellent) application. Config languages and contract languages are others.
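The bank-account point is easy to make concrete: model the account as an append-only ledger and derive the balance, instead of mutating one variable (a sketch, not any particular poster's code):

```python
def deposit(ledger, amount):
    """Pure: return a new ledger with the transaction appended."""
    return ledger + [amount]

def withdraw(ledger, amount):
    return ledger + [-amount]

def balance(ledger):
    """The balance is derived, never stored or mutated."""
    return sum(ledger)

# Full history is preserved, just like an accountant's ledger.
history = withdraw(deposit(deposit([], 100), 50), 30)
```

Auditing, replay, and "what was the balance last Tuesday?" all come for free from the immutable history.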

yomly
Your LED example is an interesting one. In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory: set those bits and the screen will render the pixels. The rest is hand-waved as hardware.

A pixel array can be trivially modelled as a pure data structure, and then you can use the whole corpus of transformations that are the bread and butter of FP.

For most consumers of a screen, a screen is as IO as it comes; we aren't peeking into its internals.

And for me, that's the point of FP - it's not that IO is to be avoided, it's about finding ways of separating your IO from the core logic. I loosely see the monad (as used in industry) as a formalised and more generic "functional core imperative shell"

Now when it comes to pure FP languages, they keep you honest and guide you along this paradigm. That said, it's perfectly possible to write very impure imperative Haskell - I've seen it with my own eyes in some of the biggest proprietary Haskell codebases

But imperative languages don't generally help you in the same way; if you want to do functional core, imperative shell, you need a tonne of discipline and a predefined team consensus to commit to it.

jacquesm
I don't think you could have made the GPs point any better for them.
yomly
I don't know. What was the GP's point? That FP people like to think too much and sometimes you just want to get stuff done?

Or that FP purists don't know how to actually build useful things? Trololol it took Haskell until the mid 90s to figure out how to do Hello World with IO

To be honest FP is a moving target but I see it as one of the mainstream frontiers of PLT crossing over into industry.

I can accept that to some, exploring FP is not a good fit for their business requirements today, but if companies didn't keep pushing the boat on language adoption, we'd still be stuck writing Fortran, COBOL or even assembly.

Once upon a time lexical scoping was scoffed at as being quaint and infeasible.

Ruby and Python were also once quaint languages.

Java added lambdas in Java 8.

Rust uses HM type inference.

So what was their point? That FP people spend too much time thinking and don't know how to ship? In which case - I'm grateful that there are people out there treading alternative paths in the space of ways to write code in search for improvement.

In any case their example was pretty spurious; anyone who's written real code in production knows IO boundaries quickly descend into a mess of exception handling because things fail, and that's when patterns like railway-oriented programming help developers contain that complexity.
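A toy version of that railway pattern, with a hypothetical parse-and-validate pipeline; errors are routed around later steps instead of raising:

```python
def bind(result, fn):
    """If the result is on the error track, skip fn; otherwise apply it."""
    tag, value = result
    return result if tag == "err" else fn(value)

def parse_int(s):
    try:
        return ("ok", int(s))
    except ValueError:
        return ("err", f"not a number: {s!r}")

def positive(n):
    return ("ok", n) if n > 0 else ("err", f"not positive: {n}")

def pipeline(s):
    # Two-track composition: the first failure short-circuits the rest.
    return bind(bind(("ok", s), parse_int), positive)
```

Each step only handles its own failure mode; `bind` does all the error plumbing.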

endymi0n
q.e.d.
yomly
Would love to know what has been proved? Very up for an open and honest discussion.

I'm back to writing imperative after years of functional. I think it is a very pragmatic choice today to go with an imperative language but I find class-oriented programming to be backwards and I think functional code will yield something more robust and maintainable given how IO and failure are treated explicitly. I'm not quite sure where the balance tips between move fast but ship something unmaintainable vs moving slower but having something more robust and maintainable.

Programming in a pure language is quite radical, it's a full paradigm shift so it feels cumbersome especially if you've invested 10+ years in doing something different. I'd liken it to trying to play table tennis with your off hand in terms of discomfort. There are plenty of impure functional languages around - OCaml, Scala, Clojure, Elixir.... And Javascript (!?!?)

FP is relatively new as a discipline and still comparatively untrodden. What if equal amounts of investment occurred in FP? Maybe an equivalent of that ease of led.turn_on will surface.

And tbh it probably just looks like a couple of bits - one for each LED and a centralised event loop. Which so happens to have been a pattern which works quite nicely in FP but emerged in industry to build some of the most foundational things we rely on...

TeMPOraL
> In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.

It was. I still remember the days.

It was nice to be able to put pixels on the screen by poking at a 2D array directly. It simplified so much. Unfortunately, it turned out that our CPUs aren't as fast as we'd like them at this task, said array having 10^5, and then 10^6 cells - and architecture evolved in a way that exposes complex processing API for high-level operations, where the good ol' PutPixel() is one of the most expensive ones.

It's definitely a win for complex 3D games / applications, but if all you want is to draw some pixels on the screen, and think in pixels, it's not so easy these days.

jacquesm
Screen real estate size in memory increased by square law while clock speed and bus speeds increased only linearly, it was pretty clear that hardware acceleration was the way forward by the mid-eighties when the first GDPs became available. I even wrote a driver for one attached to the BBC Micro to allow all of the VDU calls to be transparently routed to the GDP for a fantastic speed increase.
mmis1000
I think functional programming is less popular simply because people just aren't good at it.

Functional programming is a good way to describe how a system works: by describing the input, the processing, and the output, you describe the whole diagram.

In reality, though, people are just bad at thinking systematically. It's likely the whole education system never taught you how to do it.

Everyone was taught to do things step by step and smash the results together to see if it works, instead of preparing everything before doing any actual work. And when something does need preparing, there is usually a pre-made checklist so you can do it easily.

That type of thinking process isn't common in our daily lives, so of course no one is used to it.

But I think people should try it at least once. Even if you're still programming imperatively afterwards, it could benefit you a lot and make you a better programmer.

go_elmo
I agree with you, but I see another relevant reason: in FP, you HAVE to consider side effects 1) from the beginning and 2) completely, which, as anyone can guess, is quite a task.

In imperative code you can just ignore them and produce objectively worse code, since you're not even aware of all the possible side effects. And sure, for the LED project it wouldn't even matter, but the FP vs. imperative decision is then more of a design/quality criterion in general - the notion of one being better than the other is just wrong.

Also, a monad seems much more complicated when you don't really understand it, which makes judging FP by it a bit unfair.

jstimpfle
What is a side effect? Getting the time? Pushing a result to an output channel? A debug printf? Setting a flag to cache a computation as an optimization? Is it not: evaluating a thunk? Implicitly allocating some memory to store the result of a computation?

Haskellers are trained to have a very inflexible view of what a side effect is; it is dictated by the runtime / the type system. In my view, there are lots of things that Haskellers call "side effects" that I would just shrug my shoulders at, and also lots of things they do not call side effects that I do care about. It really depends on the situation.

This fixed dichotomy imposed by the language does more harm than good in my experience. NB: I'm aware that a computation that, for example, gets the system time will get a different time each time it runs. That does not mean that I _have_ to consider it a side effect. I usually have no good reason to run the procedure multiple times and expect the runs to be totally identical in every respect. In an imperative language, I have very precise control over when this procedure runs.

go_elmo
Apart from language-imposed limitations (Haskell is nowhere near the theoretical completeness of category theory, e.g. the bottom type), the "pure" nature of FP forces the use of abstract structures able to handle effects (e.g. monads). So, to write any code at all, you first need to think through even the possible side effects before you can write code containing them, which by definition is a stronger criterion for catching unwanted effects than imperative code, where you can produce whatever you want. And sure, it is in no way a guarantee of good code; it is just a stronger condition. Otherwise it effectively boils down to "assembly is just as good as C", and we see where that took us.

Anyone claiming to think things through just as rigorously in an imperative language is lying to themselves, unless they're actually verifying their code.

jstimpfle
I don't know, I'm positive I'm not part of the sacred circle, but just a data point. The Haskell applications that I've managed to produce were all uniformly slow-compiling and unmaintainable. And I promise it wasn't for lack of thinking about "side effects".

In my view, the problem is that functional languages give you a toolset to compose functions (code) by connecting them in structures. In Haskell, that is made harder by the restrictive type system (a very limited language for type computations) that you must champion, including a myriad of extensions, which invariably led me down dead-end paths that I didn't know how to back out of without starting all over.

But Haskell's restrictive type system aside, every programmer I consider worth their salt has understood that it's not about the code. Good programmers worry about the data, not the code. Composing code is not a problem for me; I just write one piece of code after another, and there isn't much else that is needed. I think about aligning the data such that the final thing the machine has to do is as straightforward as possible. Then the code becomes easy to write as a result.

The possibilities for designing data structures in Haskell are obviously limited by its immutability. Which is, quite frankly, hilarious. "State" is almost by definition central to any computation - and Haskell tries to eliminate it (which of course is only an illusion; in practice we're bending over backwards to achieve mutability). For Haskell in particular, which lacks decent record syntax, basic straightforward programming is often just not possible in my perception. I refuse to resort to a hard-to-use library like lenses to do basic operations that should be _easy_ to code.

Even though Haskell is popular, and many programmers (including me) go through a Haskell phase, I haven't seen many large mature Haskell codebases (I know basically about Pandoc; and ghc if a Haskell compiler counts). Why is that?

jstimpfle
As a concrete example, here is a video (and github link) of a concrete program that I'm currently working on, and that I think is not a bad program.

https://vimeo.com/605017327

I already have plans for improving it (especially the layout system), but overall it works pretty well and is reasonably featureful with little code. It's not perfect but "state" is certainly not a problem at all.

I can't tell you something like this can't be coded in maintainable Haskell, but I can tell you that _I_ wouldn't have managed, and googling around it doesn't seem like there are a lot of people who can do it.

mmis1000
I think trying to eliminate state, and replacing unnecessary state with getters, is generally a good thing. One of the biggest bug categories programmers encounter, `forgot to sync XXX`, can be eliminated entirely if you don't copy that state in the first place.

But eliminating all of it... just looks silly to me. You need state anyway, so why not write it in a sane way?
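The "derive it instead of storing it" idea can be made concrete. A minimal sketch (the `Cart`/`total` names are invented for illustration): instead of caching an aggregate alongside the data it is derived from - and having to remember to keep the two in sync - compute it on demand from the single source of truth.

```haskell
-- Storing both the items and their total invites "forgot to sync" bugs:
--   data Cart = Cart { items :: [Int], total :: Int }  -- two sources of truth
-- Deriving the total from the one source of truth eliminates that bug class.
newtype Cart = Cart { items :: [Int] }

-- A getter: always consistent with the items, by construction.
total :: Cart -> Int
total = sum . items

main :: IO ()
main = print (total (Cart [3, 4, 5]))   -- prints 12
```

If the derived value ever becomes too expensive to recompute, memoisation can be reintroduced in one place, rather than scattering sync logic everywhere.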

jb_s
It's been ages since I've used a "real" functional language but wouldn't it be nice to parameterise externalities like time, and have events occur "spontaneously" that force application state to update? Kind of like interrupts. Or now that I think about it... it sounds a bit like React where DOM events etc force application state to update (in a pure fashion)

Has this been done before?

So your LED function in pseudocode looks like

  ToggleLeds(leds, t): 
    for each LED
      LED.power = (LED.start + 1s) > t ? ON : OFF
And this is invoked from main() as follows

  main():
    ToggleLeds(this.LEDs, Events.Time)
Where Events.Time is some kind of event stream which allows the runtime to reevaluate main() and any other dependent functions each time it's updated

edit: And to sidestep the obvious performance issue with the function being reevaluated every few microseconds :D you would implement something like this

  main():
    ToggleLeds(this.LEDs, Events.Time(ms=1000))
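Yes - this is essentially functional reactive programming (FRP), and Elm's subscription model (e.g. `Time.every`) works almost exactly as described. A rough Haskell sketch of the same idea (the names `ledPower` and `ticks` are invented; in a real FRP runtime such as Elm's or reflex, the runtime itself produces the time events and re-evaluates the dependent functions): model the system as a pure function of time, and let a tiny loop feed it successive time values.

```haskell
-- A pure "view" of the system: LED power as a function of the current time.
-- Nothing here performs effects; it just maps a timestamp to a state.
data Power = On | Off deriving (Eq, Show)

-- An LED turns off one second (1000 ms) after its start time.
ledPower :: Int -> Int -> Power  -- start time (ms) -> current time (ms) -> state
ledPower start t = if start + 1000 > t then On else Off

-- A simulated event stream of time values, one per "tick".
ticks :: [Int]
ticks = [0, 500 .. 2000]

main :: IO ()
main = mapM_ (\t -> putStrLn (show t ++ "ms: " ++ show (ledPower 0 t))) ticks
```

The pure part is trivially testable, and "spontaneous" updates reduce to the runtime pushing new time values into the dependent functions - the same model React and Elm use for DOM events.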
dnautics
there are FPs that have and embrace side effects, not everything is haskell.
dwohnitmok
I have my share of gripes about Haskell (which I'm assuming is the language you have in mind when you're talking about a pure FP language), but even with the sarcasm disclaimer, this is a pretty extreme strawman.

This is the equivalent Haskell.

  turnOnThreeLEDs = for_ [1..3] (\i ->
    do
      LEDTurnOn
      threadDelay (10^6)
      LEDTurnOff
  )
or all in one line

  for_ [1..3] (\i -> do { LEDTurnOff; threadDelay (10^6); LEDTurnOff })
It looks basically the same.

EDIT: I would also strongly dispute the idea that FP is structured around flow instead of data structures. In fact I'd say that FP tries to reduce everything to data structures (this is most prominent in the rhetoric of the Clojure community, but it exists to varying degrees across all FP languages). Nor is SQL an FP language (logic programming a la Prolog, and therefore ultimately SQL, is very different from FP).

FP's biggest drawback is that to really buy into it, you pretty much need a GC. That also puts an attendant performance cap on how fast your FP code can be. So if you really need blazing-fast performance, you need at least an imperative core somewhere (although if you prefer to code in a mainly FP style, you can largely get around this by structuring your app around a single mutable data structure and wrapping an FP layer around everything else).
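The "single mutable data structure with an FP layer around it" pattern mentioned above can be sketched as follows (a toy with invented names - `AppState`, `step` - and a trivial state; a real application would hold a richer state type, possibly behind an `MVar` for concurrency):

```haskell
import Data.IORef

-- All mutation is confined to one IORef holding the whole app state.
data AppState = AppState { counter :: Int } deriving (Eq, Show)

-- Pure transition functions: easy to test, no IO anywhere.
increment :: AppState -> AppState
increment s = s { counter = counter s + 1 }

-- The thin imperative core: apply a pure transition to the shared state.
step :: IORef AppState -> (AppState -> AppState) -> IO ()
step ref f = modifyIORef' ref f

main :: IO ()
main = do
  ref <- newIORef (AppState 0)
  step ref increment
  step ref increment
  final <- readIORef ref
  print (counter final)   -- prints 2
```

All the interesting logic lives in pure functions like `increment`; only `step` and `main` touch mutable state, so the mutable surface area stays tiny.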

auggierose
Well, you DID sneak a monad in there :-)
chriswarbo
It's applicative, not monadic: the actions to take are static, they don't depend on the outcome of previous steps. Notice that `Monad` doesn't appear in the type of `for_`: https://hackage.haskell.org/package/base-4.15.0.0/docs/Data-...

Rant: Having types like `IO foo`, `Maybe bar`, etc. doesn't make something "monadic"; using bind/flatMap/join is what makes something monadic. The original imperative example used 'range(3)': if that returns a list like '[1, 2, 3]', would we call that "monadic"? After all, `List` is a perfectly good Monad!

(If the example was meant to be Python 3, they went a bit crazy with generators, so 'range(3)' would actually return a 'range(0, 3)' object. Haskell doesn't need to make such distinctions, thanks to laziness.)
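The applicative/monadic distinction here can be made concrete with `Maybe` (standing in for `IO`, whose structure can't be inspected purely): an applicative computation's shape is fixed up front, while a monadic one chooses its next step by inspecting an earlier result.

```haskell
-- Applicative: both actions are known statically; neither depends on the
-- other's result. 'for_' only needs this much power.
staticPair :: Maybe (Int, Int)
staticPair = (,) <$> Just 1 <*> Just 2

-- Monadic: the second step is *chosen* by inspecting the first result,
-- which requires bind (>>=) and cannot be expressed with <*> alone.
dependent :: Int -> Maybe Int
dependent n = Just n >>= \x -> if even x then Just (x * 10) else Nothing

main :: IO ()
main = do
  print staticPair        -- Just (1,2)
  print (dependent 4)     -- Just 40
  print (dependent 3)     -- Nothing
```

This is why `for_` can carry only an `Applicative` constraint: the list of actions is decided before any of them runs.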

tromp
Rather unavoidable when turnOnLED's likely type is IO () ...
auggierose
Rather unavoidable when you have types like IO() in the first place.
tromp
Use of that type is easily limited in Haskell code. For instance, in my chess counting project [1] only a few lines in Main.hs use IO (), while the other approximately thousand lines of code have nothing to do with IO ().

[1] https://github.com/tromp/ChessPositionRanking

mordae
So what?

The point is to make it easy to program imperatively (with effects, where relevant) while simultaneously reclaiming the ability to check for correctness and maintain laziness by default.

What's so good about implicit sequential evaluation? Shouldn't the effect ordering be explicit? Isn't explicit better than implicit?

auggierose
> Isn't explicit better than implicit?

That's the point, isn't it. No, explicit is not always better than implicit.

systemvoltage
What you wrote is imperative despite the fact that it’s written in Haskell.
dwohnitmok
I mean if pure FP is enough to write imperative code as well then the distinction between the two doesn't seem all that important to me. What would be a non-imperative equivalent to illustrate your idea?
Jensson
Small examples of mutating state in Haskell are about as meaningful as small examples of pure functions in Java. Small examples are really easy to write and not that ugly; bigger, more complex examples don't look so nice. The whole reason people want to use Haskell is that doing complex state mutation is horribly ugly and unergonomic, so people don't do it - that is a feature of the language.
TuringTest
So, you recognize imperative is a subset of functional? :-P

do-blocks have perfectly functional semantics, so if you consider that to be imperative as well, this means that a sequence of instructions changing state is both imperative and functional, as long as you declare where the state is being handled in your code.

And yes, of course functional code can handle state. The good thing about this 'Haskell imperative' style is that it doesn't fall prey to side effects, the bane of imperative programs (uncontrolled side effects are NOT a good thing). In Haskell, you control why and where you allow them.
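That "explicitly declared state" point can be illustrated with a hand-rolled state-passing type (the real one lives in `Control.Monad.State` from the transformers/mtl packages; this standalone version is just for illustration): the code reads imperatively, yet it is a pure function, and its type says exactly which state it may touch.

```haskell
-- A "stateful" computation is just a pure function from an input state
-- to a (result, new state) pair.
newtype State s a = State { runState :: s -> (a, s) }

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s' = State (\_ -> ((), s'))

-- Sequencing threads the state through explicitly; the plumbing is
-- hidden behind this bind-like operator.
andThen :: State s a -> (a -> State s b) -> State s b
andThen m k = State (\s -> let (a, s') = runState m s in runState (k a) s')

-- Reads imperatively ("read counter, write counter+1, return old value"),
-- but the only state it can modify is the Int declared in its type.
tick :: State Int Int
tick = get `andThen` \n -> put (n + 1) `andThen` \_ -> State (\s -> (n, s))

main :: IO ()
main = print (runState tick 41)   -- prints (41,42)
```

A do-block over a real `State` monad desugars to exactly this kind of chain, which is the sense in which 'Haskell imperative' code keeps purely functional semantics.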

jcelerier
One could also make a language with exactly the same visual syntax as C, where ';' is specified as a function composition operator instead of a separator between instructions. These kinds of mind games are pointless - if your code is sequencing instructions, it's imperative; if it's denoting, it's functional.
TuringTest
If you do that, then you have to admit different kinds of imperative code: C-imperative style that can modify any state in the application as side effects, and Haskell-imperative where you can only modify state explicitly declared as input to the procedure.

It's not just mind games, the difference has very real implications to the architecture of the whole program and the control you can exert over unpredictable side errors.

PennRobotics
Without knowing Haskell, it looks like there are bugs in the code. Specifically, there's no delay after LEDTurnOff in the first example, and you have the same function name twice in the second example.

If those are bugs, I'd forgive that. If those AREN'T bugs, then keep me far, far away from FP!

Also, what is the point of i? Clearly, each LED should have its own index, but then i is never used again. (I understand this could be pseudocode or there's a lot of other code not included.)

And the ranges are inclusive in Haskell? I feel like a lot of the friction between Matlab and Python involves how each language's indexing/slicing/ranges are represented, so it's interesting to see each language's approach (indentation like Python, lower camel case, delays in microseconds, etc.) --- but with every language difference, I'm personally less inclined to learn something new without a great reason.

dwohnitmok
Ah yes you are totally right.

I misread the initial example:

  turnOnThreeLEDs = for_ [1..3] (\led ->
      do
        turnOn led
        threadDelay (10^6)
        turnOff led
  )
It should be the above (i is changed to led), where I thought the original was automatically going to a new led and didn't realize that `led` was actually an integer and `turn_on` and `turn_off` are basically pseudo-methods (or extension methods). (The original code also only sleeps after turning an LED on, not off)

Indeed, the second example has a typo: it should have been turn on, then off.

  for_ [1..3] (\led -> do { turnOn led; threadDelay (10^6); turnOff led })
The joys of writing code on mobile and too much copy pasting.

`i` is the same thing as `for i in...`.

Also, yes, ranges are inclusive.
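For anyone comparing with Python, the range conventions mentioned above side by side (Python's `range(3)` yields 0, 1, 2; Haskell's `[1..3]` is inclusive on both ends):

```haskell
main :: IO ()
main = do
  print [1..3]          -- [1,2,3]  (inclusive on both ends)
  print [0..2]          -- [0,1,2]  (the same elements as Python's range(3))
  print (take 3 [1..])  -- [1,2,3]  (ranges can even be lazy and infinite)
```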

PennRobotics
Thanks for all the info. I want to give FP a proper try one day, and there are many different roads, but it's always a rocky start for me with a new language. Having a clear translation from one to the other is important, so I'm glad you updated this.

True that this matches the original example. I guess my mind filled in the second delay automatically when it noticed, "this isn't gonna blink to the naked eye!"

This talk by Richard Feldman is an excellent discussion about why Functional Programming is not the norm -

https://www.youtube.com/watch?v=QyJZzq0v7Z4

Jun 30, 2021 · 2 points, 0 comments · submitted by hyperpallium2
hyperpallium2
An engaging speaker, but the reasoning is often faulty (NB just talking about the arguments, not the conclusions).

  "C with classes" not popular, C++ with extras popular -> it's the extras.
From a<c and a+b>c it does not necessarily follow that b>a; b could just be the straw that broke the camel's back. (Concretely: a=10, b=1, c=10.5 satisfies both, yet b<a.)

That said, I knew a few people who only liked C++ for its non-OO features, and I myself write C-style code in Java (for pointers, memory safety, and excellent error messages). So his conclusion seems largely right, just not his reasoning.

Apr 07, 2021 · 3 points, 0 comments · submitted by manx
This is indeed a very interesting question! I've also asked myself that time and time again, along the path of slowly discovering what this FP thing is all about over the course of my career. It does look wonderful, and it does save you a lot of effort in some cases, where "some" covers a vast swathe of computational problems. And it's so damn elegant.

A lot of examples are thrown around where a bunch of lambda calculus can result in a 10-20 fold reduction in source code size. So where's the catch? There has to be something wrong, right?

Well, recently I stumbled upon a Clojure conf talk that tried to tackle precisely this question: https://youtu.be/QyJZzq0v7Z4

The short of it is that it's more historical inertia and chance than any specific hidden gotcha. Fascinating talk.

jahaja
I'm sorry but this all strikes me as rather delusional.

The reason functional languages don't take off is that programming languages are tools, not an end in themselves. Carpenters obsessed with perfecting their power tools would rightly be out of a job in short order. Furthermore, most of these "perfections" aren't even virtuous - in the sense of searching for improvement - but just self-indulgence and aesthetics.

Functional languages are dense, hard to read, and carry a high cognitive load, and are thus unproductive beyond the proverbial garage startup. They are also designed and used mostly by the aforementioned power-tool decorators rather than practitioners, with the inevitable end result.

discreteevent
"Domain work is messy and demands a lot of complicated new knowledge that doesn’t seem to add to a computer scientist’s capabilities. Instead, the technical talent goes to work on elaborate frameworks, trying to solve domain problems with technology. Learning about and modeling the domain is left to others. Complexity in the heart of software has to be tackled head-on. To do otherwise is to risk irrelevance." - Eric Evans
seer
Totally understand the sentiment. I was very dismissive at first, as a self-taught dev. It's really hard for me to learn something unless it helps directly solve a concrete business problem I'm facing, so I know where you're coming from.

I think my epiphany came when I was trying to learn some Clojure during a couple of weeks' sabbatical from work.

As a mostly js/frontend/react dev at the time, I was deep into the React ecosystem for building SPAs. I knew and largely disliked the amount of code a redux-based app would require in order to be "complete", with all the data fetching and state management (this was before hooks and contexts). And then I saw re-frame, which implements a lot of the features of that whole ecosystem in like 100ish lines of code.

That's BS, I told myself. Things are missing! It cannot be just this code in front of me. But no, it was all there, just elegant and clear. All of the "meat" of the logic distilled to its essence, without any of the boilerplate.

It was just that I had to learn all the "higher level" stuff around FP, but it turns out those things are shared and repeated in lots of places, so once you index them in your brain, you can read the business logic itself without all the additional "fat" of the code. Simply beautiful.

jahaja
> As a mostly js/frontend/react dev at the time I was deep into React ecosystem for building SPAs.

Not to be too dismissive but I think that's the problem. Modern SPAs with React et al are very bloated and imo a massive wrong turn compared with the simple boring tech that was before it, with very little gain to show for it as well.

Keep in mind that what may seem like "personality fit" may be more related to "learned imperative programming first".

As an analogy, depending on which spoken language a person learned first (as a child, automatically), learning some new languages may be particularly difficult or quite natural.

If we had all learned functional programming concepts first, our minds would be trained to think that way. We all learned functional (declarative) math, and we don't even think about it as such. It's just math.

This video gives some plausible explanations as to why imperative programming is more common than functional: https://youtu.be/QyJZzq0v7Z4

tabtab
That may be part of the problem, but it also may be true that some minds simply work better under one paradigm than another. Since one must show proficiency in imperative programming to get into the field, there's a filtering mechanism that guarantees most programmers can already handle imperative code. If they switch to functional, some may not handle it so well, taking much longer to learn.

Observations of various organizations that tried it seem to confirm this: they have problems after the original team leaves.

This was an interesting talk on languages https://youtu.be/QyJZzq0v7Z4
Richard Feldman gave an interesting talk on this in 2019 [0].

He was trying to answer why functional programming isn't mainstream but the opposite is what we're posing here and I think it's for many of the same reasons.

Network effects: a lot of these languages were mainstream earlier on and got buy in from a lot of folks who are invested in their continued adoption and success.

Platform exclusivity: some companies push their own language ecosystems on their platforms and those languages are often OOP/multi-paradigm languages.

Specifically to OOP we have to ask why is the particular style of C++/Java OOP the norm? Why didn't Eiffel, Smalltalk, or Self reach the same market?

[0] https://www.youtube.com/watch?v=QyJZzq0v7Z4

edit: forgot the link -- oops.

formerly_proven
Functional programming was arguably the first type of higher-level PL to be invented, so it should have had the most opportunities for network effects.
Scarbutt
The problem was computers weren't ready for FP at the time, so C won.
azangru
There is also his more recent talk, The Next Paradigm Shift in Programming, obviously also praising FP, but with an even better historical overview of the previous paradigms and their contributions:

https://youtu.be/6YbK8o9rZfI

> If in the next fifty years more than, say, 3% of programmers will be programming in a language like Lean, I will eat my hat.

You're not even putting any stakes on the table. That's such a safe bet it's unlikely you'd even have a chance of losing, the odds are so stacked.

There's a great talk on why functional programming is not the norm: https://www.youtube.com/watch?v=QyJZzq0v7Z4

The short of it is: network effects. Haskell, OCaml, SML... they've never had the benefit of being the exclusive language to a popular platform like the web, macOS, or UNIX. They've not yet had a killer app like Rails or Numpy/SciPy/PyTorch. They've certainly not been foisted on the market with a war chest of marketing dollars like Java and C#. And they're not an incremental improvement on the status quo with an obvious, easy benefit.

> Now, try to explain to a programmer what kind of object id is

That's a rhetorical exercise. You can take a random snippet of APL or Fortran and the outcome would be the same. Every language requires training and effort to learn.

I teach people Haskell and TLA+. If you think teaching people the value of a language like Haskell is hard you should see the blank stares I get on the first day teaching experienced developers TLA+. Interested folks will come around eventually and have their moment but it takes training and dedication to get there.

> was chosen not with the best interests of programmers in mind

I'm well aware. I converse regularly with the maintainers of mathlib and the community. I know the focus there is on building a good tool for mathematicians.

I still think they built an interesting programming language even if that wasn't their intended purpose. That's why I started writing the hackers' guide. The documentation on the programming side is missing and needs to be filled in. They still need programmers to build the tools, the VM, the libraries, etc.

It's niche. And I'm not even the least bit surprised you find it horrible. I don't expect Lean to take off with programmers. I think the ideas in it are good, and there are definitely some things we could take from it when designing the dependently typed programming language that does penetrate the market and overcome the network effects.

Either way, I think the people proposing these challenges for Haskell programmers have good reasons to do so; otherwise we would have a hard time finding the motivation. We want to see where algebraic effects go, because new users often complain that monad transformers are hard to learn, and users of mtl cite many problems that a well-done algebraic effect system would alleviate. Dependent types would remove the complexity of writing programs that work with types, and would let us write more programs with more expressive types. If you don't see why that's useful, then what do you think the community should be focusing on?

pron
> There's a great talk on why functional programming is not the norm

It's a great talk in the sense that it satisfies people who want excuses. The real reason is similar to the one in the talk, but more disappointing: over decades, functional programming was not found to be either significantly better or significantly worse than other paradigms in productivity or similar metrics, while for many years imperative paradigms enjoyed a significant advantage in runtime performance, so there's really no reason to switch. I honestly couldn't care less -- I like, and hate, OOP and FP equally -- but paradigms exist to serve practitioners, not the other way around.

One thing is clear: you cannot at once say you're interested in serving the research of typed lambda calculus -- a worthy research goal -- and also in advancing the state of the art in the software industry, which is another worthy goal, but a very different one. If you're interested in the latter you must conduct some sort of study on the problem at hand; if you're interested in the former, then you have zero information to make a claim about the latter. In other words, you can either want to advance programming or advance typed FP; you cannot do both at once, because they require very different kinds of study. If you're for typed FP, then that position is no less valid than being for OOP, or any other paradigm some people like, but it's not more valid, either.

> And they're not an incremental improvement on the status quo with an obvious, easy benefit.

... or at all.

> That's a rhetorical exercise. You can take a random snippet of APL or Fortran and the outcome would be the same.

I totally disagree here. Simplifying ideas makes you understand what is essential and what isn't, and is absolutely required before some idea can get out of the lab and become a useful technology in the field. If the most basic subroutine represents some object in a meta-universe, that even 99% of mathematicians have no clue about -- you're off to a bad start with engineers. The use of constructive type theory as a foundation is interesting to logicians who want to study the properties of constructive type theories and of proof assistants based on them. It is nothing but an unnecessary complication for software developers. As you point out, there are often some necessary complications, so there's really no need to add unnecessary ones.

BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that). It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.

> Every language requires training and effort to learn. ... Interested folks will come around eventually and have their moment but it takes training and dedication to get there.

But why get there at all? I'm interested, I learned some Haskell and some Lean, I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers; I enjoy it because I'm interested in some theoretical aspects. But why would I subject other developers who aren't particularly interested in a certain paradigm to suffer until they "get there" just for the sake of that paradigm? Again, paradigms exist to serve practitioners, not the other way around.

> If you don't see why that's useful then what do you think the community should be focusing on?

I think the community should be honest about what Haskell is: a testing ground for ideas in the research of typed functional programming. Then it should consider measuring some results before disappearing down its own rabbit hole, where it's of little use to anyone. Algebraic effects could be an interesting experiment at this point. I don't really care about the pains of all 300 Haskell developers with monad transformers, but if Haskell tries another approach to IO, it might eventually find a generally useful one; the same goes for using linear types instead of monads. But dependent types? They're currently at the stage where even their biggest fans find it hard to say more good things about them than bad. If Haskell wants to serve software by testing new ideas in typed functional programming, I think it should at least focus on ones that might still be in the lab but are at least out of the test tube.

agentultra
> and also in advancing the state-of-the-art in the software industry, which is another worthy goal, but a very different one.

The state of the art doesn't advance until practitioners share the results of adopting new techniques and tools. How else would the SWEBOK guide [0] have a section on unit testing if no studies had produced results correlating the practice of TDD with a reduction in software errors, with productivity, or with some form of correctness? Is it irrational of us to practice TDD anyway?

How do we know formal methods are effective if there hasn't been a large scale study?

An interesting example of this balance between research and practice came to me from learning about the construction of cathedrals, particularly Gaudí's funicular system [1]. Instead of conventional drawings he used hanging models of the structures. It was not a widely practiced or proven method, but it enabled him to reduce his models to 1/10th scale and accurately model the forces in complex curves. As he shared his work and others studied it, the state of the art advanced.

> .. or at all.

> I think Haskell is OK -- not much better or much worse than most languages, but not my cup of tea -- and Lean is very interesting for those interested in formal logic, but offers little to engineers

I think we're at, or approaching, the point where we're talking past each other.

I heard the talk from the Semantic team at GitHub, and their experience report with algebraic effects sounded positive [2]. They are probably the largest production deployment of a codebase exploiting algebraic effects. Their primary area of focus is semantic analysis of ASTs. And yet they would like to continue pushing the state of the art with algebraic effects because, in their words, "In Haskell, control flow is not dictated by the language, but by the data structures used." Algebraic effect interpreters enable them to interpret and analyze unsafe code without having to maintain buggy, difficult-to-reason-about, exception-based workflows. They say this singular feature alone is essential for rapid development.

Yes, they're doing research. However their research isn't in answering hypothesis about effect interpreters. It's directly contributing to an interesting feature in one of the largest code-sharing platforms in the world. They were able to exploit algebraic effects so they could focus on their research, their real-world problem, without having to fuss around with the limitations of conventional mainstream languages.

I do wish more companies using Haskell would share their experiences with it. It would go a long way to dispelling the pervasive view that Haskell is only good for research. It's not: it's a practical, production ready language, run time, and ecosystem that can help anyone interested in using it to tackle hard problems.

In a similar fashion, I think dependent types would make working with types in Haskell easier. Instead of having to learn the different mix of language extensions you need to enable in order to work with inductive type families, it could be more like Lean, where it's a normal data declaration. Haskell introduces a lot of extra concepts at the type level because it has two different languages, one for terms and one for types. I could simplify a lot of code in libraries I've written if Haskell had dependent types (and I'm working on libraries in Lean that may eventually prove it).

It may take a while, but I think Haskell will eventually get there. The community has been discussing and researching it for years. It's going at the pace it is because the Haskell community is not focused on pure research: there are a lot of production applications and libraries out there, and they don't want to break compatibility for the sake of answering interesting research questions. Despite your opinions on type theory, there's a real, practical reason I might exploit Rank-2 types or Generalized Algebraic Data Types, and it's not simply to answer a pressing hypothetical question: I've used these in anger.
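As a hedged illustration of the practical payoff of GADTs (my own example, not taken from any of the libraries mentioned): a small typed expression language whose evaluator needs no runtime tag checks, because ill-typed expressions cannot be constructed in the first place.

```haskell
{-# LANGUAGE GADTs #-}

-- The type index ties each constructor to the type it evaluates to,
-- so e.g. `Add (LitB True) (LitI 1)` is rejected at compile time.
data Expr a where
  LitI :: Int  -> Expr Int
  LitB :: Bool -> Expr Bool
  Add  :: Expr Int  -> Expr Int -> Expr Int
  If   :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- No error cases needed: the types guarantee evaluation cannot go wrong.
eval :: Expr a -> a
eval (LitI n)   = n
eval (LitB b)   = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e
```

For example, `eval (If (LitB True) (Add (LitI 1) (LitI 2)) (LitI 0))` is `3`, and there is no "impossible case" branch anywhere in `eval`. That is the kind of concrete win that motivates using these extensions outside of research.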

I hope some of this answers your question as to why we should get there at all.

[0] https://www.computer.org/education/bodies-of-knowledge/softw...
[1] https://en.wikipedia.org/wiki/Church_of_Col%C3%B2nia_G%C3%BC...
[2] https://github.com/github/semantic/blob/master/docs/why-hask...

pron
What I'm trying to say is simple: personal aesthetic preference, empirical results and research questions are all very important, and every one of them is a reasonable motivation for making decisions, but they are very different from one another, and should not be conflated.

As to Haskell, it is definitely production-ready, and usable in a variety of circumstances. Also, its design and evolution are just as definitely primarily guided by research goals. These two simple, obvious, facts are not in opposition to one another. I have read numerous articles on why some people like Haskell, or many other languages. They are very important: some people like the tracking of "side effects"; some enjoy the succinctness. None of those articles, however, have any bearing on other questions. Which are not necessarily more important, but they are very different.

hwayne
> BTW, you mentioned TLA+. TLA+ also requires some effort, although significantly less than Lean (or Isabelle, or Coq), but TLA+ has made a historical breakthrough in formal logic. From its inception in the late 19th century, and even from its first beginnings in the 17th, formal logic aspired to be a useful tool for practitioners. But time and again it's fallen far short of that goal, something that logicians like John von Neumann and Alan Turing bemoaned. AFAIK, TLA+ is the first powerful (i.e. with arbitrary quantification) formal logic in the history of formal logic that has gained some use among ordinary practitioners -- to paraphrase Alan Turing, among "the programmers in the street" (Turing said that formal logic failed to find use for "the mathematician in the street," and tried, unsuccessfully, to rectify that).

I will bet you 100 USD that TLA+ had less than 10% of the users of Z before the AWS paper.

> It was no miracle: Lamport spent years trying to simplify the logic and adapt it to the actual needs of engineers, including through "field research" (others, like David Harel, have done the same). That is something that the logicians behind other proof assistants seem to be uninterested in doing, and so their failure in achieving that particular goal is unsurprising.

I will bet you another $100 that the success of TLA+ has nothing to do with making it "accessible" and everything to do with AWS vouching for it. If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?

pron
So what? TLA+ still achieved something that no other formal logic had before it (AFAIK). If you mean to say that Lean could do the same, I think spending a month with it will disabuse you of that notion, despite the fact that it is more similar to programming and despite its more programming-like tooling that programmers of certain backgrounds might appreciate.

Amazon may not have used TLA+ if not for TLC, but very good model checkers have existed for decades, have been used successfully in industry, and still have not made any inroads into mainstream software.

> If reaching the actual needs of the engineers was so important, why is the CLI tooling in such a bad state?

For one, resources are limited. For another, you assume that the best place to invest effort in order to help developers is in CLI tooling. I, for one, disagree. I think building a model-checker that works well is a higher priority; I also think that designing what is possibly the first "utilized" logic in the history of formal logic is also more important. Even with those two, there are other things that I think deserve more attention, because I think TLA+ has much bigger problems than how you launch some program.

I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is. Having said that, there can certainly be some reasonable disagreement over priorities.

hwayne
> So what? TLA+ still achieved something that no other formal logic had before it (AFAIK).

What I'm saying is that either TLA+ wasn't the first (Z was earlier), _or_ TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.

> because I think TLA+ has much bigger problems than how you launch some program. [...] I also think that some of the complaints about the tooling might be about something else altogether, but I'm not entirely sure what it is. Simply doing what beginners think they want is rarely a good strategy for product design. You get the feedback, but then you have to understand what the true issue is

When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.

I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.

pron
> TLA+ wasn't the first (Z was earlier)

Was Z ever used outside high-assurance software (or the classroom)? BTW, while Z certainly employs a formal logic, I don't think it is a formal logic itself -- unlike TLA+ -- at least not in the common usage of the term.

> TLA+'s success has much less to do with anything especially accessible about it and more because a high profile company wrote about it.

But for a high-profile company to write about it, it first had to be used at that high-profile company for some mainstream software. My point is that being an accessible logic may not be a sufficient condition for adoption, but it is certainly a necessary one, and an extraordinary accomplishment. Both Isabelle and Coq have had some significant exposure, and yet they are still virtually unused by non-researchers or former researchers.

> When I asked "what would make TLA+ easier to use", tons of people mentioned better CLI tooling. You told them all that they don't actually need that and they are thinking about TLA+ wrong. Maybe you aren't actually listening to the feedback.

Maybe, but "listening to feedback" is not the same as accepting it at face value. The vast majority of those people were either relative beginners or not even that. To understand the feedback I need to first make sure they understand what TLA+ is and how it's supposed to be used, or else their very expectations could be misplaced. I've written my thoughts on why I thought the answer to that question is negative here: https://old.reddit.com/r/tlaplus/comments/edqf6j/an_interest... I don't make any decisions whatsoever on TLA+'s development, so I'm not the one who needs convincing, but as far as forming my personal opinion on the feedback, what I wrote there is my response to it. It's certainly possible that either my premise or my conclusion is wrong, or both. It's also possible that both are right, and still better CLI support should be a top priority. That is why I would very much like to understand the feedback in more depth. If you like, we could continue it on /r/tlaplus (or the mailing list), where it will be more easily discoverable.

> I've also heard it from tons of experienced TLA+ users that they'd like better CLI tooling.

Sure, but the question is, is that the most important thing to focus efforts on? Anyway, I'm sure that any contribution that makes any part of the TLA+ toolset more friendly, even if for a subset of users, would be welcomed and appreciated by the maintainers, as long as it doesn't interfere with what they consider more important goals.

Yes, any new language needs to start somewhere, but it’s interesting to ask whether we need new languages for existing domains, and whether they have any chance of success to begin with.

Consider that by many accounts JavaScript, Java, and Python basically dominate in terms of number of developers and mindshare. These languages all arrived 25-30 years ago. I know some rankings claim C/C++ are far more popular than JavaScript but I just haven’t seen evidence of that in recent years.

Other widely used languages include C, Objective-C, C++, C# and Ruby. The first three are older than the aforementioned languages; Ruby is from the same era as Java/JavaScript, and C# isn’t far behind that.

In terms of newer languages, the only ones I can think of offhand that seem to have serious traction are Swift and to a lesser extent Go. Some people might throw Kotlin in there, but I’m not sure how much actual traction it’s getting. Rust has been gaining traction slowly for ~8+ years and I’m hopeful it has a bright future but I don’t see a lot of actual projects using it today.

The point is that it’s really hard to succeed if your primary reason to exist is to make existing practice slightly better. It’s very hard to displace any of the existing languages in the space of general purpose programming.

You need to have libraries. You need to have at least syntax coloring support for several editors. You need language server support for VS Code. You need to generate DWARF debug info. You probably need a repl or website that allows interactively playing around with the language to get people started. You need enough mindshare that teams can actually hire from a pool of candidates once they get a project underway. You need a compiler with good diagnostics, books, StackOverflow answers for the top N questions people will run into, bloggers who are enthusiastic, etc.

Someone posted this on HN recently and I think the speaker does a good job of breaking down the challenges and examining why some languages have found success while others (notably functional languages) have languished on the sidelines:

Why isn’t functional programming the norm? https://www.youtube.com/watch?v=QyJZzq0v7Z4

Silhouette
The point is that it’s really hard to succeed if your primary reason to exist is to make existing practice slightly better. It’s very hard to displace any of the existing languages in the space of general purpose programming.

I don't disagree (though FWIW I do disagree with some of the points you mentioned afterwards as "needs").

This is a real problem, though. If we can't manage to adopt a tool that would make us 50% more productive than what we use today, which was the example scenario under discussion, just because of momentum, then our entire industry is going to languish in substandard results forever. We'll continue churning out code that isn't as reliable as it could be and that doesn't perform as well as it could, at vast cost to our ever more technology-dependent society, all because we couldn't get our act together and learn something new.

I am somewhat more optimistic than you seem to be about the viability of doing this. New languages don't magically become popular overnight, but within the past decade or so, we've seen the likes of Go, Rust and Swift become somewhat established, certainly enough to use for real work and build a significant community and ecosystem. In the world of web development, we've seen incremental but very significant changes in JavaScript, but also serious traction for derivatives like TypeScript and significant interest in more specialised tools like Elm.

It's worth noting that in almost all of these cases, there was a "killer application" for the language that did set it apart from what had gone before. Some of them might have become general purpose languages with time, but usually there was some more specific focus at first while they were building up a critical mass of support.

galfarragem
Stackoverflow jobs (2019-12-30) tagged as requiring:

kotlin: 161/5224 | go: 148/5224 | swift: 97/5224 | rust: 6/5224

greggirwin
Another way to look at it is that you need a reason for people to use your language, not features. C didn't have tooling to start, JS didn't have an ecosystem, VSCode is only a few years old, so there is the short view, and the long view. It's nice if you live long enough to see your work appreciated, and perhaps even benefit from it. Think true artist versus imitator. Engineers would say "Design for what people think they want, based on what they've seen before, or design for what people need, but they don't know it yet."

Familiarity helps, which is why so many langs follow historical syntactic and semantic rules; and what makes it hard on languages that are either different themselves, or target a niche domain. This includes languages whose paradigm is harder for mere mortals (the vast majority of us) to grasp.

There is also luck and timing. Backing from a big company doesn't hurt, but those langs (Swift and Go) were designed for their owners' needs, not everybody else's.

There is a great irony here, which is that many lessons from the past have been forgotten, and ideas which would have helped us as developers, and therefore the world, aren't widely used.

Clio doesn't look like my particular cup of tea, but they're trying to solve important problems, and I support them in that.

“Why isn’t functional programming the norm?” by Richard Feldman. Spoiler: not on the basis of merits. https://youtu.be/QyJZzq0v7Z4

“React to the future” by Jordan Walke. Why ReasonML is a logical extension of ReactJS’ programming paradigm. https://youtu.be/5fG_lyNuEAw

“Typing the untyped: soundness in gradual type systems” by Ben Weissmann. The trade offs that various gradual type systems make based on their language constraints. https://youtu.be/uJHD2xyv7xo

“Let’s program like it’s 1999” by Lee Byron. How the mutual feedback loop of abstraction, syntax and mental model drives the evolution of web technologies. https://youtu.be/vG8WpLr6y_U

Pandabob
Here's the secret agent ad by Sun which Feldman mentions [1]. These Sun ads are really quite entertaining, and the production values must have been through the roof at the time [2][3][4].

[1]: https://www.youtube.com/watch?v=NVuTBL09Dn4

[2]: https://www.youtube.com/watch?v=cfwMMI7hqns

[3]: https://www.youtube.com/watch?v=njnNVV5QNaA

[4]: https://www.youtube.com/watch?v=AP4FgXOlMh0

herbstein
> Why isn’t functional programming the norm?

This talk is very good. It's one of the few talks that I've overheard classmates talk about. It not only asks a question a lot of people exposed to functional programming at university ask, but also answers it in a way where you learn more about the world of programming and programming languages than you expected.

hardwaregeek
Maybe I'm missing something, but I'm more than halfway through the "Why isn't functional programming the norm?" talk and it just seems to be a kind of haphazard recollection of programming language history, a lot of which isn't what I'd call entirely correct. Python's killer app was arguably first CGI scripts, then data science. Java succeeded due to offering GC in a non-scripting language, the JVM, and possibly lots of marketing. PHP is having a mild renaissance with Laravel (not that I'd advocate for PHP, but people do seem to love Laravel).

There was quite a bit of time between the invention of implementation inheritance and the whole "prefer composition to inheritance". It's quite possible OOP became popular due to implementation inheritance, and only later did people realize that was a mistake.

This info is still useful, but what I'd really love from a talk with that title is an analysis of functional programming languages and how each missed the boat through syntax, lack of tooling, or purity. And compare it to functional-ish languages like Rust, JavaScript, Swift and Kotlin. Then chart a way forward for functional programming language adoption. Maybe that happens at the end of the talk.

kopos
A complete digression, but OOP still shines in the domain of GUI widget programming, where there are a limited number of interfaces and a huge number of widgets (implementations) working with those interfaces. FP works conversely, on limited data and a huge set of functions. Maybe now, with less GUI programming going on, FP is more suitable?
dnautics
Functional ui is arguably saner, as react is slowly proving to junior programmers worldwide.
MaxBarraclough
> seems to be a kind of haphazard recollection of programming language history

Agree. The talk is very thin on the real differences between OOP and functional languages.

This old comment [0] points out that functional languages tend to make it far harder to reason about low-level details, for instance.

Personally I think it's more fundamental, and isn't about any such technical limitations. People have a strong intuition for time, which is emphasised in imperative languages (including OOP), which have the semicolon operator or an implicit equivalent. The concepts at play in the fundamentals of Haskell are simply harder, and 'more mathematical', than the sequenced mutation-based statements of imperative/OOP languages.

To put that more provocatively: does anyone doubt that the average Haskell programmer is smarter than the average JavaScript programmer? I'm not convinced this is just because only the curious bother to learn Haskell.

[0] https://news.ycombinator.com/item?id=21281004

In the market of ideas, there usually isn't any particular reason that makes good ideas unpopular. Obscurity is simply the default outcome.

Some good ideas eventually make it to mainstream / conventional wisdom because there's some second factor that brings them to the spotlight. For languages it might be a killer app, big corporate investment, etc.

In https://www.youtube.com/watch?v=QyJZzq0v7Z4 Richard Feldman puts up a (well founded IMO) argument that OO became accidentally popular because of C++. C++ didn't do OO well, nor was OO the main benefit of C++, it just happened to be something that came with C++.

This is a recent talk on the topic by Richard Feldman: https://www.youtube.com/watch?v=QyJZzq0v7Z4
I know the answer, thanks to watching a recent video by Richard Feldman on why functional programming isn’t popular yet. I recommend watching it for some interesting talk about programming languages in general, it’s on YouTube at https://m.youtube.com/watch?v=QyJZzq0v7Z4

Anyway what privates give you is what everyone wants from a programming language: modularity.

We want to be able to expose an api to some code and for it to have some inner workings that we don’t see because:

1. If we know they can’t be touched we have some implicit guarantees about our program. E.g. the mouse pointer cannot go off screen even if we try to make a call to do that.

2. It is simpler for a programmer to understand the API, which might be 1% of the size of the module itself with all its privates.

3. I can change my privates without breaking your calls. Essentially a minor version change in SemVer parlance instead of a major breaking change.

4. Make impossible state impossible! By guarding the state.

All these things require modularity, and OO uses classes and private members to provide that modularity.

Elm uses modules and the exposing keyword (or lack of!) as a contrast.
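Haskell works the same way: a module's export list is the access control. A minimal sketch of the "make impossible states impossible" point — the `Percent` type and its functions here are invented for illustration:

```haskell
-- In a real codebase this would live in its own module, exporting only
-- the abstract type and the smart constructor:
--
--   module Percent (Percent, mkPercent, getPercent) where
--
-- Because the Percent data constructor is NOT exported, callers cannot
-- build an out-of-range value; every function receiving a Percent gets
-- the 0..100 invariant for free.

newtype Percent = Percent Int deriving (Eq, Show)

-- The only way in: the invariant is validated once, at the boundary.
mkPercent :: Int -> Maybe Percent
mkPercent n
  | n >= 0 && n <= 100 = Just (Percent n)
  | otherwise          = Nothing

getPercent :: Percent -> Int
getPercent (Percent n) = n
```

So `mkPercent 50` succeeds while `mkPercent 150` is `Nothing`, and point 1 above (implicit guarantees) and point 3 (free to change the internals) both fall out of the export list alone.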

JS modules are similar: leaving something out of the exports lets you hide it, while the things you did export can still use it for state or functionality. There is an old IIFE trick that works this way also.

Making everything public is a bad idea, as you miss out on modularity advantages. Not sure how Ruby hides things as I haven’t used it for a while but I’m sure there is a way.

And finally, reflection in OO to access privates is almost always a sin, unless you are making a debugging tool or similar. Canny? No way!

eyegor
To address your first point, plenty of functional langs have access control - as an example, F# has public/internal/private at every level of function/type/module.
Oct 17, 2019 · 296 points, 404 comments · submitted by gyre007
mbo
EDIT: I wrote this comment before watching the video. I stand by this comment, but the video is very good and I wholeheartedly agree with its conclusions.

As someone who writes pure FP for a living at a rather large and well known org, these threads physically hurt me. They're consistently full of bad takes from people who don't like FP, or haven't written a lick of it. Subsequently, you get judgements that are chock full of misconceptions of what FP actually is, and the pros and cons outsiders believe about FP are completely different from those its practitioners see. It's always some whinge about FP not mapping "to the metal", which is comical given, say, Rust's derivation from quite functional stock.

My personal belief? We just don't teach it. Unis these days start with Python, so a lot of students' first exposure to programming is a multi-paradigm language that can't really support the higher forms of FP techniques. Sure, there may be a course that covers Haskell or a Lisp, but the majority of the teaching is conducted in C, C++, Java or Python. Grads come out with a 4-year head start on a non-FP paradigm; why would orgs use languages and techniques that they're going to have to train new grads in from scratch?

And training people in FP is bloody time consuming. I've recorded up to 5 hours of lecture content for devs internally teaching functional Scala, which took quadruple the time to write and revise, plus the many hours in 1-on-1 contact teaching Scala and Haskell. Not a lot of people have dealt with these concepts before, and you really have to start from scratch.

jasode
>My personal belief? We just don't teach it.[...] Grads come out with a 4 year headstart on a non-FP paradigm,

I don't agree the lack of proactive education is the reason FP isn't the norm. Your conclusion doesn't take into account the counterfactuals:

- C Language took off in popularity despite BASIC/Pascal being the language more often taught in schools

- languages like PHP/Javascript/Python/Java all became popular even though prestigious schools like MIT were teaching Scheme/Lisp (before switching to Python in 2009).

You don't need school curricula to evangelize programming paradigms because history shows they weren't necessarily the trendsetters anyway.

On a related note, consider that programmers are using Git DVCS even though universities don't have formal classes on Git or distributed version control. How could Git have spread to near-universal adoption if universities aren't teaching it? Indeed, new college grads often lament that schools didn't teach them real-world coding practices such as git commands.

Why does Functional Programming in particular need to be taught in schools for it to become a norm but all the other various programming topics do not?

dragonwriter
> C Language took off in popularity despite BASIC/Pascal being the language more often taught in schools

While C is less constrained, it's structurally very similar to Pascal; they don't differ in paradigm and are in the same syntax family.

mbo
> Why does functional programming need to be taught in schools but all the other various programming topics did not?

Because I think it is harder for people who have programmed with other paradigms - following an inverse law, most things should get easier to learn with experience, not harder. It's foreign, it's weird, it's back to front. It doesn't have an immediately obvious benefit to what people are used to, and the benefits it has come at scale and up against the wall of complexity (in my opinion). It's hard to adopt upfront. At the small scale it's often painful to use. The syntax is weird. It's aggressively polymorphic, reasoning in abstractions rather than concretions. I could go on (and yet I still adore it).

The only reason FP has been successful as it is, is because its evangelists are incredibly vocal, to the point of being fucking annoying sometimes. It's had to be forced down people's throats at times, and frankly, there's no better place to force a paradigm down someone's throats than at a university, where non-compliance comes at academic penalty, and when the mind is most impressionable.

z2
Exactly. As just an anecdote, my intro to FP class in university was taught by a professor who tended to rant about different levels of purity and elegance between his favorite and least favorite languages. Of course, the favorite was his pet project and we had to spend most of the class using it. I also know that Emacs is partly written in Lisp because it was the only editor he would touch.

FP can't even sell itself well in school as a language where useful things can be done, when the student is stuck in a deep valley of ___morphisms and other alien concepts with claims of aesthetic elegance as the only motivation. I recall the math nerds loved it as relief over C that the rest of the department used, but with me being rather mediocre in math, the weirdness and cult-like vibe from the prof and TA left a really bad taste. The impression was so deep that I have no issues recalling this class a decade later. I've never touched any FP since, unless you count borrowing clever lambda snippets.

mbo
The sad part is that this is a common experience - universities have done a bad job at teaching FP. I think there are good pieces of FP education, particularly Learn You a Haskell and https://github.com/data61/fp-course - friends who have gone through these have asked "why wasn't I taught like this the first time around?"

> I've never touched any FP since, unless you count borrowing clever lambda snippets.

I'd urge you to give it another shot if you have spare time. Even in spite of all the dogshit things associated with it, it's a paradigm I've bet my career on.

z2
Noted--thank you!
Mirioron
>It's had to be forced down people's throats at times, and frankly, there's no better place to force a paradigm down someone's throats than at a university, where non-compliance comes at academic penalty, and when the mind is most impressionable.

That's also a great way to make people hate it. An example is literature classes with mandatory reading and how they make students hate reading fiction.

I would also say that this might turn off more students from programming. We had functional programming in uni, where we learned Haskell. Maybe a handful of students liked it or were neutral about it, the vast majority seemed to have a negative view of it.

I think that FP is just more difficult to learn. Just look at essentially any entry level programming course and how people understand loops vs recursion.
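To make the loops-vs-recursion point concrete, here is the same sum written as explicit recursion and as a fold in Haskell, with the imperative loop shown as a comment for comparison (a teaching sketch, not from any particular course):

```haskell
import Data.List (foldl')

-- The imperative version students usually see first:
--   total = 0
--   for x in xs: total += x

-- Explicit recursion: the loop's running state becomes return values.
sumRec :: [Int] -> Int
sumRec []     = 0
sumRec (x:xs) = x + sumRec xs

-- Idiomatic FP: the recursion pattern itself is abstracted into a fold.
sumFold :: [Int] -> Int
sumFold = foldl' (+) 0
```

The mutable-accumulator loop and the recursive definition compute the same thing, but the mental model shift (state as arguments and results rather than variables being updated) is exactly the hump entry-level courses struggle with.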

mbo
Okay, so FP is more difficult to learn. Assume for the sake of this argument that FP has a tangible benefit over other paradigms, one that manifests itself at scale. You're tasked with educating students in this paradigm, but they complain that it is more difficult than the techniques that they are used to.

What do you do?

ukj
I validate your assumption against reality.

If FP is not mandatory at Google-scale, it isn’t mandatory at your scale.

The kind of problems that emerge at scale are not the kind of problems FP tackles.

mbo
Sorry, I mean scale as in "large scale projects".

Spark is the quintessential Google-scale FP project - it was even born out of the MapReduce paper by Google!

And there's plenty of other large-scale projects that are arguably in an FP style specifically to deal with the problems associated with scaling them: the Agda/Isabelle proof checkers, the seL4 kernel, the Chisel/FIRRTL project, Erlang/OTP, the Facebook Anti-Spam system (built on Haxl), Jane Street's massive investment into OCaml, Twitter's investment into Scala.

Not all scale problems are distributed problems. Some distributed problems are tackled by FP, and some aren't. Ultimately, these large-scale projects pop up at rates similar to the usage of the languages themselves. It's intellectually dishonest to say that FP can't be used to tackle large-scale problems, and the problems that occur at scale, because it's been repeatedly validated that it can.

ukj
It is also intellectually dishonest to strawman an argument.

I didn’t say it is impossible to do X with FP - I said it is not necessary to do X in FP. You can convince yourself of that by looking for larger-scale non-FP counter-examples to the ones you've cherry-picked.

Every single large scale problem is a distributed problem simply because human societies are multi-agent distributed systems and programming languages are human-computer interfaces.

The issues at scale are systemic and arise from the design trade-offs made when your system's requirements bump against the limits of computation. No language/paradigm can work around those.

The best a language can do is abstract-away the complexities behind a problem - solve it once (in the language/paradigm's preferred way) and give the human an interface/concept to work with.

mbo
Okay, I got really confused by this whole mandatory thing. I never said FP should be mandatory at scale. I said it had a moderate benefit at scale. You respond with "well actually, it's not mandatory at Google-scale" so I assumed that you were trying to refute the fact that FP has benefit at scale.

You also followed this up with

> The kind of problems that emerge at scale are not the kind of problems FP tackles.

I cherry picked these examples to demonstrate that you're completely talking out of your ass here.

> I didn’t say it is impossible to do X with FP - I said it is not necessary to do X in FP. You can convince yourself of that by looking for larger-scale non-FP counter-examples to the ones you've cherry-picked.

I never said it wasn't possible to tackle these problems without FP.

You need to get rid or the assumption that "if X is better than Y at task Z, everyone will use X rather than Y for task Z". You've used that line of logic to attempt to invalidate FP's capabilities. It simply does not make sense.

ahartmetz
You give them difficult real-world problems where FP is helpful.

But university computer science seems to be specialized from mathematics instead of generalized from engineering, so CS professors most of the time have no idea about real world problems. At least here in Germany, where the problem seems especially bad.

Mirioron
I don't know, because I'm not qualified for it. I had to pass a course on FP, but frankly, I wouldn't be able to do anything with it in practice, let alone teach it. My only personal experiences with it were negative. If it had been Haskell that was the entry level programming course, then I probably would never have learned to program.
mbo
Okay, so given this answer here's what I would do:

1) I wouldn't make it the entry level course. It's clearly a paradigm that's used by a minority of people, so it doesn't make sense to start educating students with it.

2) I mandate that all students take it, maybe in their 3rd year. We're going to mandate it because there are tangible benefits (which we've assumed for the sake of this argument). They're going to find it harder and more confusing because it's different from what they're used to. A lot of them may not like it and won't see immediate benefits. Some may even come to dislike it. Frankly, I don't care; some will pick it up and learn about it further. And when the students that disliked it inevitably run into it in the future, they'll be sufficiently prepared to deal with it.

We're back to square 1: forcing it down students' throats. If you still think that we shouldn't be forcing students to learn FP in schools, I think you have a problem not with FP but with structured curriculums.

Mirioron
I was essentially in situation #2. I doubt there were more than a handful of students who would feel that they would be able to deal with FP based on what they learned at uni. I think it's more likely that when they run into this in the future they're immediately turned off by it.
jasode
>Because I think it is harder for people who have programmed with other paradigms

I still think there's something missing in your theory of cause-&-effect. A math topic like quaternions is hard and yet programmers in domains like 3d graphics and games have embraced it more than FP.

I also think Deep Learning / Machine Learning / Artificial Intelligence is even more difficult than Functional Programming and it seems like Deep Learning (e.g. Tensorflow, Pytorch, etc) will spread throughout the computer industry much more than FP. Just because the topic is hard can't be the defining reason.

>The only reason FP has been successful as it is, is because its evangelists are incredibly vocal,

But why is FP in particular only successful because of loud evangelists? Why can't FP's benefits be obvious so that it doesn't require evangelists? Hypothetical example:

- Company X's software using FP techniques has a 10x smaller code base, 10x fewer bugs, and 10x faster feature development than Company Y's. Ergo, this is why Company X is worth $10 billion while Company Y is only worth $1 billion or bankrupt.

If you think the above would be an unrealistic and therefore unfair comparison, keep in mind the above productivity improvement happened with the industry transition from assembly language to C Language. (A well-known example being 1980s WordPerfect being written in pure assembly language while MS Word was written in C Language. MS Word was iterating faster. WordPerfect eventually saw how assembly was holding them back and finally migrated to C but it was too late.) Yes, there's still some assembly language programming but it's niche and overshadowed in use by higher-level languages like C/C++.

If Functional Programming isn't demonstrating a similar real world massive productivity improvement to Imperative Programming, why is that? I don't think it's college classes. (Again, see all the non-PhD enthusiasts jumping on the free FastAI classes and brushing up on Linear Algebra to teach themselves deep learning.)

mbo
> Why can't FP's benefits be obvious so that it doesn't require evangelists?

Because there aren't immediate benefits. They only pop out at scale and with complexity, as I said.

> similar real world massive productivity improvement to Imperative Programming

Because there isn't. It's a reasonable benefit, but it's not transformative. I think it's there, enough to commit to FP completely, but the massive productivity improvement doesn't exist, or at least, only exists in specific cases, e.g. the WhatsApp + Erlang + 50 engineers parable (you could argue that this is due to the actor model and BEAM, rather than FP. An argument for a different day).

I feel like this hard-to-learn, reasonable-benefit combination isn't really an efficient use of people's time, especially when there are things like Deep Learning floating around. I think the immediate reaction to a lot of what FP evangelists claim is a shrug and a "I guess, but why bother?"

petra
>> Because there aren't immediate benefits. They only pop out at scale and with complexity, as I said.

What about low-barrier situations with scale and complexity?

An imaginary situation: let's say you start building your system from a large open-source project that needs a lot of customization.

Will FP be of big enough benefit then?

I'm curious about the answer, but for a sec, let's assume it does:

Then could it be a uni project? Dig into the belly of 2 beast projects, one FP, one OOP. And see the difference in what you could achieve.

Could something like that work?

ummonk
>Subsequently, you get judgements that are chock full of misconceptions of what FP actually is

I put the blame for that squarely on the Haskell cultists. They've caused everyone to have the impression that functional programming needs to have esoteric syntax and lazy evaluation by default.

It's like how the Java cultists have ruined OOP.

DubiousPusher
I think you nailed it. But FP proponents have done a fair bit of harm to the paradigm itself, in that some of the more outspoken proponents have been pretty alienating. They've pushed the language-superiority argument to death. They should probably be focused on pushing the style, in the hopes that the next round of popular languages implements FP style as first-class features. Which is of course what the author alludes to, and I think is actually happening. Which would jive with history.

A lot of what became OO language features arose because people were already using the style in non-OO languages. C is a great example: you can use a ton of C++-like features, but you end up writing a lot of boilerplate to hook up function pointers and the like.

Going back further, we see that features of C were being implemented by assembly programmers as macro assembly. So the pattern the author puts forward has basically held true through multiple shifts in programming paradigms.

Which leaves me with one point of contention with the presenter: OO dominance is not happenstance. And neither was the fact that lots of people were writing OO-style C. There is something about OOP that helped people think about their code more easily. That's why they adopted the features. Maybe not everything about it was great, and we're learning that. But it genuinely helped people, just as English-language-oriented languages helped people over ASM.

markandrewj
Personally, I get frustrated that there seems to be a belief that you can only use FP or OOP, when the reality is both models can be used in conjunction, and there may be reasons to choose one over the other dependent on what you are doing. Not to mention there are other models such as Protocol Oriented Programming. You see this in languages like Swift.
mbo
The issue is that you get benefits for sticking to one paradigm, because then everything is made of the same stuff.

If everything is an object, then you can use all your tooling that works with objects, which is everything. If everything is pure, you get easy parallelism. If everything is an actor, you get easy distributability. If everything is a monad or function, you get easy compositionality. The list goes on. Smalltalk, Erlang and Haskell are languages with very dedicated fan bases, which I theorise is because they went all in on their chosen paradigm.
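The compositionality-from-purity point can be sketched in a few lines of Python (a toy illustration; `normalize`, `exclaim`, and `compose` are made-up names, not from any library):

```python
from functools import reduce

# Two pure functions: no shared state, no side effects.
def normalize(s):
    return s.strip().lower()

def exclaim(s):
    return s + "!"

# Purity makes composition mechanical: pipe a value through
# each function in turn.
def compose(*fns):
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

shout = compose(normalize, exclaim)

# Mapping a pure function over a list is also trivially
# parallelizable: each element's result depends only on that
# element, so a runtime is free to split the work.
print([shout(s) for s in ["  Hello ", "WORLD  "]])  # ['hello!', 'world!']
```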

markandrewj
It's true there are benefits to working within one paradigm, but I find that often you can only do this for so long. This is why you have things like redux-thunk.
repolfx
You don't really get easy parallelism, do you? People used to theorise this, but auto-parallelisation never really worked because it's hard to understand - even for a machine - where the cost/benefit tradeoff of parallelism starts to happen. Applying a pure function to a massive list is simply not a common pattern outside of things like multimedia codecs or 3D graphics, where these industries settled on explicitly expressing the data parallelism using SIMD or shaders a long time ago. Functional languages have nothing to add there.
ggregoire
> Unis these days start with Python

This is insane to me. I'm dealing all day with developers who started with Python at the uni, don't know anything else, and actually don't want to learn anything else... and their level in programming and software engineering is terrible. It's near impossible to have a serious discussion about features available in every other language but Python. I mean, if you've never used a compiler in your life, how could you know the benefits a compiler brings?

I started at the uni with Java. Sure Java is verbose and OOP is complicated and not the right paradigm for every problem (why would you need a class to write a hello world), and I'd definitely not choose Java for starting a new project nowadays, but Java taught me a lot of fundamentals and I'd definitely go with Java if I had to teach programming.

stillwater56
I'm just starting to dabble in Scala, and as someone who has actually created lecture content, is there a resource you would particularly recommend?
mbo
New members of our team are greeted with a "Hi, my name is mbo. Here's your desk, here's your copy of The Red Book, good luck."

https://www.manning.com/books/functional-programming-in-scal...

gambler
What a coincidence. This sounds exactly like what happens with OOP. Every discussion here gets swarmed with clueless people who think Java is the apex of OO programming, because that's what gets taught in universities these days. They don't understand any of the core concepts that prompted Xerox PARC to develop the OO paradigm in the first place. They aren't aware of any of the relevant research. They operate under the assumption that OO was born out of complete ignorance of functional programming, even though the people who kick-started its rise were keenly aware of Lisp (for example, Alan Kay frequently references McCarthy's work and research papers in his talks). Etc, etc.
senderista
I think the Smalltalk paradigm is deeply defective and the Actor model (the purest form of OOP to my mind) remedies most of its faults but perpetuates some others. A few flaws:

- Modeling all communication as synchronous message-passing. Some communication (such as evaluating mathematical functions) is naturally modeled as synchronous procedure calls, while communication which is naturally modeled as message-passing should be asynchronous by default (to address unpredictable latency, partial failure, etc.).

- Emphasizing implementation inheritance as the primary means of code reuse. This is now generally acknowledged to be a mistake, so I won't elaborate.

- Deferring all method resolution to runtime. This makes the amazing introspective and dynamic capabilities of Smalltalk possible, but it also makes it impossible to statically verify programs for type-correctness.

- Relying on mutable local state rather than explicit, externalized state. This is controversial, and it's a defect of the Actor model as well (yes, passing new parameters into a tail-recursive message receive loop is equivalent to mutating local state). The partisans of OOP and the Actor model believe this to be a virtue, enabling robust emergent collective behavior from small autonomous software agents, but it makes predicting large-scale behavior difficult and debugging nearly impossible.
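The parenthetical equivalence in the last point can be made concrete in a short Python sketch (toy names; Python has no tail-call optimization, so this shows the shape of the idea, not production practice):

```python
# A toy "actor": its entire state is the argument of the next call
# of its receive loop. "Mutating" state means recursing with a new value.
def counter(state, messages):
    if not messages:
        return state
    msg, rest = messages[0], messages[1:]
    if msg == "inc":
        return counter(state + 1, rest)  # new parameter = new state
    return counter(state, rest)          # ignore unknown messages

# The equivalent object with mutable local state:
class Counter:
    def __init__(self):
        self.state = 0
    def receive(self, msg):
        if msg == "inc":
            self.state += 1

c = Counter()
for m in ["inc", "inc", "noop"]:
    c.receive(m)

# Both styles end up in the same place.
assert counter(0, ["inc", "inc", "noop"]) == c.state == 2
```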

gambler
All of this has been addressed zillion times. Modern Smalltalk dialects have actor libraries and have code reuse mechanisms that don't involve inheritance. There are ways of doing static analysis on late-bound code. (Obviously, guarantees are not going to be the same. I take that as a reasonable trade-off.) OOP isn't predicated on mutable state and there are ways for the system to manage it anyway. (Although, to be fair - that is one thing from the list that hasn't been fully addressed in any practical OOP system I'm aware of.)

https://tonyg.github.io/squeak-actors/

http://scholarworks.sjsu.edu/cgi/viewcontent.cgi?article=123...

http://scg.unibe.ch/archive/papers/Scha03aTraits.pdf

http://web.media.mit.edu/~lieber/Lieberary/OOP/Delegation/De...

http://bracha.org/pluggableTypesPosition.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134...

Even if none of the work I mentioned above existed, this sort of criticism is amateurish at best. Real engineering requires considering trade-offs in real-life contexts. For example, compile-time checks aren't going to help you figure out that some vendor supplies incorrect data via a web service. An early prototype, however, can do exactly that.

lazulicurio
There was an article on state in OOP posted here a few days ago that I found very thought-provoking[1]. The blog post and related youtube videos are pretty interesting as well[2][3][4].

[1] https://news.ycombinator.com/item?id=21238802

[2] https://medium.com/@brianwill/object-oriented-programming-a-...

[3] https://www.youtube.com/watch?v=QM1iUe6IofM

[4] https://www.youtube.com/watch?v=IRTfhkiAqPw

thomascgalvin
> Every discussion here gets swarmed with clueless people who think Java is the apex of OO programming, because that's what gets taught in universities these days.

Realistically, Java (or something very much like it) is the apex of OOP, at least as most people will experience it. The Ur-example of OOP might be a beautiful, internally consistent vision of mathematical purity, but most of us will never experience it.

Similarly, Agile-fall is the most common form of Agile that people will experience, which is why we always fall into "no true Scotsman" territory when ~~arguing about~~ discussing it.

There is, I think, a disconnect between people who are primarily concerned with the beauty of software - simple models, elegant algorithms, and so on - and the people who are primarily concerned with getting their feature branch merged to master so their manager will let them go to their kid's soccer game.

The beauty of software is important, and there's value in trying to bring the useful, but more esoteric concepts of CS into the mainstream, but at the same time we need to be aware of the ground truth of software development.

caseymarquis
This makes me appreciate working somewhere that feels kids' soccer games will always take precedence over merging features. I also use hybrid actors/oop extensively. I'd never really considered that these probably go hand in hand.
gambler
>Realistically, Java (or something very much like it) is the apex of OOP [...]

By this logic Java Streams are the apex of functional programming and anyone who uses them is fully qualified to dismiss the paradigm, even if they don't know anything about proper functional languages.

joe_the_user
The thing about these discussions is they seem to have two different questions mixed together. One question is "what's the best way to produce good software when everyone starts at the top of their game and does the right thing?" and the other is "what's a sort-of natural, slightly-better-than-average way that programming can be?"

The answer to the first can be "good functional programming" or "good OOP" or maybe something even more exotic. But it doesn't matter that much for the second question, because usually the question of "how can this code we already have be better?" means "how do you take a general mess and make it salvageable?"

I don't know what FP's answer to this is. I generally get the feeling FP people don't want to have an answer, because then there'd be a lot of better-but-still-bad installations out there. But there are plenty of answers to improvement using OOP - most say just encapsulate everything or similar things. All sorts of things can be thinly encapsulated: better, but still far from good. That, it seems to me, explains the prevalence of OOP.
paulddraper
What are the best OOP languages? Smalltalk?

For FP I would reply Haskell/PureScript, OCaml, and Scheme.

gambler
I don't know what's the best OOP language right now, but Pharo is pretty enjoyable to work with.
hootbootscoot
Good OOP is good FP - they swim in the same water. composition, generics/adt's, elegance, encapsulation...

In its ideal form it's about taming the metal like a trick pony.

I'm nowhere near that level, being a "Fortran in any language" sorta guy lol, but when I see well-built stuff, I take notes. Matryoshka dolls, lol

Kay et al were swimming in the same water as Hewitt etc, conceptually. He said that the key takeaway from OOP was objects passing messages (actor model), not so much the inheritance story (the composition)

but yes, they all criss-cross there

lewisjoe
Richard Gabriel’s famous essay “Worse is better” (https://www.jwz.org/doc/worse-is-better.html) is an interesting perspective on why Lisp lost to C. In a way, the same arguments (simplicity vs consistency vs correctness vs completeness) can be made for why functional programming lost to OOP.

But those philosophical perspectives aside, personally I find my brain works very much like a Turing Machine when dealing with complex problems. Apart from my code, even most of my todos are simple step-by-step instructions to achieve something. It's easily understandable why other non-math folks, like me, would prefer a Turing Machine over Lambda Calculus' way of writing instructions.

This could be why OOP/Imperative was often preferred over FP.

hyperpallium
Even in maths, I find a solution in terms of the problem easier to understand than one in terms of the previous step.

Even when the recursive form is a more natural representation, like arithmetic sequences: start at s, increase by d with each step:

  a(0) = s, a(n) = a(n-1)+d

  a(n) = s + n*d
The analytical form seems simpler, neater, more "right" and more efficient to me - even though, if you want the whole sequence, the recursive form is more efficient (given tail-call optimisation).

I suspect I'm just not smart enough.

fp can be much shorter, and the execution model isn't actually hidden, just unfamiliar (and unintuitive and unnatural - for me). Consider: all suffixes of a list. In jq:

  while( length>0; .[1:] )
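For what it's worth, the two arithmetic-sequence definitions above, plus a Python analogue of the jq one-liner, can be checked against each other in a short sketch (function names are made up for illustration):

```python
def a_rec(n, s, d):
    # recurrence form: a(0) = s, a(n) = a(n-1) + d
    return s if n == 0 else a_rec(n - 1, s, d) + d

def a_closed(n, s, d):
    # analytical form: a(n) = s + n*d
    return s + n * d

# the two forms agree on every term
assert all(a_rec(n, 3, 5) == a_closed(n, 3, 5) for n in range(20))

# the jq while(length>0; .[1:]) idiom: all non-empty suffixes of a list
def suffixes(xs):
    return [xs[i:] for i in range(len(xs))]

print(suffixes([1, 2, 3]))  # [[1, 2, 3], [2, 3], [3]]
```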
jt2190
Lately I’ve been thinking that a lot of code style debates center around an explicit versus implicit axis. Imperative is more explicit, and, in one sense, easier to see what’s going on since it lays everything out step by step. On the other hand, those same steps are a mix of essential steps (that deal with the problem being solved) and accidental steps (that deal with computer and code in order to get the job done.)

It seems to me that OOP, Functional, and Relational programming models try to abstract away the accidental steps, but like all abstractions there are limitations.

I suspect that once familiar with one of these models, imperative seems awfully tedious, however now the code is more obscure to those not well versed in the paradigm, thus we have a trade off between ease of use for many and optimal for some.

hyperpallium
Absolutely, explicit vs implicit is part of imperative vs. functional. And doing more with less information is elegant - it has a deeper significance in terms of Occam's Razor: simplicity tends to be closer to the truth, and therefore generalizes better. And, like pg's take, shorter code means less code to write, to read, to modify.

There can be leakage, when the given model is not perfectly accurate, and you need the true implementation details (this also happens for imperative code - it can be very helpful to have source of libraries) - in debugging, in performance, in working out how to do things.

But I feel a general issue is that it might not be a good fit for the human code processing system... Our interactions in the real world are more like imperative programming - not just familiarity, but how we evolved. This issue is similar to how quantum physics and relativity aren't a good match to the human physics system, which seems to be the mechanical/contact theory. To convert things to recursion is like working out an inductive proof - you can do it, but it is harder and more work than just getting it done in the first place.

A specific issue about this is that functional recursion is based on the previous step, whereas imperative code is usually based on the starting step. Like, build a new list at each recursion vs. indices into the input list. The latter is easier because it's always the same thing being indexed, instead of changing with each recursion.

hyperpallium
Implicit also means a tradeoff in estimating performance, an instance of an abstraction leaking.

I've been trying to think of a totally clean functional abstraction, i.e. that's functional under the hood, but there's no way to tell. Perhaps in a compiler?

ummonk
This doesn't look to me like the difference between functional and imperative so much as the difference between recursion / iteration and map / list comprehension.
hyperpallium
You may need to exercise some charity here.

I've been trying to see why fp isn't intuitive for me.

I suspect it's like a second (human) language acquired as an adult: only those with a talent for language (maybe 5%?) can become fluent with practice.

Regarding my first example, I see recursion (or induction) as the essence of fp; and the recurrence form of arithmetic sequences is the simplest recursion I've seen used in mathematics.

The explicit form in that example is harder to justify as "imperative". But a commonality of imperative style is referring to the original input, rather than a previous step (see the first line of my above comment). This isn't the literal meaning of "imperative", but may be a key distinction between fp and ip style - the one that causes the intuitive/fluency issue for me.

To illustrate using my third (jq) example of suffixes, here's an "imperative" version, in Python-like pseudocode:

  for i = 1 to length
    # a suffix
    for j = i to length
      print a[j]
    print \n
This is so much longer than the jq (though it would be shorter if it used a[i:]), but it is how I understand the problem, at first and most easily.

It always refers to the starting input of the problem, not the previous step, and this might be why it's easier for me.

I'm interested in your comment - could you elaborate please? There's a few ways to relate your comment to different parts of mine, and I'm not sure which one was intended.

ummonk
Well I agree with you that that kind of recurrence (which mathematicians love to use so much, as do some functional programmers who're overly influenced by math) is not very intuitive and frankly is a programming anti-pattern in my view.

But I disagree with you that recursion is the essence of fp. For your concrete example, a more functional version of doing that (in Python) would be something like:

  print("\n".join(a[i:] for i in range(len(a))))
No need to reuse f(i-1) when you can express f(i) directly.

Reusing the previous step (whether through recursion, carrying intermediate computations in local variables in a loop, or through a fold) should only be done when absolutely necessary.

hyperpallium
> [recurrence] is not very intuitive and frankly is a programming anti-pattern in my view. [...] Reusing the previous step ... should only be done when absolutely necessary

Thanks, that's my main concern (fp was just an example). Would you agree the reason it is bad is because there is more to track mentally in the execution model? (i.e. the intermediate results).

I think a complex execution model is problematic in general (it sounds obvious when I say it that way).

> which mathematicians love to use so much,

hmm... I was thinking "induction", and believed that fp is the same.

> But I disagree with you that recursion is the essence of fp

This is a tangent now, but that statement surprises me. Can you elaborate? What is the essence of fp (if it has one)?

Is your py version "more functional"? I'm so wedded to the idea that fp=recursion that that's the reason it doesn't seem functional to me. What makes it functional? Just that it's a nested expression (i.e. fn calls)?

htfu
Well I guess you could say the essence of FP is working recursively without (mostly) thinking of it, and not having to deal with the sort of control flow necessary for either loops or the kind of self-administered recursion you seem to think of.

The .join() taking the iterator in their example is, if you look closer, very much a fold/reduce repeatedly invoking a join of the thus far assembled string, the next part, and \n. Recursion!

Also, rather than mutable i/j variables being incremented (albeit implicitly so in your example), it generates a list of all the numbers to run on.
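The claim that `.join()` is at heart a fold can be spelled out explicitly (a sketch using `functools.reduce`; this is the shape of the idea, not how CPython actually implements `str.join`):

```python
from functools import reduce

def join_fold(sep, parts):
    # join expressed as an explicit fold: repeatedly combine the
    # string assembled so far with the separator and the next part.
    if not parts:
        return ""
    return reduce(lambda acc, nxt: acc + sep + nxt, parts)

# behaves the same as the built-in
assert join_fold("\n", ["abc", "bc", "c"]) == "\n".join(["abc", "bc", "c"])
print(join_fold("-", ["a", "b", "c"]))  # a-b-c
```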

xamuel
>I suspect I'm just not smart enough

Nah, I have a PhD in math and I agree with you completely. Imperative is way better. And most mathematicians agree with me. You can see this by cracking open any actual math or logic journal and looking how they write pseudocode (yes, pseudocode: where things like performance don't matter one tiny little bit). You'll see they're almost entirely imperative. Sometimes they even use GOTO!

jjav
Agreed. I arrived at programming through math (B.S. in Mathematics) and have no love for FP. At the end of the day all software (except hobby projects) is mostly about maintaining it. FP adds unnecessary complexity, abstraction and obfuscations. None of those qualities help code maintenance.
Hercuros
Is this view of FP based on actual experience maintaining a non-trivial program written in an FP language? In my experience, FP doesn’t necessarily add a lot of unnecessary complexity. Sure, languages like Haskell are perhaps initially a bit more abstract when learning them, but once you know the basics, you can write pretty straightforward code in it. You can also do crazier things, but there is no need for that in most software.

Keeping a functional style, regardless of the language (although FP languages lend themselves better to this) can help in keeping code more decoupled, since you have to be explicit about side effects.

I think that both FP and imperative languages have places where they shine, and I freely switch between them depending on the project. Given how much some imperative languages have recently borrowed from FP languages, I think that this shows that functional programming has some significant merits.
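The "explicit about side effects" point above can be illustrated with a tiny Python sketch (hypothetical function names, chosen for illustration):

```python
# An effect buried inside a function vs. a pure function that
# returns a value and defers the effect to the caller.

def impure_greet(name):
    print("Hello, " + name)   # side effect happens here, every call

def pure_greet(name):
    return "Hello, " + name   # no effect; the caller decides what to do

# The pure version is trivially testable and composable:
message = pure_greet("world")
print(message.upper())        # HELLO, WORLD
```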

xamuel
>You can also do crazier things, but there is no need for that in most software.

For dev teams of sufficiently large size, a general principle is: whatever crazy things the language allows, someone is going to do and commit into the codebase.

hyperpallium
In an old textbook I haven't been able to find again (browsing in another uni's library), regarding the Entscheidungsproblem, I read that Church wrote to Turing saying he thought the Turing Machine was a more convincing/natural/intuitive representation of how mathematicians thought about algorithms than his own lambda calculus.

Maybe he was just being modest, or like John McCarthy, just didn't see or believe its potential.

Note that this was before computers or programming, and that there's no formal proof that a Turing machine can encode any computation - so its convincingness was important.

TulliusCicero
This is correct. Everyone I've met that insisted that functional programming is superior to imperative has been a big time math/CS nerd, the kind that goes to grad school and was confused when the iPad launched because hey it does nothing that a laptop doesn't already do!

My experience doing functional programming is that it hurt my brain; it just doesn't map as cleanly to how I think of things happening compared to imperative programming. It was just really painful to code in, and most of my classmates had the same opinion.

Hercuros
It’s mostly a matter of practice. I think that many people’s experience of functional programming is a (potentially poorly-taught) university course, during which there is not really enough time to really become comfortable with functional programming. Maybe it’s true that the learning curve is a bit (a lot?) steeper, though. But once you are comfortable with it, it’s not significantly more difficult than writing code in Java or Python. I also think that it’s worth learning even just for the sake of becoming a better programmer. It teaches you to think in a different way, which also translates to programming in imperative languages.
slig
Beware: JWZ doesn't like people visiting his website from HN.
dspillett
Ah yes, I didn't remember at first why that domain was added to my hosts blacklist.
geitir
The fact to he took the time to do that shows who the real man-child is
Ygg2
That, or he hates the HN hug of death.
dspillett
Nah, just having a problem with the hug of death would be an explanation for redirecting to a polite static message saying "sorry, my site can't handle the load when HN links to it". What he has done instead is excessively dickish.
wil421
What has he done? Everyone’s commenting he doesn’t like HN but when I clicked the link everything looks fine. Serious question.
robjan
You must be using Brave or a browser plugin which doesn't send referral headers. If you use a normal browser, it displays a testicle in an egg cup with a silly phrase complaining about the demographic of HN users.
wil421
I’m using iOS safari with AdGuard. It’s probably AdGuard.
dorfsmay
I open everything for which I don't need to be logged in, in an incognito window, and this page worked fine.
dspillett
An incognito window doesn't quite count as "if you use a normal browser". Unless your not using incognito is the unusual case for you, which it isn't for most users.

Given a choice between changing my browsing behaviour to see his content or just blocking it so it (the testicle redirect or the other content) will never bother my vision again, I go for the latter option.

jasode
I believe some who click on the url are getting redirected to a png image file:

https://web.archive.org/web/20191014203443/https://www.jwz.o...

For those who don't see the image, the bold text in the png says:

"A DDOs MADE OF FINANCE-OBSESSED MAN-CHILDREN AND BROGRAMMERS"

rgoulter
It redirects to an image with a hairy testicle and gives a low opinion of HN readers: https://web.archive.org/web/20191014203443/https://www.jwz.o...
gitgud
The fact that it's the only site I've seen that demonstrates the ability to read HTTP referral headers from hacker news shows who the real hacker is...
krapp
Please. The "real hackers" are proxying their requests and sending custom headers to begin with.
dtech
Not sure if serious, but looking at referral headers is commonplace and trivial
zygimantasdev
Personally my thinking changes from Turing Machine to more math like with each year I do functional programming
hinkley
Lisp lost in a much more profound way recently, and it's very rare to see anyone mention it, especially on the Lisp side of the conversation.

Over the last 10 years or so, we have come to the painful conclusion that mixing data and code is a very, very bad idea. It can't be done safely. We are putting functionality into processors and operating systems to expressly disallow this behavior.

If that conclusion is true, then Lisp is broken by design. There is no fixing it. It likes conflating the two, which means it can't be trusted.

u801e
> Apart from my code, even most of my todos are simple step-by-step instructions to achieve something.

> [...]

> This could be why OOP/Imperative was often preferred over FP.

Though this doesn't really explain why OOP is preferred over imperative (since the former doesn't really correspond to a set of step-by-step instructions).

pryffwyd
The latest no-OOP imperative language with any kind of market share is C. So everything that's terrible about C - unsafe manual memory, portability issues, tooling, no generics or overloading, horrible ergonomics, second-class functions, globals everywhere, etc. - is forever associated with imperative programming. OOP languages arrived at the same time those problems were being fixed, so they were a big improvement in ways that had nothing to do with OOP. Now that all the top languages are multi-paradigm, only a purist would avoid OOP, and they'd have a tough time interacting with libraries and frameworks. So every codebase is a little wishy-washy and OOP wins by default. Imperative has no advocates left in industry or academia, so most people don't even think of it as a defensible design decision.
fetbaffe
One language that was not on the presenter's list is SQL: very popular, but neither OO nor functional.

One thing a lot of programmers do is abstract SQL into an OO style, even though a SQL query describes a relation that can be computed to a result, in some ways similar to a function. It seems that most prefer to look at it as having state, even though it doesn't.

Sure, the tables where the data is stored have state, but the sum of the tables is a relationship in time, and depending on how you look at it you get different results. It is very hard to map relationships to OO correctly.

It is probably easier for most people to think about the world as a set of things rather than as relations in time. Many of our natural languages are organized around things.

watwut
The link you shared now leads to this when clicked from HN: https://web.archive.org/web/20191014203443/https://www.jwz.o...

When copied and pasted into a new tab it leads to the article.

meijer
The link is NSFW.
gowld
OOP is nothing like a Turing Machine.
strangenessak
> personally I find my brain works very much like a Turing Machine

Exactly this. How baking a cake in FP looks like:

* A cake is a hot cake that has been cooled on a damp tea towel, where a hot cake is a prepared cake that has been baked in a preheated oven for 30 minutes.

* A preheated oven is an oven that has been heated to 175 degrees C.

* A prepared cake is batter that has been poured into prepared pans, where batter is mixture that has chopped walnuts stirred in. Where mixture is butter, white sugar and brown sugar that has been creamed in a large bowl until light and fluffy

Taken from here: https://probablydance.com/2016/02/27/functional-programming-...

6gvONxR4sf7o
Okay, so first of all this is an excellent joke. But it's not that great of an analogy.

This quote chooses one of many FP syntaxes. It's cherry picking. It uses "a = b where c = d." That's equivalent to "let c = d in a = b." Let will allow you to write things like:

    let
        cake_ingredients = [butter, white sugar, brown sugar]
        batter = cream(ingredients=cake_ingredients,
                       dish=large_bowl,
                       condition=LIGHT_AND_FLUFFY)
        prepped_pans = pans_full_of(batter)
        oven = preheat(your_oven, 175 C)
        cake = bake(prepped_pans, 30 minutes)
    in
        dessert_tonight = cooled(cake)
This isn't where FP and imperative are different.

What's really different is that the let statement doesn't define execution order. That's not so relevant to this part of the mental modeling though.

I think it's great that I can choose between "let ... in ..." or "... where ...". In real life, for a complex bit of language, I happen to often like putting the main point at the top (like a thesis statement), then progressively giving more details. Mix and match however's clear.
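As an illustration of the "let ... in ..." vs "... where ..." point above, here is a minimal Haskell sketch (the names are invented for illustration) showing the same definition in both forms:

```haskell
-- Main point first, details after: the `where` form.
area1 :: Double -> Double
area1 r = piTimes rSquared
  where
    rSquared = r * r
    piTimes x = pi * x

-- Details first, main point last: the equivalent `let ... in` form.
area2 :: Double -> Double
area2 r =
  let rSquared = r * r
      piTimes x = pi * x
  in  piTimes rSquared
```

Both definitions denote the same function; which one reads better is purely a matter of where you want the "thesis statement" to sit.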

falcolas
Perhaps it's the analogy leaking, but in baking, order of operations matters, and some operations must be done in parallel (pre-heating, based on initial oven state) to produce a good end product.
nybble41
Yes, and this is one of the areas where functional programming really shines. An imperative program is defined as a series of ordered steps and the compiler can't (in general) reorder steps to optimize use of resources because the steps could have arbitrary side-effects.[1] The FP version is essentially a dependency graph which constrains the order of operations without mandating a specific final order. The pre-heated oven is needed for baking but not for the batter, so these parts can automatically be evaluated in parallel just by enabling the multithreaded runtime.[2]

[1] Certain primitive operations can be reordered but that depends on the compiler having access to the entire program. A call to a shared library function is an effective optimization barrier for any access to non-local data due to potential side effects.

[2] For the purpose of this example I'm assuming the unused `oven` variable was meant to be passed in to the `bake` function.

nice_byte
> the compiler can't (in general) reorder steps to optimize use of resources

i'm not sure what you mean by that because compilers reorder instructions to improve performance all the time (and CPUs do it dynamically too).

nybble41
I mean that an imperative program spells out a particular order of operations and the compiler is forced to reverse-engineer the dependencies based on its (usually incomplete) knowledge of each step's side effects. When the potential side effects are unknown, such as for calls to shared library functions, system calls, or access to shared global data, or any call to a function outside the current compilation unit in the absence of link-time optimization, then it must preserve the original order even if that order is less than optimal.

The kind of reordering you see in imperative programs tends to be on the small scale, affecting only nearby primitive operations within a single thread. You don't generally see imperative compilers automatically farming out large sections of the program onto separate threads to be evaluated in parallel. That is something that only really becomes practical when you can be sure that the evaluation of one part won't affect any other part, i.e. in a language with referential transparency.

repolfx
Compilers and CPUs only reorder over tiny instruction windows. He's talking about re-orderings over enormous windows, in a way that requires whole program analysis.

But that doesn't really happen in reality. FP languages promised auto-parallelisation for decades and never delivered. Plus you can get it in imperative languages too - like with Java's parallel streams. But I never see a parallel stream in real use.

nybble41
It's not completely automatic but it is fairly close. If you enable the threaded runtime then "sparks" will be evaluated in parallel. You do have to say which expressions you want evaluated as separate "sparks" with the `par` operator, but that's it: the runtime manages the threads, and common sub-expressions shared by multiple sparks will typically be evaluated only once. There are no race conditions or other typical concurrency issues to worry about since the language guarantees the absence of side effects. (That is the biggest difference between this and Java's parallel streams: there, if the reduction operation isn't a pure function then the result is undefined, and there isn't anything at the language level in Java to enforce this requirement.)

EDIT: An example of effective parallelism in Haskell:

    import Control.Parallel (par)

    fib n
       | n < 2   = 1
       | n >= 15 = b `par` a `seq` a + b
       | True    = a + b
       where a = fib (n-2); b = fib (n-1)

    main = print $ map fib [0..39]
Note that the implementation of `fib` has been deliberately pessimized to simulate an expensive computation. The only difference from the non-parallel version is the use of `par` and `seq` to hint that the two operands should be evaluated in parallel when n >= 15. These hints cannot change the result, only the evaluation strategy. Compile and link with "-threaded -with-rtsopts=-N" and this will automatically take advantage of multiple cores. (1C=9.9s elapsed; 2C=5.4s; 3C=4s; 4C=3.5s)
repolfx
Yeah, I know how it works, and the level of automation is the same in all modern languages - as you note, Java's equivalent of "par" is writing ".parallelStream()" instead of ".stream()" so no real difference, beyond the language enforced immutability.

But it doesn't actually matter. How often is parallelStream used in reality? Basically never. I would find the arguments of FP developers convincing if I was constantly encountering stories of people who really wanted to use parallelStream but kept encountering bugs where they made a thinko and accidentally mutated shared state until they gave up in frustration and just went back to the old ways. I'd find it convincing if I'd had that experience also. In practice, avoiding shared state over the kind of problems automated parallelism is used for is very easy and comes naturally. I've used parallel streams only rarely, and actually never in a real shipping program I think, but when I did I was fully aware of what mutable state might be shared and it wasn't an issue.

The real problem with this kind of parallelism is that it's too coarse grained and even writing par or parallelStream is too much mental overhead, because you often can't easily predict when it'll be a win vs a loss. For instance you might write a program expecting the list of inputs to usually be around 100 items: probably not worth parallelising, so you ignore it or try it and discover the program got slower. Then one day a user runs it on 100 million items. The parallelism could have helped there, but there's no mechanism to decide whether to use it or not automatically, so in practice it wasn't used.

Automatic vectorisation attacked this problem from a different angle and did make some good progress over time. But that just creates a different problem - you really need the performance but apparently small changes can perturb the optimisations for unclear reasons, so there's an invisible performance cliff. The Java guys pushed auto-vectorisation for years but have now given up on it (sorta) and are exposing explicit SIMD APIs.

dragonwriter
> How baking a cake in FP looks like:

> * A cake is a hot cake that [...]

The difference between a functional programmer and an imperative programmer is an imperative programmer looks at that and says “yeah, great takedown of FP”, while a functional programmer says, “what’s with the unbounded recursion?”

But, more seriously, it's long been established that real programming benefits from the use of both imperative and declarative (the latter including, but not limited to, functional) idioms, which is why mainstream imperative OO languages have for more than a decade been importing functional features at a mad clip, and why functional languages have historically either been impure (e.g., Lisp and ML and many of their descendants) or included embedded syntactic sugar for expressing imperative sequences using more conventionally imperative idioms (e.g., Haskell do-notation).

The difference is that embedding functional idioms in imperative languages often requires warnings about what you can and cannot do safely to data without causing chaos, while imperative embeddings in functional code have no such problems.

antisemiotic
And then you actually try to write it in a functional language, and end up with something like:

cake = map (cool . bake 30 175) . splitIntoPans $ mix [ butter, sugar, walnuts ]

p33p
I think partial application and pipe operators make this so very intuitive though:

[butter, sugar, walnuts] |> mix() |> splitIntoPans(pans = 3) |> bake(time = 30, temp = 175) |> cool(time = 5)

strangenessak
We can improve the syntax further

    [butter, sugar, walnuts]
    mix()
    splitIntoPans(pans = 3)
    bake(time = 30, temp = 175)
    cool(time = 5)
Hmm, wait a second.....
antisemiotic
Careful, somewhere along that line you might even come to a conclusion that Haskell is world's most advanced imperative language, with the reprogrammable semicolons and whatnot.
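"Reprogrammable semicolons" refers to the way do-notation desugars into `>>=`, whose meaning each monad supplies for itself. A minimal sketch in the Maybe monad (the values are arbitrary):

```haskell
-- Sugared: reads like an imperative sequence of steps.
sugared :: Maybe Int
sugared = do
  x <- Just 2
  y <- Just 3
  return (x + y)

-- Desugared: each "semicolon" is really an application of (>>=),
-- so the Maybe monad decides what sequencing means (abort on Nothing).
desugared :: Maybe Int
desugared = Just 2 >>= \x -> Just 3 >>= \y -> return (x + y)
```

Both evaluate to `Just 5`; swapping in a different monad changes what "run the next statement" means without changing the surface syntax.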
twic
Or, to coin a phrase: https://apps.dtic.mil/dtic/tr/fulltext/u2/a030751.pdf
meijer
But this doesn't handle the state. It is not working imperative code.
pkilgore

    [butter, sugar, walnuts]
    ^^^
     Somewhere wanted type CakeIngredients but missing record field "Flour"

If imperative-style programming came with type inference on the level of the OCaml compiler, sign me up. For now, though, I can spare a few cycles in exchange for correct programs.
thih9
If you want to bake a cake, FP like this could seem awkward.

But what if you want to run a bakery and split the work across multiple cooks? In that case it helps to have clearly defined ingredients.

I'm only trying to say that it all depends on the context. Obviously personal preference is a big factor too.

chii
but now that you've written the cake-baking data type, with a small tweak you've got a bread-baking data type.
jacobush
Haha, that sounds like the C++ inheritance joke.
marvin
True, but what if you never wanted bread?
Torwald
I'd rather have a baking class that takes an argument for what I want to bake, either bread or cake, and spares me the details of how baking is done. I don't have to know that a preheated oven is one that is at 175 degrees, etc.
TomMarius
But then your cake might easily burn.
EpicEng
And when your oven has a problem with its heating element you'll have no idea why your cake didn't turn out well. We're supposed to be engineers, right? Learning how things work is good.
Torwald
My comment was supposed to be a joke about the vernacular in which OO tends to get presented.
missosoup
I'll find it more intuitive to do both as an imperative series of steps.

Some of my friends are in love with FP. I am not. I've done more FP than most, I can work with it, but my brain has never become in tune with it. I can bang out my intent as imperative code in real time, but with FP I have to stop and think to translate.

FP also means that I can't always easily tell the runtime complexity of what I'm writing and there's a complex black box between my code and the metal.

Maybe some of my friends' brains are superior and can think in FP; all the more power to them. But the empirical evidence is that most people are not capable of that, so FP will probably forever remain in the shadow of imperative programming.

pas
Do you think of types and transformations between types when you write imperative code?

I mean usually the problem in FP is that you simply can't type mutation (you'd have to use dependent types and so on), okay, so use immutability, great, but then every "step" is just some franken-type-partial-whatever. And TypeScript has great support for these (first of all it infers a lot, but you can use nice type combinators to safeguard that you get what you wanted).

I don't like pure FP exactly because of this, because many times you have to use some very complicated constellation of concepts to be able to represent a specific data flow / computation / transformation / data structure. Whereas in TS / Scala you just have a nice escape hatch.

jmilloy
Baking a cake is like being a compiler and a processor for recipe instructions. Of course it seems awkward from the perspective of a human baker because before you can process/bake you have to "compile" the expression to procedural steps. The computer does that without complaint.

This may illustrate that humans aren't good compilers of functional code, or in particular that humans aren't good at parsing poorly formatted functional code (again, computer parsers don't care about formatting). But I don't think it indicates that functional code isn't good for reading and writing, even for the same humans.

I also don't think this recipe resembles FP. Where are the functions and their arguments? There is no visible hierarchy. It is unnecessarily obtuse in the first place.

amrrs
Same example of baking a cake to explain functional programming in R by Hadley Wickham. A good presentation.

https://speakerdeck.com/hadley/the-joy-of-functional-program...

u801e
You should read the OOP version of "for want of a nail" proverb near the end of this post (http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...).
soulofmischief
> In any case the point is this: I had some straight imperative code that was doing the same thing several times. In order to make it generic I couldn’t just introduce a loop around the repeated code, but I had to completely change the control flow. There is too much puzzle solving here. In fact I didn’t solve this the first time I tried. In my first attempt I ended up with something far too complicated and then just left the code in the original form. Only after coming back to the problem a few days later did I come up with the simple solution above.

There are two kinds of people, I guess. To me, this description simply encapsulates the process of being a programmer. Boo hoo, you had to think a little bit and come back later to a hard problem in order to figure it out.

I'm sorry, but that's literally how every profession which requires engineering skills plays out. And like other professions, after you solve a problem once you don't have to solve the problem again. It's solved. The next template Gabriel writes in that flavor will not take nearly as long.

Seriously, all of these points he raises against FP are entirely contrived, and come across as the meaningless complaining of an uninspired programmer.

AnimalMuppet
"It doesn't fit the way I think" != "I'm too stupid or lazy to figure it out".

And why should s/he do so? Between the language and the programmer, which one is the tool? Should not the tool fit the human, and not the other way around?

FP fits the way some people think. It doesn't fit the way others think. And that's fine. It's not a defect that some people think that way, and it's not a defect that some people don't.

temp1999
To the second question, when you work in the industry you realize the answer is often the programmer.

Edit: There were a lot of questions in that comment.

AnimalMuppet
I agree that it often works out that way... but it shouldn't.
soulofmischief
I think the whole conversation is silly; FP is another tool in my toolbox. Yes, with some effort I can accomplish most jobs with a crowbar, but why would I do that?
AnIdiotOnTheNet
Am I the only one who's a little disturbed that this cake recipe contains no eggs, flour, or baking powder [0]?

[0] nor any equivalents to provide structure or leavening.

denton-scratch
Never seen that before, thanks! It's very funny.

I can't write Lisp to save my life, but I know roughly how you're supposed to do it.

stereolambda
It's a good analogy! But it also shows that in FP you have to specify what is needed for what and why each step happens. If you wrote that imperatively, you could include steps like "go outside, count clouds, return" or "place a metal bowl next to everything, put some cereals inside". And then never return to that bowl again, just leave it like that. The programmer wanted to use this bowl for something, but then forgot it was there.

And then, when someone returns to that code, they have no idea that these steps are unnecessary and why each step was taken. (Or maybe they are necessary, because cloud counting ensures there is time for ingredients to permeate?). So probably these steps will be left and more mess and jungle will accumulate.

mbrock
I actually don't know of any functional programming languages that don't have syntactic and semantic support for writing step-by-step algorithms.
stingraycharles
The same can be said about imperative languages supporting FP concepts: they have it, but it's just not the same.
pas
Could you elaborate on this a bit? Basically, calling one function from another is how a step-by-step algorithm would work in FP, no? And pattern-match on what comes in, and return an immutable copy.

For example you can put functions in a list, and push a data structure through them, like a pipeline.

edit: https://probablydance.com/2016/02/27/functional-programming-...
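The "functions in a list" idea above can be sketched in a few lines of Haskell (the helper names are invented for illustration):

```haskell
-- Fold a list of functions into a single left-to-right pipeline.
pipeline :: [a -> a] -> a -> a
pipeline fs x = foldl (flip ($)) x fs

-- Two toy stages to push a value through.
double, addOne :: Int -> Int
double = (* 2)
addOne = (+ 1)
```

`pipeline [double, addOne] 5` applies `double` first and then `addOne`, yielding 11.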

gowld
You miscounted the number of negatives in the comment you replied to.
pyrale
The control structure that takes the different functions/values and glues them together is what makes your code imperative or descriptive. While there is a lot of overlap between descriptive style and FP, it is not always the case.

In haskell, for instance, the do notation lets you write imperative code:

    f article = do
       x <- getPrices article
       y <- book article
       finishDeal article x y
...and then the compiler desugars it to a more descriptive form.
gmfawcett
In fairness, we could be in the List monad here, and this would effectively be a list comprehension rather than an imperative program. Even if we are in IO, `getPrices` and `book` may never execute --- even `finishDeal` may never execute! --- depending on non-local details that aren't shown here.

The code certainly "looks imperative" but it's still a declarative program --- the semantics are rather different from what a typical "imperative programmer" would expect.
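To make that point concrete: the same do-notation, run in the list monad, is a list comprehension rather than an imperative sequence. A small sketch:

```haskell
-- do-notation in the list monad enumerates combinations;
-- nothing here "executes" in the imperative sense.
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)
-- Equivalent comprehension: [(x, y) | x <- [1, 2], y <- [10, 20]]
```

`pairs` evaluates to `[(1,10),(1,20),(2,10),(2,20)]`, which is not what a typical "imperative reader" of the do-block would expect.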

cryptica
OOP was designed to prioritize encapsulation at the expense of referential transparency. Functional programming was designed to prioritize referential transparency at the expense of encapsulation.

You cannot have referential transparency and encapsulation at the same time.

In order to prevent mutations (which is a requirement of FP), a module cannot hold any state internally; this necessarily means that the state must be passed to each module action from the outside. If state has to be passed to each module action from the outside, then this necessarily means that the outside logic needs to be aware of which state is associated with which action of which child module. If higher level modules need to be aware of all the relationships between the logic and state of all lower level (child) modules, that is called 'leaky abstraction' and is a clear violation of encapsulation.

Encapsulation (AKA 'blackboxing') is a very important concept in software development. Large complex programs need to have replaceable parts and this requires encapsulation. The goal is to minimize the complexity of the contact areas between different components; the simpler the contact areas, the more interchangeable the components will be. It's like Lego blocks; all the different shapes connect to each other using the same simple interface; this gives you maximum composability.

Real world software applications need to manage and process complex state and the best way to achieve this is by dividing the state into simple fragments and allowing each fragment to be collocated with the logic that is responsible for mutating it.

If you design your programs such that your modules have clear separation of concerns, then figuring out which module is responsible for which state should be a trivial matter.
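The "state passed in from the outside" pattern described above looks roughly like this in a pure language (a toy account example; the names are invented for illustration):

```haskell
-- Each action takes the current state as an argument and returns
-- the new state; no module holds state internally.
type Balance = Int

deposit :: Int -> Balance -> Balance
deposit amt bal = bal + amt

withdraw :: Int -> Balance -> (Bool, Balance)
withdraw amt bal
  | amt <= bal = (True, bal - amt)
  | otherwise  = (False, bal)
```

Whether threading `Balance` through every call counts as a leaky abstraction, as argued above, or as welcome explicitness, as the replies argue, is exactly the disagreement in this thread.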

stickfigure
I don't quite follow this. You can create (very complicated) immutable objects that encapsulate their state, and provide methods that return new immutable objects with different - and still fully encapsulated - state. Vavr is a good example.
cryptica
Yes you can reduce and map a large state object into a smaller and simpler object before you pass it down to a child component but the encapsulation is still leaky because the parent component needs to know the implementation details of each action of a child component in order to use them correctly (for example, the parent component needs to know how the different actions of the child relate to each other in terms of how they affect the state); it creates a large contact area between components which creates tight coupling.

The idea of blackboxing/encapsulation is that the parent component should know as little as possible about the implementation of its child components.

senderista
I think some of the spirit of encapsulation (i.e., decoupling from implementation) is achieved by polymorphic functional interfaces, e.g. type classes in Haskell.
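A minimal sketch of that idea (the class and types are invented for illustration): a type class exposes operations while callers stay ignorant of the representation:

```haskell
-- The "interface": callers program against these three operations only.
class Counter c where
  newCounter :: c
  tick       :: c -> c
  count      :: c -> Int

-- One hidden, concrete representation.
newtype IntCounter = IntCounter Int

instance Counter IntCounter where
  newCounter = IntCounter 0
  tick (IntCounter n) = IntCounter (n + 1)
  count (IntCounter n) = n

-- Generic code that never sees the representation.
tickTwice :: Counter c => c -> c
tickTwice = tick . tick
```

Swapping `IntCounter` for a different instance leaves `tickTwice` untouched, which is much of what encapsulation is after.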
nybble41
Do you have a more concrete example? So far as I can see there is no reason why functional programming would require a parent component to know anything about how its child components interact with the state. The tight coupling you are describing sounds completely foreign to me as a Haskell programmer.
proc0
" the more interchangeable the components will be. It's like Lego blocks; "

This is precisely the reason why pure FP prioritizes referential transparency. Even if objects are perfectly encapsulated, other objects will still depend on the information they expose, and because that information mutates and changes over time, with enough complexity this is bound to cause some errors.

Compilers can't check program correctness because of the halting problem, so FP aims to give the programmer some patterns + laws to help better reason across this "higher" dimension of moving parts.

cryptica
>> Even if objects are perfectly encapsulated, with enough complexity, because other objects will depend on that information, ...

I would argue that when other objects from different parts of the code depend on the same state and there is no clear hierarchy or data flow direction between those objects, then that is going to cause problems regardless of whether the language is OOP or FP. The problems will manifest themselves in different ways but it will be messy and difficult to debug in either case (FP or OOP) because this is an architectural problem and not a programming problem. It will require a refactoring.

OOP helps to reduce architectural problems like this because it encourages developers to break logic up into modules which have distinct, non-overlapping concerns.

ummonk
In my view, the new hooks paradigm in React combines the best of both worlds in FP and OOP.
cannabis_sam
Encapsulation is trivial in FP tho.
overgard
Encapsulation is desirable because it limits the possibility space of what can operate on a set of data. Referential transparency is desirable because pure programs are much easier to reason about. If I understand what you're saying, it seems you're saying referential transparency and encapsulation are at odds and encapsulation is more valuable, but I disagree. Hiding state maybe keeps things tidy and enforces that you need to use the API, but that's not really the point IMO. The point of encapsulation is managing state mutations. Hiding state is only a small part. You don't need to hide state as much when it's immutable, because then you don't need to care what other code is doing with your emitted data structures; it doesn't affect you.
cryptica
Encapsulation doesn't necessarily mean hiding state. It means hiding the implementation details of how actions mutate the state. The same action called on a different kind of module instance can end up mutating the instance's internal state in a completely different way. The higher level logic should not be concerned with how a module performs an action.
bpyne
I think we need to get past the point of believing in some FP revolution in which enlightenment happens and people suddenly switch to Haskell, OCaml, Clojure, etc. FP is happening in a more evolutionary way with newer languages like Kotlin, Scala, F#, etc. taking ideas from Haskell, SML, and Lisp.

I'm not pretending to be the first to state this observation but I feel like it needs reinforcement here.

rjkennedy98
Don't forget JavaScript! ES6 did wonders for functional programming in JS.
overgard
I think most of the growth of FP is coming from libraries and hybrid languages. Things like React and Redux and streams/linq style operations on data structures, or default immutability. I don’t think “pure” languages will ever really become dominant but a lot of the best ideas are being borrowed.
gpderetta
I do not pretend to be a particularly skilled programmer, but in my not so long career I have picked up a bunch of tools: a few algorithms here and there, some data structures, some programming techniques like encapsulation, late binding, higher order functions, pipelines, various form of polymorphism (dynamic, static, ad hoc, inheritance based, structural or whatever), some concurrency patterns (message passing, shared memory, whatever). I end up using whatever seems more appropriate to me for a specific problem depending on intuition, personal preference and experience.

Now, various subsets of the items above have been labeled with different names (functional, procedural, OOP, generic, whatever), but of course most of the time no two people can agree on which subset deserves which label.

I must not be the only one, because a lot (but not all) of very successful languages are not very opinionated and let people mix and match bits as needed.

yodsanklai
I didn't watch the video, but the title raises questions. What is functional programming? Nowadays most languages are multi-paradigm, so it's not so clear what functional programming (or a functional programming language) is.

For instance, it's very common to have data types with mutable state in OCaml, or to use immutable data structures, closures, and higher-order functions in, say, Python. I don't see such a clear dichotomy between functional and non-functional programming languages anymore.

Besides, there are other language "features" that I feel have more impact on the code I write. For instance, static/dynamic typing, asynchronous I/O vs actors vs threads, module systems.

I see functional programming more as a tool and a programming discipline, well-suited to solving some problems, rather than a paradigm that one should adhere to no matter what.

pyrale
The talk actually takes time to answer these questions.

The title is also a bit clickbaity because the talk acknowledges that fp as a style is becoming common.

onion2k
JavaScript isn't a functional language itself, but you can use a functional library like lodash/fp (https://github.com/lodash/lodash/wiki/FP-Guide) on top of it to get all that lovely functional goodness in your frontend and node code. Using lodash/fp has made my frontend state management code a lot nicer, and I'm really only just starting out with it.
mikekchar
Personally, I think JS is a fine functional language. Like you say, it doesn't have a good FP style system library, but it doesn't have a good anything style system library ;-) My main complaint is that currying is awkward. One thing I have discovered, though, is that closures are relatively slow (that is, unoptimised) in most implementations. In several implementations, they can also leak memory in certain circumstances. There is a very old outstanding bug in V8 that gives a lot of details about this... unfortunately I'm not in a position at the moment to go spelunking for it (I wish I'd saved a link...)

Anyway, I've done quite a few fairly large katas in JS using only an FP style of coding without any dependencies and I really enjoyed it.

verttii
Personally I find functional style awkward in js. Mostly because data is not immutable, there are no functional operators (composition, application etc.) and no algebraic data types + pattern matching.

But most importantly, prototypal inheritance, in other words invoking an object's own methods as if they were pure functions, is what really puts me off.
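For readers unfamiliar with the combination the comment misses, here is what algebraic data types plus pattern matching look like in Haskell (a toy example; the names are invented):

```haskell
-- An algebraic data type: a value is exactly one of these shapes.
data Payment
  = Cash Int
  | Card String Int

-- Pattern matching destructures each shape; the compiler can warn
-- when a case is missing.
describe :: Payment -> String
describe (Cash amt)      = "cash: " ++ show amt
describe (Card name amt) = name ++ ": " ++ show amt
```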

erik_seaberg
Object.freeze does give you shallow immutability in javascript, though it makes setters fail without throwing.
mikekchar
A lot of that stuff is pretty modern in terms of FP, though. It's definitely not a pure functional language, though quite a lot of the data is actually immutable. It was really surprising to me that strings are immutable (but as the other commenter pointed out, they don't throw if you try to mutate them, so it's not that convenient). "Objects" and arrays are mutable, but it's pretty easy to avoid mutating them if you want to.

It probably wasn't clear, but the reason I didn't use any dependencies is because I was avoiding JS's built in inheritance mechanism, which I don't think is very compatible with FP. You can build objects out of closures and build your own object oriented mechanisms if you want. Unfortunately you run into the limitations of the implementations I mentioned.

I always hesitate to link to my own "fun" code, but just so you understand that I was not looking for code quality in this: https://gitlab.com/mikekchar/testy But it shows how I was using an FP style in JS to implement an alternative OO system. I really had fun writing this code and would have continued if there weren't problems with using closures in JS.

Edit: If you look at this, it's probably best to start here: https://gitlab.com/mikekchar/testy/blob/master/design.md

I really should link that in the README...

Roboprog
Look into Ramda.js if you haven’t yet. It adds partial function application / currying capabilities, as well as composition support.

E.g.

https://ramdajs.com/docs/#partial

https://ramdajs.com/docs/#pipe
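For readers who don't want to pull in the library, here is a minimal sketch of what partial application and composition give you. The `partial` and `pipe` helpers below are simplified stand-ins for Ramda's real versions, which handle arity, currying, and placeholders far more generally.

```javascript
// Simplified stand-ins for R.partial and R.pipe, just to show the idea.
const partial = (fn, ...preset) => (...rest) => fn(...preset, ...rest);
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);

const add = (a, b) => a + b;
const double = (n) => n * 2;

const add10 = partial(add, 10);           // waits for the remaining argument
const add10ThenDouble = pipe(add10, double); // left-to-right composition

console.log(add10ThenDouble(5)); // (5 + 10) * 2 = 30
```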

Borkdude
The video explains how JavaScript started out as a Scheme-dialect (Lisp) but for marketing reasons they chose a more Java-like syntax and adopted Java into the name.
catalogia
Javascript falls short of scheme in ways more substantial than java-like vs s-expression syntax. It also has one of the worst numerical towers ever put into a language (that being: "lmao everything is a float".) Also, "function-scope" is an abomination compared to proper lexical scoping.

Edit: I forgot to also mention: weak typing was an awful idea.

galfarragem
A historical mistake that humanity is paying for, and will keep paying for, a long time.

Scheme in different 'clothes' was a viable option. Lisp remains the most popular scripting language among Autocad users despite Autodesk pushing other languages (.NET and JS). So popular that Autocad clones also use it as a scripting language[1].

Edited [1] https://www.zwsoft.com/zwcad/features#Dynamic-Block

daliusd
I was working for the biggest Autocad competitor for more than 10 years and never had to touch anything similar to Lisp :)
lisptw102019
If my guess is right, that's because your company's product had their own proprietary language (MDL).

But, that was OK too, because if my guess is right, your company's product also had FAR FAR FAR better COM bindings than Autocad did for 99% of what you'd want to automate.

ben509
Was it a mistake, though? Languages have to be accessible to their audience, and Javascript caught on because of its relatively gentle learning curve.

If SchemeScript hadn't caught on, it might have been that VBScript took over the web.

hyperpallium
Insufficiently C-like languages get ignored, according to: http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...
doboyy
Language designers and enthusiasts will forever be disappointed at how many social and human factors are at play, which, coincidentally, are a large part of the motivation for programming languages.
TomMarius
There is an excellent TypeScript lib called fp-ts
nobleach
I love everything except the docs. The docs rarely show practical usage. Most functions are just type definitions. This is a MAJOR blocker to anyone that might not have a mathematical background. I find this issue quite a bit with the FP world though. The old joke, "as soon as you understand monads, you instantly become unable to explain them" sort of holds true here. How would I know what I should use? You just have to know.

Just this morning, I had to resort to Stack Overflow for using an Either... a concept I thought I well understood. Turns out, the way I've done it in Scala might not be the norm.

Many programmers coming to this library are coming from JavaScript, so expecting them to already understand some (or many) of these things might not be the right approach. The author has gone to great lengths to blog about the foundations of FP, so this might help a bit. I just wish the docs were fleshed out with more examples. (The repo is open source... I could put up or shut up here.)

Tade0
You may want to consider Ramda.js instead: https://ramdajs.com/

IMHO does a better job than Lodash, because:

1. All functions are automatically curried.

2. The order of parameters lends itself to composition.

EDIT: 3. Transducers.

enlyth
I never understood the point of Ramda. It's like it's trying to replace the core functionality of JS with something that's completely orthogonal to what the language actually is, but it's just a bolted on library.

I've worked on codebases where people ignore all built-in JS functions (like Array.map/filter) and write Ramda spaghetti instead, with multiple nested pipes and lenses and whatnot, to show off their FP purism.

Most of the time, you don't need any of this, it just makes the codebase unreadable, and hard for new people to join the project and be productive in a timely fashion.

tashoecraft
I only reach for Ramda when the built-in JS functions cannot accomplish what I want to do, or when it would be very messy/hard to reason about. That's very subjective, but I hate when I see people using lodash/Ramda map just to map over an array. There's a lot you can do with the built-in methods, and they will probably be the superior option.
wdroz
Maybe to avoid questionable behaviors, like in the article posted 3 months ago "Why ['1', '7', '11'].map(parseInt) returns [1, NaN, 3] in JavaScript" [0]

[0] - https://news.ycombinator.com/item?id=20242852
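The gotcha from the linked article comes from `Array.prototype.map` passing `(element, index, array)` to its callback, while `parseInt`'s second parameter is a radix:

```javascript
// map calls parseInt('1', 0), parseInt('7', 1), parseInt('11', 2)
const surprising = ['1', '7', '11'].map(parseInt);
console.log(surprising); // [1, NaN, 3]

// The usual fixes: wrap the callback, or use Number.
const fixed = ['1', '7', '11'].map((s) => parseInt(s, 10)); // [1, 7, 11]
const alsoFixed = ['1', '7', '11'].map(Number);             // [1, 7, 11]
```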

Roboprog
Things like Ramda and jquery help avoid fighting against IE and other browser nuances, as well.

Even when there seem to be native functions/ methods to do something. That’s where the monster lives.

Roboprog
You stick with your “polyfill”, I’ll stick with mine , so to speak.

It’s nice to be able to make a function as concisely as something like:

const foo_finder = R.find(R.propEq('prop', 'foo'))

...

const a_foo = foo_finder(a_list)
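For comparison, here is a dependency-free sketch of what that Ramda one-liner expands to. The `propEq` and `find` helpers below are simplified, hand-rolled stand-ins (the library versions are curried and far more general), and the data is invented for illustration.

```javascript
// Hand-rolled equivalents of R.propEq and R.find, to show what the
// Ramda one-liner does under the hood.
const propEq = (key, value) => (obj) => obj[key] === value;
const find = (pred) => (list) => list.find(pred);

const foo_finder = find(propEq('prop', 'foo'));

const a_list = [{ prop: 'bar' }, { prop: 'foo', id: 1 }];
const a_foo = foo_finder(a_list); // { prop: 'foo', id: 1 }
```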

enlyth
See this is one of the things I personally hate, accessing or evaluating properties on objects using a string.

We write code so it can be parsed, evaluated by the type checker, and so on. What if you mistype 'fop' instead of 'foo'?

onion2k
> Most of the time, you don't need any of this, it just makes the codebase unreadable, and hard for new people to join the project and be productive in a timely fashion.

This is exactly how I felt when I inherited a big project that uses lodash/fp. Having spent ~6 months with the code now I prefer having a functional layer on top of JS. It does make sense.

slifin
3. transducers
tobr
This. (No pun intended.) Lodash had its FP features bolted on as an afterthought, whereas Ramda was designed for it from the start.
pnako
Because it's not that useful.

There is a contest organized by the International Conference on Functional Programming: https://en.wikipedia.org/wiki/ICFP_Programming_Contest

It was more or less designed to show the superiority of functional programming languages. Yet in that contest C++ has done better than OCaml or Haskell...

The FP crowd seems to be more active doing advocacy than writing code. Yes, we know, there is that one trading company using OCaml. It's such a niche language that they have to pretty much maintain the toolchain and standard library themselves. Meanwhile, plenty of more successful companies use C++, C# or Java with no problem.

If you want to convince someone of the superiority of FP, write a real killer application. A new browser to compete with Chrome, a video game that can dethrone Skyrim or the Witcher 3. Maybe a DBMS that's even better than PostgreSQL? Basically: show, don't talk.

iLemming
> A new browser to compete with Chrome

Parts of Firefox are written in Rust

> a video game that can dethrone Skyrim or the Witcher 3.

afaik latest "God Of War" is written in Rust

> Maybe a DBMS that's even better than PostgreSQL?

Datomic - Clojure, Mnesia, Riak, CouchDB - Erlang

Yeah, I know that Rust is not FP lang, it's imperative, but it does adhere to FP principles.

"it's not that useful"? Heh.

iLemming
Just because you don't see it, doesn't mean it's not happening.

- Have you ever heard about how Walmart handles Black Fridays?

- Do you even know what's behind Apple's payment system?

- You ever used Pandoc, Couchbase, Grammarly, CircleCI, Clubhouse.io, Pandora, Soundcloud, Spotify?

- Have you ever asked what an app like WhatsApp, which was sold for $19 billion, runs on?

- or how Facebook fights spam, Cisco does malware detection, or AT&T deals with abuse complaints?

- How Clojure is used at NASA or how Microsoft uses Haskell?

Frankly, I don't even know what there is to debate. Functional programming is already here; it's been used in the industry for quite a while, and its usage is growing at a steady pace. Almost every programming language today has a certain amount of FP idioms, either built in or via libraries. So yeah, while you're sitting there contemplating whether FP is useful or not, people have been building hundreds of awesome products.

pnako
I said: it's not _that_ useful. I did not say it's completely useless.

Every large (or even small) company has people writing stuff in Perl, Bash, Haskell, Ruby, Rust, VBA, Scala, Lua or what not. I've been that guy, too.

More often than not it is a distraction, and it ultimately ends up being rewritten in C++, Java or Python. I think there are some niches where it helps; OCaml has had some success with static analysis and proof assistants, or even with code generation projects like FFTW.

iLemming
Honestly, do you really think a company with only 35 engineers could build, scale, and sell a product like WhatsApp for even a fraction of that amount using C++, Java or Python? I seriously doubt it.

Look, I've seen both sides, and I know this for sure (this isn't a mere opinion, this is a certain fact): FP lets you build and maintain products with smaller teams.

You don't have to take my word for it; do your research, google "companies using Clojure" (or Haskell, OCaml, Erlang, etc). You will see that either those companies are not too big, or the FP teams in large companies are not very large. Skeptics often cite this, claiming it proves that FP codebases don't scale to large teams. The truth is, you don't need a big team to build a successful product with an FP language. And the number of startups using FP langs is steadily growing.

hevi_jos
Because it is not the best solution for most computer problems.

Simple as that.

I am a functional and OOP programmer myself. I find functional way more elegant for modeling most mathematical problems, but OOP way better at modeling real life things with states.

OOP and states introduce lots of problems and complexity, but the solution is not removing states, or a series of complex mathematical entelechies.

In fact, "removing state" doesn't really remove it; it creates new objects with static state in them. It makes it super hard to model real life.

(Dynamic) states exist in real life: temperature, pressure, height, volume, brightness, weight...

There are programmers who treat programming like a religion: they only program in one system and believe it is the best thing in the world and that everybody should be forced to use it. I feel sorry for them and for the people who depend on them.

The solution will be new paradigms that are neither OOP nor FP.

yogthos
Having used FP for the past decade in many different domains, that's huge news to me.
psychoslave
>(dynamic)States exist in real life. Temperature, pressure, height, volume, brightness, weight...

It's more that states better match our most common way of modeling our sense-data. That's easier for us to grasp, but it doesn't mean it's the approach that will provide the best results.

If you take the example of mass in physics, most of the time it's perfectly fine to treat it as a first-class attribute of an object. But that's not how the Higgs mechanism approaches the notion.

HALtheWise
One thing I haven't seen brought up in this thread yet is support for foreign language embeddability. For example, Python code is technically quite slow, but that often doesn't matter much because it is easy to write external functions in C or C++ that behave like normal Python functions. I imagine that it would be more difficult to embed C++ code into a language with strong functional guarantees. In that sense, the performance of Python is close to the performance of the fastest language with the same paradigm, which is (with current compilers) better for imperative programs that are structurally more similar to how the CPU operates.
LandR
Good ideas take a long time to catch on in tech, regardless of how much anyone believes its a fast moving industry.
axilmar
Functional programming? absolutely 100% yes.

Functional programming languages? well, it depends on the problem.

If performance is an important issue, most of the time functional programming languages are a big 'no'.

DonaldPShimoda
I mean, Jane Street use OCaml for high-frequency trading, where latency can cost significant amounts of money. They seem to be perfectly happy with it.
pyrale
Well, if performance is really an issue, I guess anything that is not C, C++, rust or a thin wrapper over those languages or legacy libs written in the likes of Fortran won't be an option, so yes.

That being said, it seems like performance is not an issue for most of the code written these days, aside from not writing quadratic solutions for problems solvable in linear time.

If you use java, .net, go or the likes, odds are you would be fine with a functional language ; and if you need performance with those languages, odds are that you will need arcane knowledge equivalent to what you would need to know to make performant fp code.

AlexandrB
This video exists in an alternate reality where marketing departments do not. Many of these languages became popular not because of some intrinsic property, but because of a strong marketing push by one or more backing companies. There was a period of time, not so long ago, where "object oriented" was basically a checkbox on language marketing copy, and if your language didn't have it you would get scoffed at.
WhompingWindows
Can you give contemporaneous examples? I'm curious if there are companies out there actively pushing their language/framework, as opposed to sort of passively posting/updating without marketing.
yr1337
MS .Net
stkdump
The lines between marketing and "passively posting" are very blurry, especially in the tech world. Is Linus T. marketing, when in a technical presentation about git he says "if you like SVN you might wanna leave"? Is Mozilla marketing, when they claim in technical blog posts that rust provides system language level performance while being safe?
psychoslave
Did you even skim the video? It does speak about large cash injections on the marketing side.
hootbootscoot
OO is merely a style of laying out imperative code for the compiler to put into proper linear order.

At the end of the day, there are instructions emitted in a linear fashion, and other instructions running (the OS?) can provide an execution context ("Hi process, here's your memory area with pretend addresses so you can think it's all yours; you go over there to Core-2 and run at Z priority.")

OO is not particularly easier to learn than FP, but it does have the contextual advantage of having been delivered on the back of FAST compiled languages like C++.

Java runs as fast as the big money thrown into its VM can make it run. If the JVM were dog-slow, you would see lower adoption of it.

OCaml, as used by corps like Jane Street etc., is not directly for running application code (or rather, the application in question does code generation.)

High level languages could be expected to adopt either code-generational or Cython-style approaches. (Chicken Scheme, for example)

C++ merely has the historical accident of bridging 2 generational worlds of computing, hence you can find C++ full-stack.

Anyone for doing DSP purely in Ruby with no C-libs?

phtrivier
Because no one agrees on what "functional programming" is.

Plenty of popular languages today let you treat functions as data that you can pass around, compose, etc... Is that FP? Or does only LISP count, because you neeeed to blend code and data? And if so, what for?

Plenty of popular languages let you use generic ADTs, union types, etc... Is that FP? Or does only Idris count, because you neeeed dependent types and whatnot? And if so, what for?

Plenty of popular languages let you use immutable data structures. Is that FP? Or does only Clojure count, because you neeeeeeed complete immutability all the time?

I don't care about "FP".

Closures are neat. Promises are neat too. Are they "FP enough"? Is async/await FP? Are Go channels FP? 'Cause they sure seem useful.

map/reduce is neat. Type systems are neat too. I'd rather have ML's type system than C's, but I'd rather have C's type system than LISP's. Sorry. I'm probably a bad human being.

Please, let's go back to building stuff.

gfs78
It's basically because of marketing, fads, and hype, but we also have to take into account that FP is probably OK for the Hacker News audience but way too complex for the average developer.

Most code is LOB apps and social media apps churned out by software factories and internal IS/IT departments. In these kinds of projects, coding is a rite of passage before becoming a team leader or a project manager, so most devs won't invest much in their coding skills. As a result, the average code tends to be badly decomposed procedural code over a procedural-like class hierarchy, and devs just follow the fads because that's what gets them jobs.

Adding FP to this formula could prove really wrong for those in charge of projects. Better to be conservative and use Java, C#, Python or even Node.js/JavaScript, as they allow devs to churn out the same procedural code as ever, just in different clothes.

vkaku
Because simple is better than perfect.
iLemming
Well, funny you said that. Clojure people put simplicity above everything else https://www.youtube.com/watch?v=34_L7t7fD_U
FpUser
I've got another question. Why don't those prophets leave intelligent people alone and let them use whatever tooling/approach they find appropriate for solving a particular problem, instead of heating the air trying to propagate/force whatever ideology they carry?
JustSomeNobody
Because those profits don't think your definition of appropriate in this context is correct.
BrissyCoder
People at the places I work keep memeing links to blog posts along the lines of "OOP is dead. Functional programming is the new king".

Yet to see a single line of a functional language in production.

As other commenters have mentioned, most decent modern languages are multi-paradigm.

fheld
This might be interesting for you, Lisp in production:

https://tech.grammarly.com/blog/running-lisp-in-production

juki
Notice that they are using Common Lisp, which is a multi-paradigm language, rather than a more functional lisp like Clojure.
TurboHaskal
They also use Clojure at Grammarly AFAIK.
Grue3
Common Lisp is a great language for OOP though. Almost any serious CL codebase heavily uses CLOS.
capdeck
I used to work at a company that had part of its process written in Lisp, and it was in true production. Once the (FP) guy left the company, everyone else had to support that code. What a nightmare that was; no one wanted to touch it with a ten-foot pole. Had we had another FP guru in our midst, things might have turned out differently. But everyone agreed that that part needed to be rewritten in a language everyone else was using. In real life: if most stuff in your company is FP and there is plenty of expertise to go around, do FP. If not, do not. :-)
Qwertystop
Arguably, if you knew that A: there was code in Lisp and B: only one person knew how to support it, they should have either rewritten it while the one who understood it was still there, or had more people learn Lisp, or hired more people who knew Lisp. It shouldn't have been allowed to reach the point where someone quit without anyone else having a clue.
yogthos
I guess you haven't really looked too hard then https://clojure.org/community/companies
Ragib_Zaman
Jane Street is OCaml basically from top to bottom. They seem pretty fast.
pgustafs
The backend for HN is written in a Lisp dialect...
schainks
In case you missed this: https://twitter.com/guieevc/status/1002494428748140544
mbo
Functional programming at Atlassian: https://youtu.be/HSQ9ET0bOYg
McWobbleston
Jet.com

https://medium.com/jettech

turk73
We have tons of Functional code and it's growing every day. What is hard is hiring developers who have a decent background in it because there are so many maintenance and legacy systems out there and so many lazy developers who aren't staying current.

I will tell you, I typically don't hire a Java developer who hasn't done any streams programming at the very least. And that's not even really FP, but if someone can't understand immutability, lambdas, predicates, functions as first class objects, and how to solve coding problems in this fashion, then they're of little use because we're working with frameworks now that assume you know all this.

People get very upset when you tell them that the OOP patterns they learned are less useful these days.

mplanchard
There is lots of functional language code in production.

As a recent example GitHub uses this Haskell application for code analysis: https://github.com/github/semantic

Also, Erlang powers a huge amount of the US’ cellular infrastructure, as well as RabbitMQ, which is used in a ton of production workloads.

There’s actually a pretty decent list on Wikipedia: https://en.m.wikipedia.org/wiki/Functional_programming

rafaelvasco
I feel it's because functional programming is not a general application methodology. It excels at several idioms that we use as programmers, but it's something more specialized and niche than OO programming. I personally use OO as a base and iterate from there, using functional idioms where applicable. Some classes of programs can be described in their entirety in functional terms, but they're a small portion compared to the whole. Everything can be expressed in the OO idiom, even if it's not the optimal way; I don't know if the same can be said of functional. I would like to know more.
iLemming
> Everything can be expressed in OO idiom

You know, at some point in history most scientists in Europe believed that the math can only be done using Roman numerals.

yogthos
I've been working with FP for around a decade now, and anything you can express in OO can be expressed with FP just as easily and often in a much simpler and more easily maintainable fashion. I copresented a talk on how my team uses FP in our platform if you're interested in more details:

https://www.youtube.com/watch?v=IekPZpfbdaI

didibus
Do not try and make the computer functional. That's impossible. Instead, only realize the truth... THERE IS NO COMPUTER. Then you will see that it is not the computer that is functional, it is yourself.
Sniffnoy
Hm, so, Feldman claims that it wasn't the OO that made C++ popular, but rather the other features added on top of it that C with Classes didn't have. But one of those features that C with Classes didn't have was virtual functions. Without that, it's not clear how OO C with Classes really is. This potentially undermines Feldman's argument, because he hasn't ruled out the possibility that virtual functions were one of the key factors in C++'s success.
typedef_struct
These things are not all-or-nothing.

I have one compute device, the GPU, that I program with its language (eg GLSL, OpenCL).

I have another compute device, the CPU, that I program with its language (eg C, C++).

I have code to control these devices, that mostly handles scheduling and waiting on the results of these computations (as well as network traffic and user input), and I program that in a language that supports functional style (eg C#, TypeScript).

dzonga
As someone who made my first SPA using Elm and met the presenter once in NY: functional programming is elegant and nice, but practicality is another matter. For it to become the norm, it has to have a 10x advantage over the status quo. On the frontend, JS already offers some features of functional programming alongside its imperative parts. On the backend, none of the functional languages you would want to use, such as OCaml, match the ecosystems of the Python, Java, Node.js and .NET worlds. Hell, even F#, which is an excellent language, is treated like the bastard stepson of Microsoft.
ummonk
I mean, on the backend functional programming usually really shines, since you can implement your APIs using FP without needing to hold any state (the FP code just translates HTTP requests into calls to stateful storage).

E.g. at Facebook, the PHP code I write is usually highly functional.

iLemming
Clojure & ClojureScript are very practical and pragmatic. I've been writing JavaScript for quite a long time, and I have tried and used most *script languages: CoffeeScript, TypeScript, LiveScript, GorillaScript. I've looked into Elm, ReasonML and PureScript. ClojureScript today is the most balanced one: it gives you a real productivity boost, simple FFI, different bundling options, gradual typing and generative testing, and code that is clean, concise and very easy to reason about. It is a shame people dismiss it for dogmatic reasons without even giving it a try.
flowerlad
Functional programming is not new, it has been around for many decades. The reason it didn't catch on is because it doesn't map very well to how our brain works. Human brains are object oriented, so OOP is very easy to grasp.

The real question is why people are now taking a second look at functional programming. And the answer is Moore's law. Moore's law is coming to an end, and CPUs are not getting faster. Instead they are adding more and more cores. To take advantage of lots of cores you need concurrency. OOP is not very concurrency-friendly because objects have state, and to avoid corrupting state in a multi-threaded environment you need locks, and locks reduce concurrency. Functional programming doesn't have state, so you don't need locks, so you can get better concurrency.

cuddlecake
> Human brains are object oriented, so OOP is very easy to grasp.

Can I cite you on this? Because I have only ever seen this explained in Programming 101, where Java is the language they teach.

I wonder where this sentiment comes from. I imagine it came from marketing.

iLemming
> Can I cite you on this?

No, you can't, because, like the other commenter noted: "This is utter rubbish." It only looks easy to understand on the surface but quickly becomes a mess; "spaghetti code" and "lasagna code" are terms invented in the OOP realm. That being said, some advanced FP concepts can be pretty daunting to grasp as well.

Saying that human brains are OOP- or FP-oriented is like saying that human brains are wired to recognize patterns in music but not in color, or something like that.

McWobbleston
Yeah I've gotta be honest the first few times I was taught OOP I couldn't quite grasp the purpose. I like it now for encapsulation of state, but generally I find it much easier to deal with records + pure functions as building blocks.
johnisgood
Why would you need functional programming for that? Ada has extremely easy-to-use language constructs for concurrency.

Just to dive into Ada/SPARK: https://docs.adacore.com/spark2014-docs/html/ug/en/source/co...

ychen306
> Functional programming doesn't have state, so you don't need locks, so you can get better concurrency.

This is not true.

Many algorithms are intrinsically imperative (e.g., quicksort). You can represent it using some monads in Haskell to hide this, but in the end your code is still imperative; and if you want to parallelize it, you still have to think about synchronization.
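The contrast can be sketched even in JavaScript rather than Haskell. The elegant "functional" quicksort below allocates fresh arrays at every recursion level, while the classic algorithm the comment refers to partitions a single array in place; this is an illustrative sketch with a naive pivot choice, not a production sort.

```javascript
// Filter-based "functional" quicksort: pretty, but it copies sublists
// on every call instead of partitioning in place like Hoare's original.
const qsortFn = (xs) =>
  xs.length <= 1
    ? xs
    : [
        ...qsortFn(xs.slice(1).filter((x) => x < xs[0])),
        xs[0], // first element as pivot (naive choice)
        ...qsortFn(xs.slice(1).filter((x) => x >= xs[0])),
      ];

console.log(qsortFn([3, 1, 4, 1, 5])); // [1, 1, 3, 4, 5]
```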

yogthos
That's utter rubbish. My team has been working with Clojure for close to a decade now. We regularly hire coop students from university, and none of them have ever had problems learning functional programming. It typically takes around a couple of weeks for a student to become productive and start writing useful code. The only people I've ever met who say that FP doesn't map to the way our brains work are people who're deeply invested in imperative style and never tried anything else.

The reasons for the imperative style being dominant are largely historical. Back in the day we had single core machines with limited memory and very slow drives. Imperative style and mutability makes a lot of sense in this scenario. Today, the problem of squeezing out every last bit of performance from a single core is not the most interesting one. And we're naturally seeing more and more FP used in the wild because it's a better fit for modern problems.

flowerlad
I think you can do better than calling something you disagree with “rubbish” because your team didn’t have problems with it.

Here’s an example of people finding functional programming unnatural, maybe you can leverage your experience to explain why he is wrong:

Functional Programming Is Not Popular Because It Is Weird https://probablydance.com/2016/02/27/functional-programming-...

azhu
I would venture a guess that what makes FP or declarative-style programming/thinking feel weird is not any grand context like the nature of the human brain, but rather the lesser one: people usually try to learn it after having already learned imperative-style stuff.

The functionally written recipe from https://probablydance.com/2016/02/27/functional-programming-... may be less helpful if I need to know exactly what steps to take to bake a cake, but it will actually be much more helpful if I want to know what a baked cake is. It isn't quite a fair example because it leverages how humans already know what a baked cake is, what a preheated oven is, etc and the clunkiness of the FP-style recipe is likely more due to that than anything fundamental to FP.

Let's try a different example that better maps to real world application logic. The task is to build a scootybooty.

Imperatively, a scootybooty program is:

- Acquire four wheels and two axels.

- Chop down a tree.

- Plane wood from tree into curved flat shape.

- Attach axels to convex side of planed wood shape.

- Attach wheels to axel.

Declaratively it is:

- A scootybooty is a planed plank of wood with two trucks.

- A planed plank of wood is a flat board.

- A truck is an axel with two wheels.

Now imagine your boss asks you wtf this scootybooty thing is and what it can do. Which program more quickly allows you to answer these questions? My favorite thing about the FP/declarative paradigm is that the mental model first-classes the abstract thing you are implementing above how you implement it. Imperative style encourages you to think about the steps it takes to do something moreso than the thing itself which IMO can lead to cart-before-horse type mistakes in planning. Declarative programming: "the forest is made of many trees", imperative programming: "tree, tree, tree, tree, tree, tree..."
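The declarative version above can be sketched as plain data constructors; all the names are taken from the comment's example, and the shapes are purely illustrative.

```javascript
// A scootybooty, defined declaratively as data built from smaller data.
const wheel = () => ({ kind: 'wheel' });
const axel = () => ({ kind: 'axel' });
// A truck is an axel with two wheels.
const truck = () => ({ kind: 'truck', axel: axel(), wheels: [wheel(), wheel()] });
// A planed plank of wood is a flat board.
const plank = () => ({ kind: 'plank', shape: 'flat board' });
// A scootybooty is a planed plank of wood with two trucks.
const scootybooty = () => ({ kind: 'scootybooty', deck: plank(), trucks: [truck(), truck()] });

// Answering "what is a scootybooty?" is just reading the definition:
console.log(scootybooty().trucks.length); // 2
```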

yogthos
It's not just my team; there are many people working with Clojure and other functional languages out there, and there are plenty of FP projects in production in pretty much every domain.

Functional programming is weird in the same way Japanese is weird to an Anglophone. A person who learned Japanese as their mother tongue will find English equally weird. The comments in the link you posted already address the points the author tries to make, which all boil down to FP being different from what they're used to.

iLemming
Have you actually tried it? I dunno, after using Clojure for a while I realize now - there's nothing weirder than having to dig into deep, nested hierarchies of Java classes. I simply don't understand anymore people who willingly write that kind of code and even claim to be happy and productive.
asutekku
It might be nice to some, but man, you are required to learn a dozen different paradigms to use it. It's expensive (harder to teach) and not intuitive for a beginner.
zygimantasdev
Maybe Elm itself is a killer app, but certainly not elm-ui. I don't think Datomic is a killer app either. It's certainly not comparable to Rails/OSes at scale.
vim-guru
Sorry, but I think you need to take another look at Datomic
mac01021
Datomic is really neat but, for any system that operates at significant scale, I think the approximately-10-billion-datom capacity is probably too great a concern.

For example: Stop&Shop has 415 stores, and

  365 days * 415 stores * 100 purchases per day * 50 datoms per purchase
will fill up your system in 14 years without even spending datoms on inventory and the like. And that "100 purchases per day" could be low by a factor of 5 or 10 (I don't know).
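The back-of-the-envelope arithmetic checks out (a quick sketch using the comment's own numbers):

```javascript
// Datoms accumulated per year from purchases alone, per the estimate above.
const datomsPerYear = 365 * 415 * 100 * 50; // 757,375,000
const capacity = 10e9;                      // ~10 billion datoms
const yearsToFill = capacity / datomsPerYear;

console.log(datomsPerYear); // 757375000
console.log(yearsToFill);   // ~13.2, i.e. about 14 years
```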
dustingetz
Datomic shards naturally though – Datomic queries are functions that take database values as input. You can pass multiple database values as input and join across them. This is a first-class construct. I don't remember if this works in Cloud but it definitely works in onprem.
hosh
Part of Python's strength is in its ecosystem related to machine learning. That, and it is a popular language to teach to kids these days.
pbreit
To this newbie, procedural is ===SO=== much easier to understand.
iLemming
Actually it's the other way around. For someone who is not exposed to programming at all, it is much easier to pick up a language like Clojure. This is not merely my opinion - I have seen it multiple times, with different people.

Julie Moronuki, who had never been exposed to programming at all and has a degree in linguistics, decided to learn Haskell as her first programming language, just as an experiment. Not only did she manage to learn Haskell and become an expert, she co-authored one of the best-selling Haskell books. I remember her saying that after Haskell, other (more traditional) languages looked extremely confusing and weird to her.

alisiddiq
I only had experience of coding in Matlab in university, and started learning Clojure in my first job. It was very intuitive.
hootbootscoot
So, I DO believe that this high-level programmer is very earnest, I just don't think that he is starting with a full deck of cards. The manner in which he quickly brings up C and its "killer app" being systems programming, and then jumps into the Javascript morass, sort of suggests that he should start with first principles on how computers function.

Computers are imperative devices. I don't think that a for-loop or a map function fundamentally impedes understanding of this concept. I DO think that languages that run on top of virtual machines need to acknowledge their dependency hierarchy and stop attempting to "level/equalize" the languages in question. One would use C in order to write a VM like V8 that could then run your scripting language. The core of AutoCAD is surely C++ with some lower-level C (and possibly assembler), regardless of whichever scripting language has then been implemented inside of this codebase, again, on a virtual machine.

The Operating System is a virtual machine. The memory management subsystem is a virtual machine.

Javascript runs on browsers (or V8, but that was originally the JS engine of a browser) and has inherent flaws (lack of a type system, for one) that limit its use in specifying/driving code generation that could provide lower-level functionality. THAT is the essential issue. VHDL and Verilog can specify digital logic up to a certain level of abstraction. C++ and C code-generation frameworks can be used to generate HDL code to some degree, to the degree that libraries make them aware of the lower-level constructs such HDLs work in. I have no doubt that Python's MyHDL presents a very low learning curve in terms of having the Python interface, but then the developer needs to be aware of what sort of HDL MyHDL will output and how it will actually perform in synthesis and on a real FPGA.

We don't need MORE layers of opaque abstraction. People need to learn more about how computers work as abstraction doesn't obviate the need to know how the lower levels work in order to optimize ones higher level code.

I can provide specific examples regarding libraries that purport to provide a somewhat blackbox interface, but upon deeper examination DO, in fact, require intimate knowledge of what is inside.

Abstractions are imperfect human attempts to separate concerns and they are temporary and social membranes.

Now, having said all of this: If a person ran a Symbolics Lisp system, such a system was holistic and the higher-level Lisp programmer could drill down into anything in the system and modify it or see how it was made.

I digress... read the source code for any magical black boxes you are thinking of employing in your work.

mac01021
Some of those sources are pretty daunting...
hootbootscoot
indeed... hence the desire for blackboxes.

leaky abstractions require the occasional lid-lifting... and all abstractions have a tendency to leak somewhere or other, especially if they attempt to be all encompassing.

I think FP is certainly a viable high-level specification, but ultimately there is lower-level code 'getting stuff done' (lol, "side effects"). One has to be at least roughly aware of HOW one's specification is getting implemented in order to solve problems that arise and in order to optimize.

This is all the more compelling reason to cease this relentless push to "cram more stuff down the tubes" or "add more layers to the stack"

I honestly think that we need to return to KIS/KISS (keeping it simple)

SIMPLIFY and remove extraneous stuff that prevents one from having a total mental model of what is happening.

pts_
Because it is difficult and funding flows to workflows which treat software like brick laying.
sheeshkebab
‘cause software development is as far away from math as plumbing. there, don’t need to watch the video anymore.
austincheney
I cannot watch the video as youtube is blocked at my office. But I can answer the premise.

OO is the norm because it has immediate business value and is easier to teach to young people. Most programmers in the workplace are produced by educational institutions, which have competitive, quantifiable metrics to meet.

FP requires thinking in terms of calculus. This isn't hard; personally I find it much faster and easier. But thinking in calculus does require some maturity, and possibly some analytical experience, that young students may not find comfortable.

---

This question can also be answered in terms of scale.

FP reinforces simplicity. Simplicity requires extra effort, often through refactoring, in order to scale or allow extension for future requirements. This is a mature approach that allows a clearer path forward during maintenance and enhancements, but it isn't free.

OO scales immediately with minimal effort. OO, particularly inheritance, strongly reinforces complexity, but scale is easily and immediately available. This is great until it isn't.

_han
The top comment on YouTube raises a valid point:

> I've programmed both functional and non-functional (not necessarily OO) programming languages for ~2 decades now. This misses the point. Even if functional programming helps you reason about ADTs and data flow, monads, etc, it has the opposite effect for helping you reason about what the machine is doing. You have no control over execution, memory layout, garbage collection, you name it. FP will always occupy a niche because of where it sits in the abstraction hierarchy. I'm a real time graphics programmer and if I can't mentally map (in rough terms, specific if necessary) what assembly my code is going to generate, the language is a non-starter. This is true for any company at scale. FP can be used at the fringe or the edge, but the core part demands efficiency.

js8
> FP will always occupy a niche because of where it sits in the abstraction hierarchy

At some point in history, people stopped worrying about not understanding compilers, how they allocate registers and handle loops and do low-level optimizations. The compilers (and languages like C or C++) became good enough (or even better than humans in many cases) in optimizing code.

The same happened with managed memory and databases, and it will happen here, too. Compilers with FP will become good enough in translating to the machine code so that almost nobody will really care that much.

The overall historical trend of programming is more/better abstractions for humans and better automated tools to translate these abstractions into performant code.

chubot
Programming has grown so much as a field that generalizations like this rarely capture the truth.

It's true that in many domains, people care much less about performance than they used to.

At the same time, other people care a lot more about performance. Programming is just big and diverse.

The end of single-core scaling is one big reason it's more important than ever.

Another reason is simply that a lot more people use computers now, and supporting them takes a lot of server resources. In the 90's there were maybe 10M or 100M people using a web site. Now there are 100M or 1B people using it.

I think there's (rightly) a resurgence in "performance culture" just because of these two things and others. CppCon is a good conference to watch on YouTube if you want to see what people who care about performance are thinking about.

----

If you're writing a web app, you might not think that much about performance, or to be fair it's not in your company's economics to encourage you to think about it.

But look at the hoops that browser engineers jump through to make that possible! They're using techniques that weren't deployed 10 years ago, let alone 20 or 30 years ago.

Somebody has to do all of that work. That's what I mean by computing being more diverse -- the "spread" is wider.

yogthos
The point here is that for many domains performance is not the top consideration. And it's also worth noting that it's perfectly possible to tune applications written in FP languages to get very good performance. It's also possible to identify the parts of the code that are performance critical and implement those using imperative style. This is especially easy to do with Clojure where you have direct access to Java.

So, yeah if you're working in a niche domain where raw performance is the dominant concern, then you should absolutely use a language that optimizes for that. However, in a general case using FP language will work just fine.

chubot
Sure I get that, but I'm saying your statement about "the overall historical trend" is wrong, or at least fails to capture a large part of the truth.

> At some point in history, people stopped worrying about not understanding compilers

This part is misleading too -- I would say there is a renaissance in compiler technology now. For the first 10 years of my career I heard little about compilers, but in the last 10, JS Engines like v8 and Spidermonkey, and AOT compiler tech like LLVM and MLIR have changed that.

The overall historical trend is that computing is getting used a lot more. So you have growth on both ends: more people using high level languages, and more people caring about performance.

It's not either/or -- that's a misleading way of thinking about it. The trend is more "spread", not everyone going "up" the stack. There will always be more people up the stack because lower layers inherently provide leverage, but that doesn't mean the lower layers don't exist or aren't relevant.

And lots of people migrate "down" the stack during their careers -- generally people who are trying to build something novel "up stack" and can't do it with what exists "down there".

Supermancho
Once parallel execution became part of every design discussion that had a performance concern, the vast majority of programmers stopped caring (and consequently talking) about compilers. Who cares if you can do 36k requests/s to my 18k if I can do 36k across 2 or 3 machines? I pass that on to the customer. Why try to hire for or wait around for an optimization trick to double performance (that will likely never be realized or discovered) when there's business to be done? The post-optimization is pure profit and can be quantified, so might as well wait and let some specialist (who does care about compilers) handle it if the product ends up being valuable enough to hire them. This is how development and hiring works today.

> At some point in history, people stopped worrying about not understanding compilers

> This part is misleading too

Not in the least. Interpreting that to mean "all people stopped worrying" is deliberate misinterpretation.

blub
You're simply used to working in environments where technical excellence doesn't exist. In such environments performance is not a big concern, but neither is quality in general...
BoiledCabbage
Over-indexing on performance has nothing to do with technical excellence. It's having the wrong priorities.

Saying people who don't optimize for performance don't have technical excellence is just like saying people who don't get all of their program to fit into 32kb don't have technical excellence.

Yes it requires skills to get a program to run in such a small amount of space, just like it takes skill to perform detailed performance optimizations. But in either case if that's not your job you're wasting time and someone else's time, even if it makes you happy to do so.

A product is designed to serve a purpose; if instead of working on that purpose a developer is squeezing out a few additional cycles of perf or a few additional kb of memory, they have the wrong priorities.

No, that doesn't mean go to the other extreme, but choosing not to spend unnecessary time on performance or size optimization is entirely unrelated to technical excellence. And any senior engineer knows this.

blub
Except OP said quite clearly "stopped caring" and "no one cares". Bit of a stretch from that to your over-focusing...
jcelerier
> At some point in history, people stopped worrying

Did they? Because I keep seeing people around me who want to get into FPGA programming because they aren't getting enough juice from their computers. Sure, if you're making $businessapp you don't care, but there is a large segment of people who really really really really want things to go faster, with no limits - content creation, games, finance... hell, I'd sell a rib for a compiler that goes a few times faster :-)

asjw
Large as in 1 out of 1,000.

The point is that to be mainstream it's enough to be used by one major app store

How many app developers care about FPGAs or what compilers do, given that they don't even know what the underlying OS does or when and why memory is allocated?

I work in finance; even there, performance matters only for niche projects. The bulk of the job is replacing Excel macros with something slightly less 90s-style.

spamizbad
Fun fact: C used to be considered a high-level language. Now everyone talks about it being "close to metal" which to olds like me is a dead give-away the person either doesn't know C or doesn't know the "metal". Most of the stuff people think of as being the "metal" in C are, in many cases, virtual abstractions created by the operating system. Embedded development w/o dynamic memory allocation less so... but that's not what most people are talking about.
nineteen999
Well it depends on what side of the kernel/userspace boundary you are talking about doesn't it.

While C for userland programs may need to conform to the operating system's libc and system call interface abstractions, on the other side of the syscall boundary is C code (ie. the kernel) that is indeed very "close to the metal".

pjmlp
Except that C's abstract machine is closer to a PDP-11 than to what a modern i7/ARM is doing.

So unless you are doing PIC programming, that "close to the metal" is very far away.

nineteen999
Running any code in the CPU's most privileged ring, regardless of language, is going to give you access to directly programming the MMU, scheduling processes across multiple CPUs, control over caches to a fairly large extent, and the ability to control every device in the system. A large amount of this is achieved by bit-banging otherwise inaccessible registers in the CPU itself via assembly (ie. in the case of the MMU, the GDT and IDT for x86), or via MMIO for modern buses/devices. The language doesn't necessarily "need" to have a complex model of the entire physical machine to be able to achieve all of those things. How much closer to the metal do you want to be?

You really want your programming language to have innate constructs for directly controlling the baggage the x86 CPU (or any other for that matter) brings with it? I don't.

You also want kernel code to be performant (ie. compiled by a decently optimizing compiler, of which there are many for C), and to let you disable garbage collection or be totally free of it so you can explicitly manage separate pools of memory. C ticks all those boxes, which is why it's still the most dominant and widespread language for OS kernel development nearly half a century after UNIX was first rewritten in C, and will be for years to come, like it or loathe it, and despite there being much more modern contenders (eg. Rust) which don't have the momentum yet.

pjmlp
C doesn't tick any box regarding:

- vector execution units

- out of order execution

- delay slots

- L1 and L2 explicit cache access

- MMU access

- register windows

- gpgpu

All of that is given access by Assembly opcodes, not C specific language features.

And if you're going to refer to language extensions to ISO C for writing inline Assembly, or compiler intrinsics, well, the first OS written only in a high-level language with compiler intrinsics was done 10 years before C existed and is still being sold by Unisys.

The only thing that C has going for it are religious embedded devs that won't touch anything else other than C89 (yep not even C99), or FOSS UNIX clones.

And yeah, thanks to those folks, the Linux Kernel Security summit will have plenty of material for future conferences.

jstimpfle
> And yeah, thanks to those folks, the Linux Kernel Security summit will have plenty of material for future conferences.

In the meantime, did you find a memory leak in my code? https://news.ycombinator.com/item?id=21275440

Not that I want to vehemently disagree with your security statements, but I think I'd love to have a little bit more "show" and less "tell". That also applies to showing practicality of managed languages, practicality of 90's business software development (C++/COM), practicality of dead commercial languages (Delphi + VCL).

Giving just endless lists of ancient buzzwords doesn't help.

pjmlp
It is coming for sure, I have not forgotten about it, I just have a private life to take care of, you know?

Regarding show, don't tell.

The 21st century React Native for Windows is written on top of COM/C++,

https://github.com/microsoft/react-native-windows

https://www.youtube.com/watch?v=IUMWFExtDSg

We are having a Delphi conference in upcoming weeks, https://entwickler-konferenz.de/, and it gets regularly featured on the German press, https://www.dotnetpro.de/delphi-959606.html.

jstimpfle
> It is coming for sure, I have not forgoten about it, I just have a private life to take care of, you know?

I was thinking you'd look at it before writing your next 25 comments, but it seems I was wrong. So I'll just wait, it's fine.

> The 21st century React Native for Windows is written on top of COM/C++

From a skim I could find exactly zero mentions of COM/C++ stuff in there. Sure, this RN might sit on a pile of stuff that has COM buried underneath. That doesn't mean that COM is a necessity to do this React stuff, and not even that it's a good design from a developer's perspective.

You give zero ideas what's a good idea about COM. Just buzzwords and links to stuff and more stuff, with no relation obvious to me.

If you actually have to go through the whole COM boilerplate and the abominations to build a project with COM, just to connect to a service, because some people thought it wasn't necessary to provide a simple API (connect()/disconnect()/read_next_event()) then the whole thing isn't so funny anymore.

pjmlp
ReactNative for Windows uses WinUI and XAML Islands, which is UWP, aka COM.

I really don't know what kind of COM you have been writing, because COM from VCL, MFC, ATL, UWP, Delphi, .NET surely doesn't fulfill that description.

As for what COM is good for,

"Component Software: Beyond Object-Oriented Programming"

https://www.amazon.com/Component-Software-Object-Oriented-Pr...

jstimpfle
Maybe I was unclear, but it was a C++ program (dealing with macros, as I said - VARIANTS and DISP_IDs and PROPERTIES and stuff). No joy to use.

As for other languages, I haven't touched COM at all but the idea of making GUIDs for stuff and registering components in the operating system doesn't seem a good default approach to me. Pretty sure it's more reliable to link object files together by default, so you can control and change what you get without the bureaucracy of versioning, etc.

> ReactNative for Windows uses WinUI and XAML Islands, which is UWP, aka COM.

Is the fact that COM is buried under this pile more than an unfortunate implementation detail?

nineteen999
Which modern, portable language gives you direct control over the MMU, out of order execution, delay slots and explicit cache access, other than hardware specific assembler? None that I know of can do this in a hardware agnostic way. Do tell.

I clearly mentioned that assembler was required for much of this, where components aren't programmed by MMIO. This would be the same regardless of whether you used Rust, Go, or FORTRAN77 to write your kernel.

I'm not even going to bother with your security comments, we all agree by now. There are plenty of people using C99 in embedded at least in userspace, even the Linux kernel uses some C99 extensions (eg. --std=gnu89 with gcc), and those FOSS UNIX clones have taken over the planet at this point in terms of smartphone adoption, data center servers etc. Despite the obvious flaws, this is still a better world to live in than Microsoft's proposed monoculture of the 1990's.

pjmlp
None, including C, which makes it nothing special. Any compiled language can call into Assembly.

The phones I know as having taken over the world run on C++, Objective-C, Swift, Java, with very little C still left around, and with its area being reduced with each OS release.

As for data centers, there is a certain irony that on Azure those FOSS run on top of Hyper-V, written in C++, on Google Cloud run on top of gVisor written in Go, on Amazon on top of Firecracker written in Rust, and on ChromeOS containers written in a mix of Go/Rust.

jstimpfle
> None, including C, which makes it nothing special. Any compiled language can call into Assembly.

I wish you joy and entertainment interfacing your managed data structures with assembly code.

Yesterday I was forced to look into COM for the first time. There was some kind of callback that I was interested in, and it had basically two arrays as arguments, only in a super abstract form. I'm not lying: it was 30 lines of code before the function could actually access the elements in the arrays (with special "safe" function calls to get/set data).

Of course, that stupid callback had to be wrapped as a method in a class, and had to be statically declared as a callback with a special macro that does member pointer hackery, and that has to be wrapped in some more BEGIN_MAP/END_MAP (or so) macros. Oh yeah, and don't forget to list these declarations in the right order.

Thanks, but that's not why I wanted to become a programmer.

pjmlp
I have done it multiple times in the past calling Assembly via JNI, from Oberon, from Turbo Basic, from Turbo Pascal, from Delphi, from .NET.

C is not a special snowflake.

nineteen999
> Any compiled language can call into Assembly.

So ... you're repeating what I already said.

Android -> Linux -> mostly C, and assembly

IOS -> Darwin -> XNU -> C/C++, and assembly

Hyper-V runs as a Windows Server role. Windows kernel is C, and assembly

gVisor runs on Linux -> C, assembly

Firecracker runs on KVM, which runs on Linux -> C, assembly

In every single thing you have listed, the closest thing to the "bare metal" is C, and assembly. THAT's what makes C special. Its level of adoption, ubiquity and accessibility. Not its spectacular lack of security risks.

Anyway, you have come a very long way from where the parent poster started which was:

  Most of the stuff people think of as being the "metal" in 
  C are, in many cases, virtual abstractions created by the 
  operating system.
To which I merely pointed out, on the other side of the interface layer is, most commonly C. And assembly.

Operating system design has to evolve away from this, and obviously is. But I disagree that we have reached "peak C" and that it is going to decline before it gets bigger.

Unfortunately pjmlp many of the conversations we have start this way, and devolve into this. I don't think I'm going to bother again. I think one (or both) of us will just have to agree to disagree. Have a nice day.

collyw
99% of the time you can forget about them, but when you get performance problems then you need to start digging in to what goes on behind the scenes.
whateveracct
Some engineers have terminal systems brain. That's a rude way for me to say it, but I have met engineers who feel the need to fully understand how code maps to hardware otherwise they don't feel comfortable.
jackhack
Some people stopped worrying about not understanding compilers. They're not working on drivers, realtime (esp where low lag & jitter are concerned such as motion control), or high performance software of all stripes, trying to squeeze the most out of the available hardware. It's all about choosing the right tool for the job, and there is no right tool for every job. A guy generating sales reports has very, very different needs from the lady writing a rendering engine.

Michael Abrash (graphics programmer extraordinaire) said it best, and I'll paraphrase: the best optimizing compiler is between your ears. The right algorithm beats the pants off the most optimized wrong algorithm. Or, as I like to say, "there is nothing faster than nothing": finding a way to avoid a computation is the ultimate optimization.
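The classic toy illustration of "the right algorithm beats the optimized wrong algorithm" is Gauss's closed form for summing 1..n. No amount of compiler work on the loop catches up to simply not looping:

```python
# O(n): do the work n times, however well the loop is compiled.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# O(1): Gauss's closed form removes the work entirely.
def sum_closed_form(n):
    return n * (n + 1) // 2

assert sum_loop(10_000) == sum_closed_form(10_000) == 50_005_000
```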

And managed memory is wonderful, almost all the time. That is, just until the GC decides to do a big disposal and compaction right in the middle of a time-sensitive loop causing that thing that "always works" to break, unpredictably, due to a trigger based on memory pressure. Been there, done that. If it's a business report or a ETL, big deal. If it's a motor-control loop running equipment, your data or machinery is now trash.

For most of the programming world, and I count myself in this group, the highly abstracted stuff is great. Right up until the moment where something unexpected doesn't work then it turns in to a cargo cult performance because it's nearly all black-box below. Turtles, all the way down.

There is value in understanding the whole stack, even today.

Litmus2336
While I 100% agree with you, as a Java dev who is very interested in optimization (particularly register coloring), at a certain point you have to realize that any "compiler type" optimizations you do (ooh, I'll optimize my variable declaration order to not spill to memory!) are just ignored and re-optimized by any compiler worth its salt. Therefore, it's totally counterproductive. All the time spent worrying about GC lag is, IMO, wasted compared to other more productive things. I haven't programmed anything mechanical in over 6 years. Basically, for your average developer, while I highly recommend learning the whole stack, I don't believe the notion that understanding the whole stack will actually lead to tangible improvements. They'd be better served focusing purely on theory (and by theory I mean algorithms).

As a side note: I hate hardware, but I love graph algorithms, which is why I love register coloring so much :)
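For readers unfamiliar with register coloring: variables that are live at the same time "interfere" and must land in different registers, which is exactly graph coloring on the interference graph. A toy greedy pass (a sketch; real allocators add spilling, coalescing, etc.):

```python
# Greedy coloring of an interference graph: each color is a register.
def greedy_color(interference):
    colors = {}
    for node in sorted(interference):  # deterministic visit order
        # Colors already taken by this node's neighbors.
        used = {colors[n] for n in interference[node] if n in colors}
        color = 0
        while color in used:           # pick the lowest free color
            color += 1
        colors[node] = color
    return colors

# 'a' is live alongside 'b' and 'c'; 'b' and 'c' never overlap,
# so they can share a register.
graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
print(greedy_color(graph))  # {'a': 0, 'b': 1, 'c': 1}
```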

traderjane
Yes, the pickup truck has its uses, but when we talk about high-level vs low-level programming, are we debating about the sedan or the pickup truck?
kkarakk
Well, as always when discussing functional programming, people are not discussing the merits of functional programming but rather the merits of throwing away object-oriented programming entirely in favor of functional programming. That is of course complete nonsense, but modern computer science politics, like all modern politics, only deals in absolutes.
traderjane
High-level and low-level programming are inherently at odds in terms of how much burden they ask you to take on, and somebody is arguing that the ergonomic promises of functional programming come from high-level programming.

I'm arguing that low level programming is legitimate but is also a relatively small subset of productive programming, and hence the pickup truck metaphor.

BoiledCabbage
> Some people stopped worrying about not understanding compilers.

But no, it's not some people, it's not most people, it's 99%+ of all developers that stopped worrying about compilers. There will always be a use case for it, but when we're talking about < 1% of all developers we're really spending time talking about a niche.

There will always be niches in any industry, but we shouldn't design our industry/profession around niche cases.

classified
I regularly read the assembly output of the OCaml compiler I'm using and there are very few surprises. The mapping from source to generated assembly is quite straightforward. You couldn't say the same for Haskell, though. So it depends on which FP language you're using.
chowells
I absolutely can say that about Haskell. You have to actually learn the language, but it doesn't do anything unpredictable.
johnisgood
One of the reasons Jane Street picked OCaml over Haskell was that Haskell's performance is much less predictable, i.e. Haskell does do "something" unpredictable, or at least significantly less predictable, which a quantitative trading firm did not want to put up with.

https://www.quora.com/Why-didnt-Jane-Street-use-Haskell

They also explain it in a video on YouTube.

carterschonwald
Actually I’ve found it pretty easy to track / reason about. But I guess I do have a decent mental model for how the compiler works
lonelappde
Wrong metric. It's not the mapping that matters, it's the assembly that matters.
skohan
Does OCaml give you enough tools to optimize around things like CPU-cache and memory management costs? It's one thing to know what kind of assembly is going to be produced by a block of code, but it's another thing to be able to get the machine to do exactly what you want.
classified
If “to get the machine to do exactly what you want” is an important goal, I'd recommend C or C++. Those are well known tools for those purposes. FP languages are rather about getting more ideas from math into how you structure your code.
ummonk
Rust on the other hand is zero-cost abstraction but pushes you to write functional code.
OskarS
I don’t think you need to go that low level. I find it much easier to optimize around things like memory layout, allocation patterns and cache effects even in Java and C# compared to (essentially) any functional language.

I think this is just a feature of imperative languages over functional ones. Functional languages are excellent for many things, but not for this stuff.

Bjartr
Where should I look to learn about doing those things in Java? Last time I tried looking for guidance I mostly found people saying it's not worth the effort since the JVM JIT will do better.
lonelappde
This thread is for people who are smarter than compilers. Java is not different from C in that regard.
pjmlp
You can analyze Assembly code just like in any other AOT compiled language.

Have a go at it in Godbolt, https://godbolt.org/

OCaml also integrates with perf on Linux,

https://ocaml.org/learn/tutorials/performance_and_profiling....

Some performance tips from an old partially archived page.

https://hackerfall.com/story/writing-performance-sensitive-o...

And if you are feeling fancy, doing some pointer style programming

https://ocaml.org/learn/tutorials/pointers.html

agentultra
I too have been programming professionally for nearly two decades. Much longer if you consider the time I spent making door games, MUDs, and terrible games in the 90s.

I think functional programming gives you powerful tools to reason about the construction of programs. Even down to the machine level it's amazing how amortized functional data structures change the way you think about algorithmic complexity. I think laziness was the game changer here. And if you go all in with functional programming it's surprising how much baseline performance you can get with such little effort and how easy it is to scale to multiple cores and multiple hosts.
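The canonical example of an amortized functional data structure is the two-list persistent queue (from Okasaki's work). A sketch in Python for illustration, using tuples as immutable lists; the amortized-O(1) argument assumes persistent cons cells, which Python tuples only approximate:

```python
# Persistent FIFO queue as a pair (front, back): enqueue conses onto
# back; dequeue pops from front, reversing back into front when front
# runs empty. Each element is reversed at most once, so operations are
# O(1) amortized, and every call returns a NEW queue (no mutation).
def enqueue(queue, x):
    front, back = queue
    return (front, (x,) + back)

def dequeue(queue):
    front, back = queue
    if not front:  # the amortized step: pay off the queued-up reversal
        front, back = tuple(reversed(back)), ()
    return front[0], (front[1:], back)

q = ((), ())
for x in [1, 2, 3]:
    q = enqueue(q, x)
head, q = dequeue(q)
print(head)  # 1
```

The earlier versions of the queue remain usable after every operation, which is what makes reasoning about such structures so different from their mutable counterparts.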

There are some things like vectorization that most functional languages I know of are hard pressed to take advantage of so we still reach out to C for those things.

However I think we're starting to learn enough about functional programming languages and how to make efficient compilers for them these days. Some interesting research that may be landing soon that has me excited would enable a completely pure program to do register and memory mutations under the hood, so to speak, in order to boost baseline performance. I don't think we're far off from seeing a dependently typed, pure, lazy functional language that can have bounded performance guarantees... and possibly be able to compile programs that don't even need run time support from a GC.

I grew up on an Amiga, and later IBM PCs, and that instinct to think about programs in terms of a program counter, registers, and memory is baked into me. It was hard to learn a completely different paradigm 18 or so years into my professional career. And to me, I think, that's the great accident that prevented FP from being the norm: several generations were simply not exposed to it early on, on our personal computers. We had no idea it was out there until some of us went to university or the Internet came along. And even then... to really understand the breakthroughs FP has made requires quite a bit of learning, and learning is hard. People don't like learning. I didn't. It's painful. But it's useful and worth it, and I'm convinced that FP will come to be the norm if some project can manage to overcome the network effects and incumbents.

hootbootscoot
OTOH, think of the vast hordes of new developers exposed to lots of FP and NOT having the background in Amiga and PC and bare-metal programming that you do.

FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

End of the day, the computer is an imperative device, and your training helps you understand that.

FP is a perfectly viable high-level specification or code-generation approach, as long as you are aware of the leaky abstraction/blackish box underneath and how your code runs on it.

I see FP and the "infrastructure as code" movement as part and parcel of the same cool end goal, but I feel that our current industry weaknesses are related to hiding and running away from how our code actually executes. Across the board.

pjmlp
> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

Not really.

"Confessions Of A Used Programming Language Salesman, Getting the Masses Hooked on Haskell"

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....

AlchemistCamp
> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev.

JavaScript's use of more and more functional patterns came with Underscore.js and CoffeeScript, which were both inspired by Ruby-based web dev!

I'd say the entire industry, Java included, has been moving towards more FP in a very sluggish fashion.

bernawil
Having first-class functions and closures was the can of worms. How much you can get done passing callbacks is what got me wondering how much more stuff there is to learn in FP.
eli_gottlieb
>End of the day, the computer is an imperative device, and your training helps you understand that.

Well... it's complicated. A CPU is imperative. An ALU is functional. A GPU is vectorized functional.

hootbootscoot
end of the day, some poor schmuck has to get up and DO something...lol
hootbootscoot
I suppose that since one is still only talking about the external interface to any given hw execution unit (gpu, alu, fpu) one could always present it in whatever format was useful or trendy.

But I'll contend that it's much more productive to basically wrap low-level functionality as modules that higher-level languages could compose. One could then optimize individual modules.

The mechanism of composition should lay it out as desired in memory for best efficiency, hence the probable need for a layout step, presuming precompiled modules (it could use 'ld', for example). I'm not sure how you would optimize memory layout for black boxes, but perhaps with some standard interface...

Most people here are doing this already without knowing it, if you look into the dependencies of your higher level programming tools and kit.

End of the day OOP is a code-organization technique. FP is too. They are both useful. We still have complexity. Some poster above needing actor models etc, depends upon the scale I suppose. If one is considering a distributed healthcare application, or is one trying to get audio/video not to glitch etc.

hootbootscoot
True, well that's complicated too, as that ALU likely runs microcode or has a lookup table. But presuming boolean hardware logic underlying it somewhere, THAT level is declarative. I'm not sure what functional composition is involved there, but it's declarative programming of boolean hardware where the actual imperative activity is occurring.

maybe the physics is imperative too lol

socksy
"End of the day, the computer is an imperative device, and your training helps you understand that."

I mean... it's not though, is it? Some things happen synchronously, but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days, and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?

cesarb
> but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days

The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

> and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

They do. Each "core" of the GPU executes a sequence of instructions, one by one, but each instruction manipulates several separate copies of the state in parallel; the effect is like having several identical cores which operate in lockstep.

> If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?

The cause and effect are in the opposite direction. The "imperative mindset" comes from the hardware. Even Lisp machines used imperative machine code (see https://en.wikipedia.org/wiki/Lisp_machine#Technical_overvie... for an example).

dragonwriter
> The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

That is, in the traditional model of declarative programming, the semantics given are guaranteed, but the actual order of operations is not. So, in a sense, the CPU takes what could be construed as imperative code, but treats it as declarative rather than imperative.

socksy
Exactly my point. With out of order execution, we execute as if they are in order, making sure that an item with a dependency on the outcome of another is executed in the correct order.

We end up having to rely heavily on compilers like LLVM, which work out exactly what should depend on what, and how best to lay out the instructions accordingly.

Imagine if the dominant programming style in the last few decades had been a declarative one. We wouldn't have had any of this nonsense about working out after the fact what depends on what, we could have been sending it right down to the CPU level so that it could deal with it.

smaddox
From wikipedia:

> In computer science, imperative programming is a programming paradigm that uses statements that change a program's state.

All CPUs I know of are definitely imperative. My (limited) understanding of GPU instruction sets is that they are fairly similar, except that they use SIMD instructions throughout.
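That definition is easiest to see in a minimal side-by-side; a sketch in Python, used here purely for illustration:

```python
from functools import reduce

def sum_imperative(xs):
    # Imperative: a statement mutates program state on each step.
    total = 0
    for x in xs:
        total += x
    return total

def sum_functional(xs):
    # Functional: no mutation; the result is a fold over the input.
    return reduce(lambda acc, x: acc + x, xs, 0)
```

Both compute the same value; the paradigm difference is whether intermediate state is overwritten or threaded through as arguments.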

hootbootscoot
gpu just means lots of cores. the cores are composed of execution units too.

even the most exotic architecture you can think of is imperative (systolic arrays, transport-triggered architectures, or... whatever)

there are instructions and they are imperative.

I can vaguely remember some recent iterative AI of some kind that had to produce a functioning circuit to do XYZ, and the final netlist it produced for the FPGA was so full of latches, taking advantage of weird timing skew in the FPGA fabric and such, that no engineer could make sense of the netlist, but the circuit worked... I suppose when there's that level of non-imperative design, you can truly call it both declarative and magic.

hootbootscoot
Nope. End of the day, there is a linear sequence of instructions being executed by any given part of the hardware.

OO and FP are just higher-level ways of organizing source code that gets reduced to a linear sequence of instructions for any given hardware execution unit.

hootbootscoot
Hardware is imperative at its lowest level. Sure, you can even say that the instructions are declarative if you are speaking from the perspective of the ALU with regard to stuff you send to an FPU, for example...
agentultra
Individual cores execute instructions speculatively these days!

Predicting how the program will be executed, even in a language such as C99 or C11, requires several layers of abstraction.

What most programmers using these languages are concerned about is memory layout as that is the primary bottleneck these days. The same is true for developers of FP languages. Most of these languages I've seen have facilities for unboxing types and working with arrays as you do. It's a bit harder to squeeze the Haskell RTS onto a constrained platform which is where I'd either simply write in C... or better, compile a subset of Haskell without the RTS to a C program.

What I find neat though is that persistent structures, memoization, laziness, and referential transparency gave us a lot of expressive power while giving us a lot of performance out of the gate. In an analogous way to how modern CPU cores execute instructions speculatively while maintaining the promise of sequential access from the outside; these structures combined with pure, lazy run time allow us to speculatively memoize and persist computations for more efficient computations. This lets me write algorithms that can search infinite spaces using immutable structures and get the optimal algorithm for the average case since the data structures and lazy evaluation amortize the cost for me.

There's a good power-to-weight ratio there that, to me, we're only beginning to scratch the surface of.
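One way to see the "search infinite spaces" point is with generators, Python's opt-in form of laziness. This is a conceptual sketch, not the commenter's code:

```python
def fibs():
    # An infinite, lazily produced sequence: only the prefix a
    # consumer actually demands is ever computed.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def find(pred, seq):
    # Search a (possibly infinite) sequence for the first match.
    for x in seq:
        if pred(x):
            return x

first_big = find(lambda n: n > 100, fibs())  # first Fibonacci number over 100
```

The sequence is defined as if it were the whole infinite structure, and laziness ensures the search terminates as soon as a match is found; in Haskell this falls out of the default evaluation order rather than an explicit generator.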

pjmlp
I am fully on board with you.

Learned to code in the mid-80's, Basic and Z80 FTW.

Followed up by plenty of Assembly (Amiga/PC), and systems level stuff using Turbo Basic, Turbo Pascal, C++ (MS-DOS), TP and C++ (Windows), C++ (UNIX), and many other stuff.

I was lucky enough that my early-90s university exposed us to Prolog, Lisp, Oberon (and its descendants), Caml Light, Standard ML, and Miranda.

Additionally, the university library allowed me to dive into a parallel universe of programming ideas that seldom reach the mainstream.

Which was great, not only did I learn that it was possible to marry systems programming with GC enabled languages, it was also possible to be quite productive with FP languages.

Unfortunately this seems to be yet another area where others only believe in its possibilities after discovering it for themselves.

tomp
> Some interesting research that may be landing soon that has me excited would enable a completely pure program to do register and memory mutations under the hood, so to speak, in order to boost baseline performance. I don't think we're far off from seeing a dependently typed, pure, lazy functional language that can have bounded performance guarantees... and possibly be able to compile programs that don't even need run time support from a GC.

Is there any more info/links available about this?

agentultra
I don't think they've finished writing the paper yet but I'll post it out there when it gets published.
kls
I would agree with this. I came up in the same time period, and we just programmed closer to the metal then; we did not have the layers, and it was normal to think in terms of the machine's hardware (memory addresses, registers, interrupts, clock, etc.). This naturally leads to a procedural way of thinking: variables were a thin veil over the actual memory they addressed.

It actually takes a lot of unlearning to let go of control of the machine and let it solve the problem, when you are used to telling it how to solve the problem. I came to that conclusion when I dabbled in Prolog just to learn something different, and I had a really hard time getting my head around CL when I first got into it, due to wanting to tell the machine exactly how to solve the problem. I think it was just ingrained in those of us that grew up closer to the metal, and I think the Byte magazine reference in the talk has a lot to do with it; we just did not have that much exposure to other ideas, given that mags and Barnes & Noble were our only sources of new ideas. That, and most of us were kids just hacking on these things alone in our bedrooms with no connectivity to anyone else.

I remember, before the web, getting on newsgroups and WAIS and thinking how much more info was available than on the siloed BBSes we used to dial into. Then the web hit, and suddenly all of these other ideas gained a broader audience.

6gvONxR4sf7o
Just wait until this guy has to use something like SQL or Spark. They will always occupy a niche for exactly these reasons. Turns out it's a pretty big niche though. So big in fact, that maybe we shouldn't call it a niche.

Python's ecosystem is built on this premise. Let some other language (C) do the fast stuff and leverage that for your applications. It's not a niche language, even though you don't have direct control over things like memory management and GC.

Perhaps the commenter's role of real time graphics programming is actually the niche.

dsego
Re Python and scripting: AFAIK they used to be known as "glue languages". Not sure how true that is anymore.
skohan
> This is true for any company at scale. FP can be used at the fringe or the edge, but the core part demands efficiency.

I think things like real-time graphics are the exception not the rule. Most of the software run by users these days is in the context of a browser, which is implemented many layers of abstraction away from the machine. Much of the code running servers is also still interpreted scripting languages.

Don't get me wrong, I wish a lot more software was implemented with performance in mind, because the average-case user experience for software could be so much better, but a ton of the software we use today could be replaced by FP and perform just as well or better.

Ragib_Zaman
Perhaps not a satisfactory response but when I start drifting towards thinking FP is fundamentally not as performant as _whatever_else_, I remember that Jane Street uses OCaml basically from top to bottom, and they certainly can't be too slow... Some black magic going on there.
typon
Jane Street wrote a compiler that converts Ocaml to Verilog which they run on FPGAs. The OCaml you write in that case is pretty different than what you write for CPUs.
gpderetta
"A FORTRAN programmer can write FORTRAN in any language"
classified
And structured programming (including, of course, FP) is for quiche eaters!
classified
For those who don't get the allusion:

https://web.mit.edu/humor/Computers/real.programmers

Ragib_Zaman
That certainly explains a whole lot. Thanks!
throwaway87537
Streeter here.

Although I don’t work directly with the FPGA stuff, it’s still a very, very small piece of the overall pie (and new).

The motivation behind using OCaml is mainly its correctness(!), not its speed (it isn't fast). See Knight Capital for a good example as to why. There are great videos on YT by Yaron Minsky that explain this better than I can.

dooglius
> See Knight Capital for a good example as to why

Not really, that was about incompatible versions of software talking to each other, which would not really fall under what is meant by "correctness".

fennecfoxen
While Knight Capital's problem was caused by incompatible versions of the software talking to each other, one of the reasons that happened was the deployment of the code was not correct. The SEC notes:

> “During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.

Rumor on the outside suggests that Jane St uses OCaml for things like deploying software.

gowld
How does OCaml prevent you from deploying working code that you didn't want to run?
johnisgood
Speaking of correctness and performance, was Ada/SPARK ever considered before you guys picked OCaml? Hypothetically, would you consider Ada/SPARK now, especially that SPARK has safe pointers and whatnot?
Jtsummers
SPARK was barely making waves when they selected OCaml. OCaml was much more established by that time.
johnisgood
Fair enough. Then they should probably just focus on the second question. :)
samvher
There are really many places in code where you really don't care about performance that much. And in that case conciseness/expressiveness/correctness can be extremely valuable.

Also there are different aspects to performance, and when (for example) it comes to latency, a platform like Erlang/BEAM makes it particularly easy to get low latency in certain contexts without thinking about implementation much. In Haskell you can accomplish similar things with green threads. It will probably need more clock cycles for a given action than a tuned C implementation but that's not always what matters, and the code will probably be cleaner.

ummonk
Is the kind of HFT that Jane Street does that reliant on extremely low latency? A lot of HFT firms operate on the timescale of seconds, minutes, or even hours, not milliseconds.
thedufer
I feel like there must be terminology confusion here - the HF in HFT stands for high frequency, which effectively means low latency. There may be HFT firms that additionally do slower stuff, but no one would call a trade on the timescale of hours HFT - it's a fuzzy line, but certainly nothing measured in units larger than microseconds would qualify to someone in the industry, and the line is likely lower than that.
tom_mellior
The "oh no it's slow, and you can't reason about performance" FUD is mostly directed at Haskell's lazy evaluation, but people like to throw it vaguely in the direction of "FP" in general. Most of the performance problems you have in Haskell (as in this recent submission: https://news.ycombinator.com/item?id=21266201) are not problems you will have in OCaml.

Yes, OCaml has garbage collection. It's a very efficient GC, and it is only ever called when you try to allocate something and the system determines that it's time for cleanup (https://ocaml.org/learn/tutorials/garbage_collection.html, though this might change if/when Multicore OCaml ever happens?). So if you write an innermost function that does arithmetic on stuff but never allocates data structures, you will not have GC problems because you will not have GC during that time, period.

Also, there are cases where destructive mutation of things is more efficient than making pure copies. OCaml allows you to do that, you don't need to fiddle with monads to simulate state.

There really isn't that much black magic there. Just don't believe everything that is said about "FP".
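The mutation-versus-copying trade-off the parent describes can be sketched in a few lines; Python is used here only to show the contrast (in OCaml, `a.(i) <- x` on an array plays the role of the in-place assignment, with no monads involved):

```python
def normalize_in_place(values):
    # Destructive update: each cell is overwritten; no copy is made.
    total = sum(values)
    for i in range(len(values)):
        values[i] /= total

def normalize_pure(values):
    # Pure version: the input is left untouched, at the cost of
    # allocating a fresh list.
    total = sum(values)
    return [v / total for v in values]
```

The pure version keeps every old value reachable (handy for reasoning and undo); the destructive one avoids the allocation, which is exactly the escape hatch OCaml offers when profiling says it matters.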

Symmetry
Something I've always wondered about Haskell. Given referential transparency, purity, etc shouldn't it be possible for the Haskell compiler to choose whether to evaluate something in an eager or lazy fashion depending on performance heuristics under the covers? You have to make sure you don't ever accidentally do that with an infinite list but it seems that there ought to be lots of scope for optimization and speeding up code written in a straightforward fashion. Possibly also turning lists into arrays secretly too if it can be proven to produce the same result.
verttii
The point of lazy evaluation is to evaluate something only when you really need it. Not to auto-adjust the system performance wise. You can force eager evaluation in places where you want it evaluated sooner.

Personally, I think consistency is more important here because it leads to better predictability. If you don't know whether the compiler assigns something to be evaluated lazily or eagerly that could lead to a lot of nasty debugging issues.
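For readers without a Haskell background, the "evaluate only when you really need it" semantics, plus the memoization Haskell's thunks perform, can be sketched with a small hypothetical helper class:

```python
class Lazy:
    """A suspended computation: evaluated only when forced, and at
    most once (later forces return the cached result)."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._thunk()  # evaluate on first demand
            self._done = True
            self._thunk = None           # drop the closure afterwards
        return self._value

calls = []
expensive = Lazy(lambda: calls.append("ran") or 42)

assert expensive.force() == 42   # evaluated here
assert expensive.force() == 42   # cached; the thunk ran only once
assert calls == ["ran"]
```

In Haskell every unevaluated expression is implicitly such a thunk; strictness annotations (or compiler analysis) decide where to force early.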

verttii
Got it thanks, good info for me!
tom_mellior
> If you don't know whether the compiler assigns something to be evaluated lazily or eagerly that could lead to a lot of nasty debugging issues.

If the compiler only forces values that would be forced anyway, there shouldn't be a problem. Which is why GHC actually does it: https://wiki.haskell.org/Performance/Strictness

Strictness analysis is good and useful... and difficult and not magic.

tome
Yes, it's called strictness analysis and is an important part of the GHC pipeline.
gpderetta
My understanding is that OCaml is a decent imperative language when required and that's one reason why it can perform well.

My usual issue with the 'you can avoid the GC by not allocating' claims in any language, is how much of the language is still usable? Which features of the language allocate under the hood? Can I use lambdas? pattern matching? list compression or whatever nice collection is available in the language?

Note that I do agree that even in very high performance/low latency applications, there will be components (even a majority of them) that will be able to afford GC without issues; but then it is important to be able to isolate the critical pieces from GC pauses (for example, can I dedicate one core to one thread and guarantee that GC will never touch that thread?)

tom_mellior
> Can I use lambdas?

Not ones that capture anything from the environment. I'm not sure about ones that don't, but I imagine you can use them.

> pattern matching?

Yes.

> list compression

List comprehensions, you mean? They don't exist in OCaml, but if they did, they would have to allocate a result list. So no.

> for example, can I dedicate one core to one thread and guarantee that GC will never touch that thread?

I don't think so, maybe someone else can chime in. But more importantly, this is a one-in-a-million use case that is a far cry from "functional programming is always bloated and slow and unpredictable". GC is well-understood, advanced, and comfortably fast enough for most applications. And for other cases, if you really need that hot innermost part, write it in C and call that from OCaml if you're paranoid about it.

weberc2
I’m all for mechanical sympathy, and I’m sure that some programmers legitimately need their language to map neatly to assembly, but that isn’t the norm as evidenced by the overwhelming popularity of interpreted and VM languages. Lots of companies are making money hand over fist with Python, which is probably 3 orders of magnitude (or more) slower than the optimized C or C++ that this real-time graphics engineer is writing.

EDIT: Is this controversial? What are downvoters taking issue with? That Python is a very popular language? That it is much slower than C/C++?

gameswithgo
we users pay a price for this trend though :( Those python users could switch to F#/OCaml/Clojure and get a big speed boost too!
weberc2
You missed my point: the original comment argued that the software industry is in such great need of performance that functional programming is unacceptable (apart from niche use cases). If that were true, then Python and other slow languages wouldn't be so widely used. I fully agree that we're leaving performance on the table by not using functional languages or virtually any other tier of languages, including my personal favorite: Go.
StreamBright
Python is perfectly fine for gluing together functionality that deals with high-latency systems, like a data pipeline that executes queries running for minutes. From the performance point of view it does not matter if this code is in Forth or Python, but it matters how long it takes to implement and how many engineering hours need to go into it. This is why Python is a good option and F#/OCaml/Clojure are not going to bring much to the table in such cases. Even if I want to implement the ETL pipeline in Clojure or OCaml, the rest of the engineers are not OK with these languages, the build systems involved, the package management, or the IDE options and integrations. These are also factors when you select a language. Clojure is my absolute favourite language, but in most of the companies where I work there is no one who could buy into it. Management sees it as a risk, not being able to hire engineers for it. There are more dimensions to this problem than appear at first.
weberc2
I would not advertise Python as a language with good IDE support nor package management. I fight with both of these regularly. Also, Python is reasonably suited for gluing together other high performance systems, but not everything in the world is glue code, and as soon as you need to do something O(n) on your dataset, you’re either paying an enormous performance penalty or you’re not writing that bit in Python. People kid themselves into thinking that Python’s C interop will make things fast, and sometimes it does, but it often makes it even slower if the system needs to cross the language boundary O(n) or worse.
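The boundary-crossing point can be made concrete using only the standard library; the structural difference below is the sketch (real comparisons would need timing, omitted here):

```python
from functools import reduce

xs = list(range(1000))

# One call into C-implemented code: the interpreter boundary is
# crossed once for the whole O(n) pass.
fast = sum(xs)

# The same O(n) pass, but the Python-level lambda is re-entered for
# every element: O(n) boundary crossings.
slow = reduce(lambda acc, x: acc + x, xs, 0)

assert fast == slow == 499500
```

This is the shape of the trap: pushing work into C helps only when the whole loop stays on the C side, not when each iteration bounces back into Python.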
StreamBright
>> I would not advertise Python as a language with good IDE support nor package management.

VS Code, VIM works for me. Conda or PIP also. Not sure what is missing for you.

>> but not everything in the world is glue code

I never claimed that.

>> and as soon as you need to do something O(n) on your dataset, you’re either paying an enormous performance penalty or you’re not writing that bit in Python

Depends what you need to do.

My entire comment was about the fact that details matter and that you can't just blindly pick a language because of its out-of-the-box performance.

weberc2
I use VS Code too, but dynamic typing means I have to deal with this sort of thing every day: https://mobile.twitter.com/weberc2/status/118275131245637632...

Compared with, say, Go where I just hover the cursor.

As for pip, you also need virtual environments to protect you from side effects, and even then, if you’re doing C interop you probably still have dynamic links to so files outside of your virtualenv. Our team spends so much time dealing with environment issues that we’re exploring Docker solutions. And then packaging and distribution of Python artifacts is pretty awful. We’re using pantsbuild.org to build PEX files which works pretty well when it works, but pants itself has been buggy and not well documented.

> I never claimed that

I couldn’t tell since the context of the thread made it sound like you were either implying that Python is suitably performant because the majority of programming is glue code or you were going somewhat off topic to talk about glue code. I now understand it was the latter.

> Depends what you need to do. My entire comment was about that details matter and you can't just blindly pick a language because of out of the box performance.

I agree, but in practice you rarely know the full extent of what you will need, so you should avoid painting yourself into a corner. It really doesn’t make sense to choose Python any more if you are anything less than certain about the performance requirements for your project for all time—we now have languages that are as easy to use as Python (I would argue even easier, despite my deep familiarity with Python) and which don’t paint you into performance corners. Go is probably the best in class here, but there are probably others too.

StreamBright
>> Go is probably the best in class here

Sorry no offence but I do not want to write Go at all. If I want to use such a language I will use Rust with nicer features and better out of the box performance (see TechEmpower results), no GC and more safety (no memory corruption bugs or data races).

I am not sure if I am the one who paints himself into a corner.

weberc2
Go for it (why would I take offense?). Rust is a fine choice. I'd pick Rust if I ever really needed to eke out every bit of performance from my system and/or needed safety and was willing to trade quite a lot of developer productivity to get there. Mission-critical real-time systems, high-end games, resource-constrained devices, etc. are good applications for Rust (and of course there are many others). Rust is a great language and I'd like to use it more often; I just can't justify the productivity hit.

If Rust ever approaches Go's ease of use/learning curve/etc without losing its performance or incurring other costs, I'll happily make the switch for my productivity-sensitive applications as well.

goatlover
> but not everything in the world is glue code, and as soon as you need to do something O(n) on your dataset, you’re either paying an enormous performance penalty or you’re not writing that bit in Python.

And yet Python has one of the richest and most widely used scientific computing stacks. If writing performant code in a friendly language is all that important, then Julia stands as a more reasonable alternative than does some functional language.

hechang1997
That's exactly why people vectorize their code: to avoid slow loops in Python and move them into the underlying C/C++ code.
DubiousPusher
Classic games programmer logic. I've heard it a billion times. "X feature abstracts the hardware and therefore I could never use it for programming in my tiny niche and therefore it could never become popular."

And how many of the top 10 languages are running in a virtual machine? Which could be literally doing anything under the hood with your allocations, caching, etc?!

There is nothing wrong with saying, I don't see this working out in my domain due to these concerns it's just silly to say, I never see it taking off because it can't work in my domain.

I think this video nails it pretty dead on. My team works almost exclusively in C# these days for reasons mostly beyond our control. The team generally likes the language quite a bit (it's one of my personal favorites). But when I find myself asking for new features, they come in two buckets: I'd like features that help certain members of my team write less side-effect-heavy code, and I'd like immutability by default with opt-in mutability. Basically I'd like more functional-like features. But hey, that's what I see from my niche.

com2kid
> And how many of the top 10 languages are running in a virtual machine? Which could be literally doing anything under the hood with your allocations, caching, etc?!

VMs provide an environment, just like any other. Javascript articles are chock full of information on how to not abuse the GC by using closures in the wrong place. C#'s memory allocation is very well defined, Java has a million tuning parameters for their GC, Go is famous for providing goroutines with very well defined characteristics.

Heck, people who know C# can look at C# code and tell you almost exactly what the VM is going to do with it. And nowadays C# allows direct control over memory.

People writing high performance code on Node know how the runtime behaves, they know what types of loads it is best for, none of that is a mystery.

Sure, some details like "when does this code get JITed vs interpreted" are left up to the implementation, but it isn't like these things are secret. I think every major VM out there nowadays is open source, and changes to caching behavior are blogged about with the performance implications described in detail.

The fact is, all programming paradigms are merely ways to limit our code to a subset of what the machine can do, thereby making reasoning about the code easier.

They are purely mental tools, but they almost all have a performance cost for using them. They are turing complete tools of course, any solution is theoretically solvable with any of the major paradigms, but not every paradigm is appropriate for every problem.

So, you know, pick the paradigm that makes it easiest to reason about the problem space, given acceptable performance trade offs.

DubiousPusher
Yeah, this is literally my point.

I was quibbling with the point that because FP languages often don't give low level control they can't become successful even though nearly every language on that top ten list suffers from the same perf oriented deficiency.

To write performant code in any of those top ten languages you have to understand the characteristics and nuances of the underlying tech stack.

And honestly people who don't write performant Java because they didn't bother to learn about the GC wouldn't have magically done otherwise writing C++. Trust me, that language does not intrinsically cause you to write performant code. It does intrinsically cause you to leak memory though.

But the bigger point is that in many domains performance is second to many other concerns. Like you said pick the languages that matches your needs.

So I think we pretty much agree.

StreamBright
>> This is true for any company at scale

I do not think this is true outside your domain. Amazon uses Java, C++ and Perl. At the time I was there, the majority of the website code was in Perl. Amazon is one of the biggest companies on the planet.

hyperpallium
Amazon needed to create AWS to be able to run all that perl performantly (joke).

Actually, a lot of programming language improvements have come from trying to make lisp performant.

verttii
Many functional languages acknowledge this and are not pretending to have low-level language facilities. Instead, they have built-in mechanisms (FFI) to interface with those low level languages whenever needed.
blain_the_train
Are you suggesting that oop allows programmers to understand the assembly output?

Did you watch the video? The most popular language is JavaScript, which is not functional only because of a quirk of history.

The video makes an argument for marketing being the reason.

cr0sh
The fun, and funny, and maybe (probably) even terrible, thing about javascript is that while it -is- functional, and (IIRC) always has been, historically it wasn't originally used that way!

Only relatively recently have programmers embraced its functional aspects; prior to that it was mostly used as a procedural language.

Then people started to use functional aspects of it to "shoehorn" it into allowing a quasi-OOP programming style, and this form has been baked (in no small part) into the latest version of ECMAScript.

But people following this path, coupled with (I believe) using JQuery, NodeJS, and other tools (and now React) have led most of them (raising hand here) to more fully embrace it as a functional language.

But here's the thing:

You can still use it as a procedural language - and an OOP language - and a functional language! All at the same time if you want - it doesn't care (much)! It's like this weird mishmash of a language, a Frankenstein's Monster coupled to Hardware's killer robot.

Yes - with today's Javascript you can still write a unicorn farting stars that follows your mouse on a webpage while playing a MIDI file - and do it all procedurally. In fact, there's tons of code examples out there still showing this method.

You can mix in simple class-like constructs using anonymous functions and other weirdness - or just use the latest supported ECMAScript OOP keywords and such - go for it!

Want to mix it up? Combine them both together - it don't care!

Oh - and why not pass a function in and return one back - or an entire class for that matter! It's crazy, it's weird, it's fun!

It's maddening!

And yes - it's a crazy quirk of history - a language that was created by a single programmer over the course of a weekend (or so legend goes) at Netscape has seemingly taken over the world of software development.

Not to mention Web Assembly and all that weirdness.

I need to find an interview with that developer; I wonder what he thinks about his creation (which is greatly expanded over what he started with, granted) and its role in software today - for good or ill...

nybble41
> Oh - and why not pass a function in and return one back

The "function" in "functional programming" is a reference to mathematical functions. Mathematical functions do not have side effects, and consequently are referentially transparent (the result doesn't depend on the evaluation order or on how many times the function is evaluated). Code with side effects is not a function in the mathematical sense, it's a procedure. The defining characteristic of functional programming is the absence of side effects. That isn't something you can just tack on to an imperative (or "multi-paradigm") language. No matter how many cosmetic features you borrow from functional languages, like closures and pattern-matching and list comprehensions, you still have the side-effects inherent in the support for imperative code, which means your program is not referentially transparent.
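A minimal illustration of that distinction (in Python rather than a functional language, since the thread is about multi-paradigm languages; the names here are invented for the example):

```python
# A procedure: the result depends on hidden mutable state, so two
# calls with the same argument can return different values.
counter = 0

def next_label(prefix):
    global counter
    counter += 1
    return f"{prefix}-{counter}"

# A function in the mathematical sense: the result depends only on
# the arguments, so any call can be replaced by its value.
def label(prefix, n):
    return f"{prefix}-{n}"

assert next_label("job") != next_label("job")  # not referentially transparent
assert label("job", 1) == label("job", 1)      # referentially transparent
```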

Haskell manages to apply the functional paradigm to real-world programs by essentially dividing itself into two languages. One has no formal syntax and is defined entirely by data structures (IO actions). This IO language is an imperative language with mutable variables (IORefs) and various other side-effects. The formal Haskell syntax concerns itself only with pure functional code and has no side effects. IO actions are composed by applying a pure function to the result of one IO action to compute the next action. Consequently, most of a Haskell program consists of pure code, and side-effects are clearly delineated and encapsulated inside IO data types at the interface between the Haskell code and the outside world.

ummonk
You could write pure code in JS. You'd just need a linter to enforce it.
nybble41
You can write pure code in almost any language, though some make it easier than others with features like lexical closures and algebraic data types. But is that the idiomatic form? Is it evaluated efficiently? And can you count on libraries to be written the same way?

Javascript without any support for mutation or other side effects wouldn't really be recognizable as Javascript any more.

ummonk
It's not required to write idiomatic JS, but pure functional code is very much idiomatic. (I.e. pure functional code isn't considered unidiomatic in JS as it might be in some other languages). Many people write the meat of their code in pure functions, and this paradigm is encouraged by React (especially with the invention of hooks).

As for libraries, you can just treat them as stateful external things you have to interact with the same way IO / network calls are stateful.

goatlover
JS isn't really a functional language, it's an imperative, prototype-based language with partial functional support that allows multiple coding styles, like most of the popular scripting languages.
blain_the_train
Yes. Which is why I said it's not functional. Because of a marketing choice.
mannykannot
OK, but dismissing most of the things that people use computers for as "the fringe" is rather parochial.
gameswithgo
I think he means more "the outer layers". For instance, AlphaGo was written in Python at the outer layers. The core pieces doing the actual neural net work are low level libraries.

Similarly, Google search, at the outermost layers, is JavaScript, then probably some layer of Go or similar, but the core is highly tuned C++ and assembler.

bpyne
I had a programming languages survey class in college. The thrust of the course was different languages for different applications. It's not startling that the person whose comment you quoted doesn't find the functional paradigm helpful in graphics programming. Functional programming helps with expressiveness and reasoning, i.e. variables don't suddenly change on you when you're not expecting it.

A video game programmer would probably not be helped because a big part of their coding, as I understand it, is wringing out every clock cycle and byte of memory possible. However, the programmer writing the AR/AP system that allows tracking for in-game purchases would find OCaml, for instance, very beneficial.

journalctl
Unless you’ve written a modern, optimizing C/C++ compiler, you have absolutely no idea what kind of machine code a complex program is going to spit out. It’s not 1972 anymore, and C code is no longer particularly close to the metal. It hasn’t been for some time.
gpderetta
I've never written even a simple C compiler, but I, and I think most C++ programmers who care about performance, have a decent idea of what code g++ is going to generate.
asjw
But g++ is probably better than you at producing fast code for whatever architecture you run it on.
gpderetta
Of course it is, that's why I let it generate it.
Crinus
This is wrong, you absolutely can have an idea of what a C (and most of the time, C++) compiler will generate. You may not know the exact instructions, but if you are familiar with the target CPU you can have a general idea what sort of instructions will be generated. And the more you check the assembly that a compiler generates for pieces of code, the better your idea will be.

Note that you almost never need to care about what the entirety of a "complex program" will generate - but often you need to care about what specific pieces you are working on will generate.

The C language itself might be defined in terms of an abstract machine, but it is still implemented by real compilers - compilers that, btw, you also have control over and often provide a lot of options on how they will generate code.

And honestly, if you have "absolutely no idea what kind of machine code" your C compiler will generate, then perhaps it would be a good idea to get some understanding.

(though I'd agree that it isn't easy, since a lot of people treat compiler options as wishing wells where they put "-O90001", and compiler developers are perfectly fine with that - there is even a literal "-Ofast" nowadays - instead of documenting what exactly they do)

gpderetta
To be fair, most of the optimization parameters enabled by the various -Ox levels, including x=fast, are usually documented.

At least in GCC though, there are a few optimizations included in the various -O flags that have no corresponding fine grained flag (usually because they affect optimization pass ordering or tuning parameters).

Crinus
Yes they are documented, though the documentation is really something like "-fawesome-optimization, enabled by default on -O3" and the "-fawesome-optimization" has documentation like "enables awesome optimization" without explaining much more than that.

And even then pretty much every project out there uses "-Ofast" instead of whatever "-Ofast" enables without caring about what it does or how its behavior will change across compilers.

gpderetta
-Ofast enables fast-math optimizations and generally is not standards compliant. I hope projects do not deliberately enable it without thinking (as they say, it is hard to make stuff foolproof because fools are so resourceful).
Crinus
My point was that options like -O<number> and -Ofast aren't the actual optimization switches, they turn on other switches and you do not know what you'll get - essentially wishing for fast code and hoping you'll get some (i mentioned -Ofast explicitly because of its name).

For example according to the documentation in GCC 7.4 -O3 turns on:

    -fgcse-after-reload
    -finline-functions
    -fipa-cp-clone
    -fpeel-loops
    -fpredictive-commoning
    -fsplit-paths
    -ftree-loop-distribute-patterns
    -ftree-loop-vectorize
    -ftree-partial-pre
    -ftree-slp-vectorize
    -funswitch-loops
    -fvect-cost-model
whereas in GCC 9.2 -O3 turns on the above, plus:

    -floop-interchange 
    -floop-unroll-and-jam 
    -ftree-loop-distribution 
    -fversion-loops-for-strides
So unless you control the exact version of the compiler that will generate the binaries you will give out, you do not exactly know what specifying "-O3" will do.

Moreover, even though you do know the switches, their documentation is basically nothing. For a random example, what does "-floop-unroll-and-jam" do? The GCC 9.2 documentation combines it with "-ftree-loop-linear", "-floop-interchange", "-floop-strip-mine" and "-floop-block", and all it says is:

> Perform loop nest optimizations. Same as -floop-nest-optimize. To use this code transformation, GCC has to be configured with --with-isl to enable the Graphite loop transformation infrastructure.

...what does that even mean? What sort of effect will those transformations have on the code? Why are they all jumbled in one explanation? Are they exactly the same? Why does it say that they are the same as "-floop-nest-optimize"? Which option is the same? All of them? The "-floop-nest-optimize" documentation says:

> Enable the isl based loop nest optimizer. This is a generic loop nest optimizer based on the Pluto optimization algorithms. It calculates a loop structure optimized for data-locality and parallelism. This option is experimental.

Based on the Pluto optimization algorithms? Even assuming that this refers to "PLUTO - An automatic parallelizer and locality optimizer for affine loop nests" (this is a guess; there are no other references in the GCC documentation as far as I can tell), does it mean they are the same as the code in Pluto, that they are based on the code and modified, or that they are based on the general ideas/concepts/algorithms?

--

So it isn't really a surprise that most people simply throw out "-Ofast" (or -O3 or -O2 or whatever) and hope for the best. They do not know better, and they cannot know better, since their compiler doesn't provide them any further information. And this is where all the FUD and fear about C's undefined behavior comes from - people not knowing what exactly happens because they are not even told.

undershirt
From: https://queue.acm.org/detail.cfm?id=3212479

> Compiler writers let C programmers pretend that they are writing code that is “close to the metal” but must then generate machine code that has very different behavior if they want C programmers to keep believing that they are using a fast language

Crinus
That article relies on the flawed premise that because modern CPUs do not expose their real inner workings, C is not a low level language. However, this is irrelevant, because as a programmer you do not have any access below what the CPU itself exposes - if the CPU exposes an API (its instruction set) that pretends to be serial, then it doesn't matter if underneath the seams things happen in parallel, since you simply are not given any control over that. From the perspective of someone who is working against such an instruction set, C is a low level language, since there is little lower level between it and what is exposed to the programmers by the CPU.

Beyond that, it doesn't really invalidate anything I wrote and is only tangentially relevant to my comment (where I didn't even mention C as a low level language; I only said that you can have an idea of what sort of instructions a C compiler will generate for a piece of code if you study its output for a while), so why did you post it without any comment of your own?

louthy
I've been programming for 34 years - 25 of those professionally. In the early days of my programming life it was super important to know how the code would run on the metal (and much of my time was spent writing assembler for that reason). Processors were slow, memory access was slow, and memory quantity was small (32k on my first computer). I spent a decade in the games industry building graphics engines and hand-interleaving assembler to get the maximum amount of juice out of whatever console I was writing for (I beat what Sony said their PlayStation 1 could actually do).

Then OOP happened and many of the early guarantees about how something ran went away; abstracting everything meant we couldn't reasonably know what was happening behind the scenes. However, that wasn't the biggest issue. Performance of CPUs and memory had improved significantly, to the point where virtual method calls weren't such a big deal. What was becoming important was the ability to manage the complexity of larger projects.

The big deal has come recently with the need to write super-large, stable applications. Sure, if you're writing relatively small applications like games or apps with limited functionality scope, then OOP still works (although it still has some problems). But, when applications get large the problems of OOP far outstrip the performance concerns. Namely: complexity and the programmer's inability to cognitively deal with it.

I started a healthcare software company in 2005 - we have a web-application that is now in the order of 15 million lines of code. It started off in the OOP paradigm with C#. Around 2012 we kept seeing the same bugs over and over again, and it was becoming difficult to manage. I realised there was a problem. I (as the CTO) started looking into coping strategies for managing large systems, the crux of it was to:

* Use actor model based services - this helped significantly with cognition. A single thread, mutating a single internal state object, nice. Everyone can understand that.

* Use pure functional programming and immutable types
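The actor bullet above can be sketched in a few lines (in Python rather than C#, with hypothetical names): one thread owns the state, and the only way to touch that state is to post a message to the actor's mailbox.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: one thread, one mailbox, one private state object."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._state = 0  # only the actor thread ever touches this
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg[0] == "stop":
                break
            elif msg[0] == "add":
                self._state += msg[1]    # single-threaded mutation
            elif msg[0] == "get":
                msg[1].put(self._state)  # reply on a private queue

    def add(self, n):
        self._mailbox.put(("add", n))

    def get(self):
        reply = queue.Queue()
        self._mailbox.put(("get", reply))
        return reply.get()

    def stop(self):
        self._mailbox.put(("stop",))

actor = CounterActor()
actor.add(2)
actor.add(3)
assert actor.get() == 5  # mailbox is FIFO, so both adds land first
actor.stop()
```

Because every mutation happens on one thread in mailbox order, there is nothing to reason about beyond "messages arrive one at a time", which is the cognitive win described above.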

The reason pure functional programming is better (IMHO) is that it allows for proper composition. The reason OOP is worse (IMHO) is because it doesn't. I can't reasonably get two interfaces and compose them in a class and expect that class to have any guarantees for the consumer. An interface might be backed by something that has mutable state and it may access IO in an unexpected way. There are no guarantees that the two interfaces will play nicely with each other, or that some other implementation in the future will too.

So, the reality of the packaging of state and behaviour is that there's no reliable composition. So what happens is, as a programmer, I'd have to go and look at the implementations to see whether the backing types will compose. Even if they will, it's still brittle and potentially problematic in the future. This lack of any kind of guarantee and the ongoing potential brittleness is where the cognitive load comes from.

If I have two pure functions and compose them into a new function, then the result is pure. This is ultimately (for me) the big deal with functional programming. It allows me to not be concerned about the details within and allows stable and reliable building blocks which can be composed into large stable and reliable building blocks. Turtles all the way down, pure all the way down.
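That composition guarantee is almost embarrassingly simple to state in code (a generic sketch, not taken from the linked library):

```python
def compose(f, g):
    """If f and g are pure, compose(f, g) is pure too: no hidden
    state or IO can appear in the result that wasn't in the parts."""
    return lambda x: f(g(x))

# Two pure building blocks...
normalize = str.strip
shout = str.upper

# ...compose into another pure, reliable building block.
clean = compose(shout, normalize)
assert clean("  hello  ") == "HELLO"
```

No such guarantee exists for composing two interfaces backed by arbitrary mutable objects, which is the asymmetry the comment describes.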

When it comes to performance I think it's often waaaay overstated as an issue. I can still write (and have written) something that's super optimised but make the function that wraps it pure, or at least box it in some way that it's manageable. Because our application is still C# I had to develop a library to help us write functional C# [1]. I had to build immutable collections that were performant - the cost is negligible for the vast majority of use-cases.

I believe our biggest problem as developers is complexity, not performance. We are still very much working with languages that haven't really moved on in 20+ years. Yes, there's some improvements here and there, but we're probably writing approximately as many lines of code to solve a problem as we were 20 years ago, except now everyone expects more from our technology. And until we get the next paradigm shift in programming languages we, as programmers, need coping strategies to help us manage these never-ending projects and the ever increasing complexity.

Does that mean OOP is dead as an idea? No, not entirely. It has some useful features around extensible polymorphic types. But, shoehorning everything into that system is another billion dollar mistake. Now when I write code it always feels right, and correct, I always feel like I can trust the building blocks and can trust the function signatures to be honest. Whereas the OOP paradigm always left me feeling like I wasn't sure. I wish I'd not lost 10+ years of my career writing OOP tbh, but that's life.

Is functional programming a panacea? Of course not, programming is hard. But it eases the stress on the weak and feeble grey matter between our ears to focus on the real issue of creating ever more impressive applications.

I understand that my reasons don't apply to all programmers in all domains. But when blanket statements about performance are wheeled out I think it's important to add context.

[1] https://github.com/louthy/language-ext

scarejunba
After Minecraft, none of this makes any sense anymore. That is a genre-defining cultural masterpiece and runs on the JVM which is fosbury-flopping all over your registers on purpose. And everyone used to repeat some folk wisdom about Java back then.
hinkley
I had some truly dismal experiences with code generators in the 90's. At one point I blamed code generation itself, automatically dismissed any solution out of hand, and was proven right quite a few times.

It wasn't until I used Less on a project that I encountered a generator that did what I expected it to do in almost every situation. It output essentially what I would have written by hand, with a lot less effort and better consistency.

I expect people who adopted C felt roughly the same thing.

People presenting on OOAD occasionally warn about impedance mismatches between the way the code works and the way you describe it[0]. If what it 'does' and how you accomplish it get out of sync, then there's a lot of friction around enhancements and bug fixes.

It makes me wonder if this is impossible in FP, or just takes the same degree of care that it does in OO.

[0] a number of cow-orkers have 'solved' this problem by always speaking in terms of the code. Nobody has the slightest clue what any of them are talking about 50% of the time.

arximboldi
There is some truth to that. I write a lot of performance intensive interactive software (music software, graphics software), and use C++ for that. However, you can bring a lot of the FP into C++ world, use it when appropriate, and reason about performance all the way through. I spend a lot of time building tools to make it easier, like for example: https://github.com/arximboldi/immer https://github.com/arximboldi/lager
AndrewStephens
I agree. I haven’t done a lot of FP but, as a person who is used to knowing how my code will be executed, I find it very difficult to map what I want the machine to do onto functional code.

Functional Programming might have great advantages in correctness but sooner or later the code is going to be run on a real CPU with real instructions and all the mathematical abstractions don’t mean much there.

That said, I can see they have their place for specialized areas.

dustingetz
A niche like JVM and AWS
namelosw
The exact same could be said of SQL/Bash/Lua etc., which are sometimes used by accountants/admins/game designers. Those languages sit at the edge of the systems, but they are obviously not non-starters.
jweir
You have inverted this - realtime graphics is the fringe. Most code written does not need to be optimized for the machine. It needs to be optimized for business logic and human understanding.
jimbokun
But it should be fine, then, as a replacement for anything written in Javascript, Ruby, Python, etc.

(Also, addressed in the video at the end, answering an audience member question.)

keymone
> core part demands efficiency

what about development efficiency? maintenance efficiency? developer onboarding efficiency? there are many efficiencies companies care about.

de_watcher
I'm a true believer that with C++ and lots of scary templates you can do FP that maps exactly on what you want it to be doing.
jolux
Jane Street does a fair bit of high-performance OCaml...
agoodpr838
Sure for hardware, real time, embedded use cases (and probably others), makes sense.

Does it matter for data analysis and most web apps, infra as code, etc? Which data scientists do you know fetishize how Python is laying out memory?

OOP is a hot mess. Yes, I know, you’re all very well versed in how to use it “right”, but the concept enables a mess. It’s the C of coding paradigms when it would be great to have a paradigm that pushes towards Rust, and reduces the chance for hot messes from the start.

Most of this work is organizing run of the mill business information. Why it works from a math perspective is more universally applicable and interesting anyway.

cousin_it
> OOP is a hot mess

Since most people can't program, I think technologies that allow messiness in the name of accessibility (OOP, Excel, Flash) are a net good.

crimsonalucard
With JavaScript as the most popular language out there I would say that this low level core stuff is now the fringe.
bryanphe
In terms of performance, the way we build applications today is such a low bar that IMO it opens the door for functional programming. Even if it is not as fast as C or raw assembly - if it is significantly faster than Electron, but preserves the developer ergonomics... it can be a win for the end user!

I created an Electron (TypeScript/React) desktop application called Onivim [1] and then re-built it for a v2 in OCaml / ReasonML [2] - compiled to native machine code. (And we built a UI/Application framework called Revery [3] to support it)

There were very significant, tangible improvements in performance:

- Order of magnitude improvement in startup time (time to interactive, Windows 10, warm start: from 5s -> 0.5s)

- Less memory usage (from ~180MB to <50MB). And 50MB still seems too high!

The tooling for building cross-platform apps on this tech is still raw & a work-in-progress - but I believe there is much untapped potential in taking the 'React' idea and applying it to a functional, compile-to-native language like ReasonML/OCaml for building UI applications. Performance is one obvious dimension; but we also get benefits in terms of correctness - for example, compile-time validation of the 'rules of hooks'.

- [1] Onivim v1 (Electron) https://github.com/onivim/oni

- [2] Onivim v2 (ReasonML/OCaml) https://v2.onivim.io

- [3] Revery: https://www.outrunlabs.com/revery/

- [4] Flambda: https://caml.inria.fr/pub/docs/manual-ocaml/flambda.html

gowld
Brilliant. Haskell was standing outside the door not until it was good enough to be an industry standard, but until industry standards dropped so low that it became competitive!
cztomsik
Hey, and good luck with revery :) I am doing something very similar but I wouldn't ever consider any GC language for the low-level part.

I want to write UI in JavaScript because it's a really nice language for prototyping, but I also want it to be fast, and JavaScript is unpredictable. Now, this might not be the case with OCaml, but no matter what optimizations your compiler (or JIT interpreter) can do, you're still living in a lie; it's still some abstraction which is going to leak at some point.

I've recently removed quite a lot of rust dependencies (wrappers) and the speedup is very noticable, it's because abstractions always come with a cost and you can't just pretend you're living in a rainbow world.

BTW: you're not going to get much lower than 50M, cocoa has some overhead (10M IIRC), node has too (20M) and OCaml GC needs some heap too, and if you have any images, you need to keep them somewhere before sending to the GPU and GC to be fast needs to keep some mem around so that allocs are faster than just plain malloc.

BTW2: in rust world, it's common to see custom allocators and data-oriented programming because it starts to get noticeable and this is hard to do if you can't reason about memory.

If anyone is interested too, here's a repo https://github.com/cztomsik/graffiti

classified
I couldn't have said it better myself. Thx for those links! And, yes, the compiler's flambda variant is an exquisite delight.
novok
Is it fast because it's native and typed, or fast for other reasons? The speed hierarchy I've found goes: dynamic types w/ GC = 8x slower than C, static types w/ GC = 3x slower than C, and static types w/ hybrid memory management like reference counting = 2x slower than C.
hiccuphippo
Does Revery use a native toolkit (winforms, gtk, etc) or is it also a webview like electron?

I've seen a couple of gui toolkits in rust following the Elm architecture and I think it's an amazing idea. It would be great if I was able to create apps like this using something like Qt behind the scenes.

cztomsik
revery does custom rendering on the GPU, just like Flutter & SwiftUI (and graffiti)
tick_tock_tick
They already said they were working in games. None of what you said applies to that field.
Scarbutt
I would say "real time graphics" is one of the niches FP is not well suited for, most business software doesn't need to work at the level of the machine.
pjmlp
Ironically the first CAD workstations were developed in Lisp, and Naughty Dog is famous for their Lisp/Scheme based engines.
grumpyprole
There is certainly prior art for complex games running smoothly in Haskell: https://wiki.haskell.org/Frag

This particular solution used functional reactive programming, essentially a composition of signal/event processing functions/automatons.
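As a rough sketch of that idea (in Python rather than the Haskell used by Frag; the names are invented): treat a signal function as a transformer of sample streams, and build pipelines by plain composition.

```python
# A "signal function" here is any function from a stream of samples
# to a stream of samples; composing two chains the processing stages.
def lift(f):
    """Lift a per-sample function into a signal function."""
    return lambda samples: [f(s) for s in samples]

def compose_sf(first, second):
    return lambda samples: second(first(samples))

double = lift(lambda x: x * 2)      # e.g. amplify an input signal
clamp = lift(lambda x: min(x, 10))  # e.g. limit it afterwards

pipeline = compose_sf(double, clamp)
assert pipeline([1, 4, 7]) == [2, 8, 10]
```

Real FRP systems add time and stateful automatons, but the core is the same: whole pipelines are built by composing signal-processing functions.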

jstimpfle
If I remember correctly, in that thesis the author mentioned explicitly that the game didn't run very fast. If you watch the video from 2008, the in-game stats list framerates >60fps but the game itself is very laggy. Maybe there is a separate renderer thread?
ansible
Ten years ago, that was the only substantial game written in Haskell. That you're citing that same game now is a bit telling.

Note the upload date:

https://www.youtube.com/watch?v=0jYdu2u8gAU

willtim
Ok here's a talk about making Haskell games that took place last week: https://keera.co.uk/blog/2019/09/16/maintainable-mobile-hask... I don't deny that making games in Haskell is niche, but it's certainly possible. Frag was just an example I remembered (ten years is recent for an old git like me).
yogthos
Here's a talk on making real world commercial games with Clojure on top of Unity.

https://www.youtube.com/watch?v=LbS45w_aSCU

jcelerier
come on, the "games" showcased here have the complexity level of a 2003-like game and they barely achieve 200 fps on modern hardware. When I look at similar trivial things ran with no vsync on my machine, it's >10000 fps
yogthos
That's just moving goalposts. The games showcased are of the same complexity as plenty of real-world commercial games that are making good money in 2019. If you're doing triple-A game development, maybe you need to get down to the metal, but for tons of games you'll be perfectly fine with FP.

Also worth noting that the idea is to use FP around stuff like the actual game logic, and then handle rendering details imperatively.
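That split can be sketched as follows (a Python stand-in; the state and inputs are invented for the example): a pure step function owns the game rules, and the imperative loop at the edge owns input and rendering.

```python
# Pure game logic: (state, input) -> new state, with no side effects,
# so the rules are trivially testable frame by frame.
def step(state, pressed):
    x, score = state
    x = x + (1 if pressed else 0)  # move right while the key is down
    score = score + x              # score accrues with position
    return (x, score)

# Imperative shell: the loop reads input and draws at the edges.
def run(frames):
    state = (0, 0)
    for pressed in frames:
        state = step(state, pressed)
        # a real game would render here, e.g. draw(state)
    return state

assert run([True, False, True]) == (2, 4)
```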

thowfaraway
The Poker prototype could be from 30 years ago, and drops to 15FPS on any game action! Arcadia is a neat toy at this point, but run far away if you are looking to do real world commercial development.
jcelerier
> The games showcased are the same complexity as plenty real world commercial games that are making good money in 2019

I mean, fucking todo apps are making "good money" in 2019; it does not mean that they are good examples. These kinds of presentations should improve on the state of the art, not content themselves with something that was already possible a few decades ago. No one gets into game dev to make money; the point is to make better things than what exists - be it gameplay-wise, story-wise, graphics-wise...

thowfaraway
I think you are seriously overselling the talk, and what Arcadia is ready for.

you: Here's a talk on making real world commercial games with Clojure

video: dozens of game jam games have been made

dllthomas
Even assuming that that's true (and it very well may be), the general topic wasn't games, and there are many places where "the norm" in programming as a whole differs from the norm in performance-sensitive areas.
shadowgovt
I have a suspicion this is only semi-true.

For controlling what the CPU and RAM are doing? Yes. The graphics shader, on the other hand, is a pipeline architecture with extremely tight constraints on side effects. The fact that shader languages are procedural seems to me more an accident of history or association than optimal utility, and the most common error I see new shader developers make is thinking that C-style syntax implies C-style behaviors (like static variables or a way to have a global accumulator) that just aren't there.

The way the C-style semantics interface to the behavior of the shader (such as shader output generated by mutating specifically-named variables) seems very hacky, and smells like abstraction mismatch.

Const-me
> is a pipeline architecture with extremely tight constraints on side-effects

That was true 10 years ago. Now they're just tight constraints, but not extremely so: there are append buffers, random-access writeable resources, group shared memory, etc.

> The way the C-style semantics interface to the behavior of the shader seems very hacky

I agree about GLSL, but HLSL and CUDA are better in that regard, IMO.

vbarrielle
Not exactly shaders, but for GPGPU stuff, Futhark [0] seems to show that a functional paradigm can be very good for producing performant and readable code.

[0] https://futhark-lang.org/index.html

Oct 17, 2019 · 3 points, 2 comments · submitted by oska
fargle
Good presenter. Critique: I think an awful lot of the time and energy is spent analyzing the market-type reasons for each language being popular. But by the time we get to supporting the thesis that OOP is less relevant, or not central, to things like C++ and Java, there's not enough time and energy left to really bring it home. And I'm not sure I'm convinced, although it is an excellent theory.

Here's what I think: The dichotomy exposed in essays like https://blog.codinghorror.com/separating-programming-sheep-f... and The Camel has Two Humps: http://eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf, is real. I see it every day.

I think there is another dichotomy. Many people who can code, can think the way needed to write software (not just fumbling and guessing), and can understand a structural and procedural way of thinking. But a large proportion just cannot understand functional programming. "Poof!". I don't think it's education. I think it's a lack of a particular and obscure innate mathematical ability. And I don't think it's a defect either.

People's brains are wired differently. Visual vs. verbal. Well, FP, certain types of math (say, number theory), and temporal thinking (threads, race conditions, liveness proofs) are very different kinds of thinking than basic "recipe-like" procedural coding.

So FP isn't the norm because out of 100 people, perhaps 5 can program. Out of that 5, perhaps 1 really "gets" FP. Maybe one more can force it and get it done.

My experience is watching co-workers try to maintain and manage a simple tool written in a functional language. It made perfect sense to me, but it was a disaster. Folks, some things that are easy-ish for you are just not easy for others. It's not "smart" vs "not-smart". Understand, not everyone is wired up the same way.

And the ability to really get FP is one of those things that is neither easy nor common.

oska
In case you didn't see it, this video was re-submitted and a much longer discussion thread ensued:

https://news.ycombinator.com/item?id=21280429

Oct 15, 2019 · 2 points, 0 comments · submitted by mikece
Oct 12, 2019 · 2 points, 0 comments · submitted by gyre007
Oct 10, 2019 · 2 points, 0 comments · submitted by emerongi
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.