HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Ron Pressler - Loom: Bringing Lightweight Threads and Delimited Continuations to the JVM

Curry On! · YouTube · 3 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Curry On!'s video "Ron Pressler - Loom: Bringing Lightweight Threads and Delimited Continuations to the JVM".

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
pron
Not only do I not dislike type theory vehemently, I really like some of its uses (like linear types; don't believe me? I praised them in a recent talk: https://youtu.be/r6P0_FDr53Q?t=553). I'm not too fond of deductive proofs as a software verification method because so far they have scaled worse than other formal methods (which don't scale that well, either). What I strongly dislike is:

1. Fetishization of certain formalisms, particularly combined with a rather limited understanding of said formalisms or formal systems in general.

2. Appeals to "the power of math" (mathiness) when discussing the extremely complex disciplines of engineering, especially when combined with a very limited knowledge of math.

3. Claims of the sort, "I like A more than B, hence I conclude that A is the best thing in the world", especially when combined with little knowledge of C or D (or A and B, really).

And I don't like cults.

It just so happens that those qualities are common among those who claim to particularly like type theory or functional programming (or, at least, among those who publicly talk or write about them). I don't think it's accidental, either. I think that some luminaries in that field encourage that kind of cult-of-magical-mathiness behavior, so this sort of thinking has become a part of the ethos of that community.

willtim
That programmers are becoming more interested in maths and the mathematics of program construction can only be a good thing. As someone clearly interested in maths, I am surprised you seek to attack them as "cults". I just see enthusiasm, no different to how folks have enthused about object-oriented programming for decades.

As Brian Goetz (Java Language Architect) has said, Java will never become Haskell, but it will continue to take ideas from it. So even Oracle is drumming up interest in functional programming. About time, I say; industry has been neglecting ideas from functional programming for too long.

pron
> I am surprised you seek to attack them as "cults".

I don't call people who actually study and understand math a cult. Those people don't usually make outlandish claims like "if only people applied some math magic dust, writing correct programs would be easy," because they know the really difficult challenges (after all, it is complexity theory, a mathematical theory of sorts, that tells us that verifying programs is very, very hard). But that's not what I see in the FP community. I see mathiness, not math. I also see cultish adoration; the lambda calculus was discovered, not invented? Come on! It's not only wrong, but it's a kind of religious adoration. I call cults cults.
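(An editorial aside, not from the thread: the result usually behind that parenthetical is, strictly speaking, computability-theoretic rather than complexity-theoretic, namely Rice's theorem, stated minimally below; on the complexity side there are results such as the PSPACE-completeness of LTL model checking.)

    % Rice's theorem: every non-trivial semantic property of programs
    % is undecidable. Here \varphi_e is the function computed by program e,
    % and \mathcal{PC} is the class of partial computable functions.
    \[
      \emptyset \neq P \subsetneq \mathcal{PC}
      \;\Longrightarrow\;
      \{\, e \mid \varphi_e \in P \,\}\ \text{is undecidable.}
    \]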

ferzul
Oh, I don't think that's fair. The view that mathsy things are discovered, not invented, is hugely widespread among people who know what they're talking about. I don't know what I'm talking about, and I think such matters are invented, not discovered. But I do think Haskell is a great language that will drastically improve the quality of maybe even any project that doesn't need C-like efficiency. These concepts are independent.
pron
Creating a formal system, an artificial language, is not a mathematical discovery, and it has nothing to do with "math things." There are interesting "math things" one can discover about Python, but no one would take you seriously if you said Python was discovered rather than invented, and no one who knows the history of logic would take seriously any claim that the lambda calculus was discovered. The person who invented the lambda calculus said it was invented in the very paper in which he presented it, so no, the people who say it was discovered do not know what they're talking about, at least not on that matter.
willtim
The lambda calculus is a lot simpler than Python. It bears a striking similarity to Gentzen's natural-deduction rules and is essentially isomorphic to them. It is also very well studied, and its properties are well known. Hence it makes an excellent basis for a programming language. The idea that the lambda calculus was discovered and not invented was recently entertained by Phil Wadler. Given his academic credentials, I think we can assume he knows what he is talking about.
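(A hedged illustration, mine rather than willtim's, of the correspondence he mentions; it holds for typed lambda calculi, and Java generics are close enough to sketch it, Java being the thread's subject. Ignoring null and exceptions, a generic method that typechecks is a natural-deduction proof of the implication its type spells out.)

    import java.util.function.Function;

    final class CurryHoward {
        // Read Function<A, B> as the implication A -> B.

        // Implication elimination (modus ponens): from A -> B and A, conclude B.
        static <A, B> B modusPonens(Function<A, B> f, A a) {
            return f.apply(a);
        }

        // Transitivity of implication: from A -> B and B -> C, conclude A -> C.
        // The "proof" is ordinary lambda-calculus composition.
        static <A, B, C> Function<A, C> compose(Function<A, B> f, Function<B, C> g) {
            return a -> g.apply(f.apply(a));
        }
    }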
pron
Why can we assume that? Is he an expert on the history of logic? That he's an expert in an adjacent field doesn't make him an expert on that at all. If you listen to his talks and compare them with the actual texts he mentions, you'll see that he's wrong.

And the similarity is not "striking" at all. It's by design. It is only striking if you don't know the actual history of how these formal systems developed. A couple of years ago I composed an entire anthology on the subject: https://pron.github.io/computation-logic-algebra

willtim
You are quick to accuse others (even a professor and ACM fellow) of being "wrong", but what exactly are your credentials?
macromagnon
I don't think you can dismiss a Platonic view of mathematics as mere cultish adoration, or as simply wrong.

There are many problems with the Platonic world of ideals, and a long history of arguments and counterarguments, but definitely nothing is settled, and most working mathematicians have an intuitive feeling of Platonism, a sense that there is a "right" solution to be found out there.

pron
It has nothing to do with the Platonic view of mathematics, which I don't try or wish to dismiss at all. Even if the real numbers are real and, say, complex numbers were discovered, the lambda calculus was still very much invented. The lambda calculus is a formal system, an artificial language, first described in a paper in which the author, Alonzo Church, talks about the arbitrariness and aesthetics involved in inventing a formal system and, ironically, specifically talks about the process of invention in that context: https://news.ycombinator.com/item?id=20699465

So claiming that the lambda calculus was discovered is 1. factually wrong, 2. fosters a deep misunderstanding of what formal systems are, and 3. encourages a mystical view of mathematics. So it is at once wrong and mystical, and so I find it cultish.

Note that when discussing, say, the use of complex analysis, researchers don't usually spend time on the claim that complex numbers were discovered. But some FP luminaries do, as part of their (perhaps unintended) mission to instill awe and admiration toward their favorite programming style at the expense of understanding.

discreteevent
Your points reminded me of this:

A naturalist account of the limited, and hence reasonable, effectiveness of mathematics in physics

Lee Smolin

https://arxiv.org/pdf/1506.03733

Also, the book on which the article is based, "The Singular Universe and the Reality of Time", is a physics book, but I think it has huge relevance to programming live systems (as opposed to, e.g., compilers). The big difference between programming and doing mathematics is that when programming we have to deal with time. (But that's also what makes it so interesting - and frustrating.)

pron
As someone who uses mathematics for programming (just look at my blog: https://pron.github.io/), I am very much in favor. What I'm against is people who don't understand math promoting a magical view of it (not math but "mathiness"), because they don't realize its strengths and challenges. That's what I find annoying in the FP community.
Ericson2314
There is more cult than cult leader. The people who enjoy functional programming find it vastly enjoyable/better/whatever. I fully admit it's hard to convey this and hard to demonstrate this empirically because the cost and scale of an adequate experiment would be prohibitive.

The biggest thing I don't like about the model checkers and other strategies is:

1. No separation of proof and proof search. Deductive proofs are a good lingua franca. I want to mix and match techniques to derive the proofs and have them work together. Many of the things you like have the potential to be repurposed as such "tactics". (See the sketch after this list.)

2. Composition of verified programs. Functional programmers really value composition, and the things you like tend to be used for "whole program verification". At the very least, while one can verify individual parts, there is no mechanized way to combine those proofs.
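(A minimal Lean sketch, mine rather than Ericson2314's, of the separation in point 1: the `by` script is a search procedure, while what the checker ultimately consumes is the proof term the script produces.)

    -- Term mode: the proof object is written directly;
    -- the kernel checks this term and nothing else.
    theorem addZero (n : Nat) : n + 0 = n := rfl

    -- Tactic mode: `by ...` runs a search script that elaborates
    -- to a proof term; the kernel still checks only that term, so
    -- proofs found by different strategies compose freely.
    theorem addZero' (n : Nat) : n + 0 = n := by simp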

I also think the evidence already points away from your scale argument. To put it simply, right now Agda is good for toy stuff, and the things you like are good for medium stuff, but only Coq and Isabelle/HOL are used for the biggest projects, like CakeML and CompCert. Both of those are proof-theoretic tools.

nosianu
> The people who enjoy functional programming find it vastly enjoyable/better/whatever.

That's a tautology...

Anyway... I think I "get" enough of FP, and I use parts of it. I certainly find it very useful. Types as well; very useful for me.

However, headlines like the current one, and comments along the same hype line, are just annoying. I do like practical people discussing these things, but the reality in this universe is that if you actually want to do something in the real world, there is no abstraction that's even close to complete. Everything leaks like a sieve. You compensate by choosing and mixing your tools and by adapting on the fly. Theory is a tool, nothing more. "I'm a functional programmer" - okay, I'm merely a "programmer", and I choose whatever works best and don't fuss about labels and purity.

The points you mention all work in parts, and disintegrate in other parts when they come into contact with a real problem plus its context.

These discussions are so... random, and often emotional and committed, because there is no context to measure against. It's just words. If there is a concrete problem, including the concrete context (the entire environment and business context), it's much easier to agree on something. Without that, every commenter comes in with their own vision of the context in their head, which depends on what they've been doing. In the end, any non-trivial piece of engineering (software or not) is much messier than any theory can account for.

Exactly what @pron said in the comment you replied to about appeals to "the power of math", akin to a belief in magical solutions. I find it extremely pretentious, TBH, although I certainly value math very highly. But it's a tool with severe limitations, not a magical instrument.

All this stuff - math, type systems, FP - appears to us so "pure", and as the perfect solution, because we invented it, and (also because of that) it is a great match for how our brain works, which means disregarding 99% of reality (and our senses are very limited to begin with). But whenever we try to apply a "pure" solution derived that way to the real world, we find that the disregarded and filtered-out aspects start appearing in all corners. If the person looking at it has the same bias, they concentrate only on the part that was successfully modeled and think "this worked perfectly", so in evaluating the solution you have the same filter again. And whenever you try to do something more complex and/or more complete with math, it's a major undertaking, taking many very good people a long time. So yes, math can be used to model more aspects, but that results in an explosion of complexity.

The closer you are to programming "math space" problems, the better mathy solutions like FP and type systems will fit. The more the real world has to be dealt with, the more the leaks in the abstractions become apparent. You can try to deal with them with the "mathy" solutions to remain in that space, but that becomes more and more complex, and at some point you end up with a "hack" to cut through the thicket.

I highly recommend learning the basics of (cell-level and below) biology and biochemistry, instead of yet another programming language, to see a very different kind of system. Then it becomes apparent that the differences between various programming tools are much smaller than the heated discussions make them seem. When you look through a magnifying glass, tiny differences look like mountains. FP, OOP, procedural programming, whatever - in comparison, it's all not that different, really. It should be more apparent when you remember it all runs on the exact same kind of hardware architecture. Our computing hardware is not diverse at all, really (quantum computing, as a very early and very limited experiment, is a noteworthy exception, as are the mostly older analog computing systems). The layers on top are all closely bound by and to it.

Ericson2314
> That's a tautology...

It almost is, except for "vastly". I'm saying there aren't very many people who enjoy it but find it, in their experience, merely slightly better.

Ericson2314
There's loads to respond to here, but let me just say

> Everything leaks like a sieve.

Not in my experience. I use 100% Haskell for work (well, and some SQL), mainly to write web apps. Not "math space" problems by any means. We use and have written many abstractions. None of them are particularly leaky. Some have implementations I don't thoroughly understand, and yet I have no trouble using them anyway; I know no better way to demonstrate a lack of leaks than that.

pron
> I fully admit it's hard to convey this and hard to demonstrate this empirically because the cost and scale of an adequate experiment would be prohibitive.

Ah, but you see, here we have a problem. If you claim that your favorite technique has a big, measurable impact on software development costs, then observing it shouldn't be hard at all, even without careful empirical studies. There are many problems with free markets, but applying selection pressures that quickly pick adaptive techniques is something markets are actually good at. And if your claim is that the benefit is not easily measurable because it's not so large and decisive, then why is your claim important at all?

And, BTW, I think the preference for a programming style is very personal. I learned and used FP before I learned OOP, and I like both (maybe for different things), but I can't say I have such a strong preference for either. They're both OK.

> but only Coq and Isabelle/HOL are used for the biggest projects, like CakeML and CompCert. Both of those are proof-theoretic tools.

Coq and Isabelle have so far only been used to verify deep properties of tiny programs, up to ~10KLOC; that's 1/5 of jQuery. Even then the effort was huge. Model checkers have been used on much larger programs, and, at least for small programs, they're commonly and regularly used in industry (especially in safety-critical hard realtime systems).

lmm
> If you claim that your favorite technique has a big, measurable impact on software development costs, then observing it shouldn't be hard at all, even without careful empirical studies. There are many problems with free markets, but applying selection pressures that quickly pick adaptive techniques is something markets are actually good at.

I'm not convinced this is true in an environment where every problem is unique and failure rates are already high - humans are known to be very bad at working with low probabilities. Even if we're talking about an order-of-magnitude difference, who could distinguish between a startup - or an internal new project - with a 1.5% success rate versus a 15% success rate?

That goes doubly when we're talking about costs: a friend who'd worked for several investment banks observed that they all had roughly the same number of developers overall, but the distribution of those developers was radically different: bank X would have 5 programmers working in area A and 50 programmers working in area B, while bank Y would have the opposite, even though both were solving the same problem in area A and area B. Could you tell whether Google or Facebook has 500 or 5000 or 50000 developers from the outside? Does Uber have 50 or 500 or 5000? (I genuinely don't know).

To my mind the most compelling case: WhatsApp apparently did succeed with an order of magnitude fewer developers than comparable companies - and the industry responded with a collective shrug. (My interpretation is that the industry is still so immature - or so fast-moving - that an order-of-magnitude cost saving still doesn't matter; software produced at 10x what it "should" cost can still be amply productive/profitable, and it's easier to increase profits by increasing revenue than by cutting costs.)

pron
> who could distinguish between a startup - or an internal new project - with a 1.5% success rate versus a 15% success rate?

But if we can't distinguish the results, do they really matter? After all, the whole point of changing how we work is to get a measurably better result. If it's not better, why change? (of course, assuming there's a positive effect at all). Something cannot be at once very important and unnoticeable.

> To my mind the most compelling case: WhatsApp apparently did succeed with an order of magnitude fewer developers than comparable companies - and the industry responded with a collective shrug.

A shrug? You can draw a straight line from that to the work I'm doing now in OpenJDK, at the very heart of the mainstream. I know that because WhatsApp's technical success is partly how I sold the idea. Similar efforts are taking place in most other mainstream software platforms. We very much took notice, and we were very much convinced, and that's why we're doing what we're doing. And the contact we've had with the world's biggest software companies and software-dependent companies (including your own employer) about this work shows us that they've taken notice as well.

lmm
> But if we can't distinguish the results, do they really matter? After all, the whole point of changing how we work is to get a measurably better result. If it's not better, why change? (of course, assuming there's a positive effect at all). Something cannot be at once very important and unnoticeable.

It can be at once very important and poorly understood. I'd draw an analogy to the state of mathematical analysis in the era of Riemann and Cauchy, where no rigorous basis for the techniques existed; skilled practitioners could and did produce correct theorems, while unskilled students applied the same techniques and produced nonsense. So even when there was ultimately an underlying rigorous reality (as we found out decades later), in practice being able to work effectively was far more a matter of individual flair and taste.

I feel like that's where we are with software development practices at the moment. We all know there's huge variation in productivity, but even measuring it - much less explaining the causal factors - is still a black art. (I'd think the same would be true of artists or research scientists - anyone whose work is fundamentally about original innovation. I'm sure there are techniques and approaches that make an order-of-magnitude difference in productivity, but there's no agreement about what they are.)

pron
Perhaps I'm just more incredulous. Many more people are just as sure that homeopathy works and that vaccines cause autism, so I'd just as soon assume it's psychology at work. But I think what you're saying is that we don't yet have beneficial techniques, just ideas that could lead to them. I can't argue with that because I have no information one way or the other. All I'm saying is, when someone does figure out how to develop software considerably better/faster, that achievement will be hard to hide.
lmm
I suspect that the effective techniques are known (or "known") to some practitioners, but buried among a pile of ineffective techniques, with different people swearing by different techniques based on their experiences. Similar to how traditional medicine can be a mixture of genuinely important treatments for local medical circumstances and completely ineffective treatments that happened to be (spuriously) correlated with good outcomes.

You're quite right about how it looks from the outside; the possibility that I'm just experiencing my own set of spurious correlations is real and worrying. But ultimately there's only so far that I'm willing to overrule my own lived experience.

I wish I shared your confidence about future progress. There have been good and bad managers, and good and bad communicators, for decades, but we don't seem much closer to a generally understood method that can be applied by everyone.

Aug 12, 2019 · pron on Await in Rust
I completely agree. In Java we're taking a different approach: https://youtu.be/r6P0_FDr53Q (my talk also explains why this option may not be available to C/C++/Rust)
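(For flavor, a minimal sketch - mine, the thread contains no code - of the style the linked talk describes: plain blocking code on lightweight virtual threads instead of await/Promise chains. It assumes the Executors.newVirtualThreadPerTaskExecutor() API that Loom eventually shipped, and fetchAndProcess is a hypothetical stand-in for blocking I/O work.)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class VirtualThreadsSketch {
        public static void main(String[] args) {
            // One cheap virtual thread per task: a blocking call parks
            // the virtual thread, not the OS thread carrying it, so
            // straight-line code scales the way async code does elsewhere.
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    executor.submit(() -> {
                        Thread.sleep(100);        // parks only this virtual thread
                        return fetchAndProcess(); // sequential, no callbacks
                    });
                }
            } // close() waits for the submitted tasks to finish
        }

        // Hypothetical stand-in for some blocking I/O.
        static String fetchAndProcess() {
            return "done";
        }
    }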
I think it's important to separate the issue of what monads are and how they're used from the question of when they should be used at all. While monads are very useful for working with various streams/sequences even in imperative languages, they are used in Haskell for what amounts to effects in pure-FP, and that use ("Promise" in the article) has a much better alternative in imperative languages. Arguably, it has a better alternative even in pure-FP languages (linear types).

Here's a recent talk I gave on the subject: https://youtu.be/r6P0_FDr53Q
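(To make the first half of that distinction concrete, a small illustration - mine, not from the comment - of the "monads for sequences" use in an imperative language: java.util.stream's flatMap plays the role of monadic bind for list-like computations.)

    import java.util.List;

    public class FlatMapExample {
        // A tiny hypothetical domain type for the example.
        record Order(List<String> items) {}

        public static void main(String[] args) {
            List<Order> orders = List.of(
                    new Order(List.of("apple", "pen")),
                    new Order(List.of("book")));

            // flatMap is the sequence monad's bind: each order maps to a
            // stream of items, and the streams are flattened into one.
            List<String> allItems = orders.stream()
                    .flatMap(order -> order.items().stream())
                    .toList();

            System.out.println(allItems); // [apple, pen, book]
        }
    }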

Jul 25, 2019 · 2 points, 0 comments · submitted by tosh
Jul 24, 2019 · 1 point, 0 comments · submitted by tosh
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.