HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Simon Peyton-Jones: Escape from the ivory tower: the Haskell journey

Churchill College, University of Cambridge · Youtube · 8 HN points · 9 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Churchill College, University of Cambridge's video "Simon Peyton-Jones: Escape from the ivory tower: the Haskell journey".
Youtube Summary
Churchill College's annual Computer Science lecture.

In this talk Simon discusses Haskell's birth and evolution, including some of the research and engineering challenges he faced in design and implementation. He focuses particularly on the ideas that have turned out, in retrospect, to be most important and influential, sketches some current developments, and makes some wild guesses about the future.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 01, 2021 · vanderZwan on Julia 1.7 Highlights
That's because Haskell is like the programming language equivalent of The Gig That Changed The World[0], although I guess Algol60 also has a strong claim to that title.

Let me explain by quoting these paragraphs from Roger Ebert's review[1] of 24 Hour Party People:

> As the film opens, Wilson is attending the first, legendary Sex Pistols concert in Manchester, England. (...) Wilson is transfixed by the Pistols as they sing "Anarchy in the U.K." and sneer at British tradition. He tells the camera that everyone in the audience will leave the room transformed and inspired, and then the camera pans to show a total of 42 people, two or three of them half-heartedly dancing in the aisles.

Sounds like the average language designer entranced and inspired by their first time grokking Haskell.

> Wilson features the Pistols and other bands on his Manchester TV show. Because of a ban by London TV, his show becomes the only venue for punk rock. Turns out he was right about the Pistols. They let loose something that changed rock music. And they did it in the only way that Wilson could respect, by thoroughly screwing up everything they did, and ending in bankruptcy and failure, followed by Sid Vicious' spectacular murder-suicide flameout. The Sex Pistols became successful because they failed; if they had succeeded, they would have sold out, or become diluted or commercial. I saw Johnny Rotten a few years ago at Sundance, still failing, and it made me feel proud of him.

I could rephrase that last sentence as "I checked out a talk by Simon Peyton Jones from a few years ago, still avoiding success at all costs[2], and it made me feel proud of him" and it would be absolutely true.

And no, I also did not expect to find a parallel between Haskell and punk music, but there you go.

[0] https://openculture.com/2015/06/the-sex-pistols-1976-manches...

[1] https://www.rogerebert.com/reviews/24-hour-party-people-2002

[2] https://www.youtube.com/watch?v=re96UgMk6GQ&t=1372s

Gatsky
This is the best HN comment I have seen for quite some time.
vanderZwan
If you watch the linked SPJ video, I timestamped it at the moment where he explains what he means by "avoiding success at all costs". But if you watch a bit longer, he also shares the story of a GHC compiler bug: if you compiled a module on Windows, in a different directory than the current directory, and it contained a type error... the compiler would tell you there was a type error, and then delete the source file.

And instead of people going berserk over this, all he got was some friendly emails from people who just wanted to inform him they bumped into this problem, but it's fine, they had a workaround. Because Haskell people in those days were used to Breaking changes with a capital B.

I think that's at least a little bit punk.

plafl
> it would tell you there was a type error, and then delete the source file

Which makes perfect sense, because Haskell is about avoiding bugs at compile time.

sireat
I wonder what language would be considered post-punk?

Scala (experimenting with various styles) comes to mind.

How about proto punk? Something like ML?

Aug 05, 2020 · 3 points, 0 comments · submitted by pietroppeter
Yes, I didn't include the papers below in my parent response because "just read these dense academic papers to understand what monads are and why Haskell has them" doesn't tend to be good advice for Haskell beginners. But learning why monads came to be included in Haskell is useful if you want to dive deeper into this stuff:

Philip Wadler's original paper on Comprehending Monads [PDF]: https://ncatlab.org/nlab/files/WadlerMonads.pdf

Philip Wadler and Simon Peyton-Jones' paper on Imperative Functional Programming [PDF]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

Simon Peyton-Jones mentions both papers and talks about why monads came to be part of Haskell in his talk on the history of Haskell here, with examples that are a little more accessible than the above papers: https://www.youtube.com/watch?v=re96UgMk6GQ [introduction to purity and then monads starts around 30:07]

It's a funny talk and it's worth watching in full, but here's a nice soundbite:

“So what did we do for IO? Well, we just didn't have any. [audience laughs] So the joy of being an academic, right, is you can design a language — in 1990, I beg to inform you — that had no input/output. A Haskell program was simply a function from string to string. That was what it was. … But this was a bit embarrassing…”

He goes on to talk about other ideas they explored to create effects, why they settled on monads, and why he wishes now they had called them something like “workflows” (as F# later did[1]) to make them sound less intimidating.
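As a minimal illustration of why "workflows" is a fitting name (my own sketch, not an example from the talk): an `IO` value in Haskell is a first-class description of a sequence of effects, and do-notation chains such descriptions together; nothing actually runs until the runtime executes `main`.

```haskell
-- Hypothetical example: 'greeting' is a pure function; 'main' is an IO
-- "workflow" built by sequencing smaller IO actions with do-notation.
greeting :: String -> String
greeting name = "Hello, " ++ name

main :: IO ()
main = do
  putStrLn (greeting "Wadler")        -- one step of the workflow
  putStrLn (greeting "Peyton Jones")  -- runs strictly after the first
```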

(Simon and Philip will both be at Haskell Exchange 2019 in London this coming week if anyone else, like me, enjoys spending two days as the dumbest person in the room: https://skillsmatter.com/conferences/11741-haskell-exchange-... )

[1]: https://blogs.msdn.microsoft.com/doriancorompt/2012/05/25/7-...

> I'm from Microsoft and when you see Microsoft documentation it often says, you know, x y or z is a rich something, right. In this case Haskell - or ghc's version of haskell is a rich language. What does this mean? Sounds good, doesn't it? But it always means this, right. That it's a large, complex, poorly understood, ill documented thing that nobody understands completely.

From a great talk by Simon Peyton Jones https://youtu.be/re96UgMk6GQ

Tarean
Linked the wrong talk, oops: https://www.youtube.com/watch?v=uR_VzYxvbxg
Jul 13, 2018 · 2 points, 0 comments · submitted by tpush
Unsound type systems are not necessarily a problem, but consider the problem the other way around: given a sound type system, how do we improve it in such a way that (a) the average Joe will understand it, (b) it decreases the overhead of writing the types, and (c) it helps the programmer write better programs?

Haskell's approach to this keeps the constraint of soundness: https://youtu.be/re96UgMk6GQ?t=52m5s

The whole video is very interesting even if you know nothing about Haskell, but are just interested in language design in general.

To summarize the part of the video I just linked: their approach so far has been to design the type system in such a way that programmer pain is reduced while soundness is not sacrificed at all. It is a painful constraint for language designers, but it allows you to gradually add functionality to the type system based on real-world examples. The goal can be described as: design a type system such that every working program can be typed in a sound and correct way, while minimizing the number of programs that can be typed but do not work as intended.
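A small sketch of what that soundness constraint buys the programmer (my own example, not one from the video): Haskell has no null, so possible absence must appear in the type, and the compiler forces every caller to handle both cases before the program will build.

```haskell
-- safeHead makes the partiality of "head of a list" visible in the type.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- The type checker rejects any use of the result that forgets the
-- Nothing case, so a well-typed caller cannot "go wrong" at runtime.
describe :: Maybe Int -> String
describe Nothing  = "empty list"
describe (Just n) = "starts with " ++ show n
```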

Aug 23, 2017 · Ace17 on D as a Better C
That's not so simple, because large adoption at a too early stage causes the language definition to freeze before it reaches maturity.

Simon Peyton-Jones from the Haskell community explaining exactly that: "What happens if [a language] become successful too quickly? You can't change anything!" https://www.youtube.com/watch?v=re96UgMk6GQ&t=1395

vram22
Is this the same as what I've read of as the (humorous) Haskellers' motto: Anything but success! or Avoid success at all costs? or something like that? :) Meaning maybe that they do not want to dilute the purity of their language for industry success?

Edit: A quick google finds this, which is enlightening:

https://news.ycombinator.com/item?id=12056169

The thing that's interesting about Haskell is that laziness had a payoff of leading to the discovery of monad-based IO [1].

[1] https://youtu.be/re96UgMk6GQ?t=31m22s

SPJ also noted that the original success of Haskell was due to their pursuit of laziness. Because laziness has no place in a system with side effects, the pursuit of laziness and functional programming is what led them to discover the usefulness of monads for describing effects in the first place. If they had been pursuing a strict language instead, they could have taken shortcuts, because sneaking in side effects is a lot easier. Yes, maybe the next Haskell will be strict, but getting to the point where that decision could be made required laziness.
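A minimal sketch of the point (my own example): under lazy evaluation nothing is computed until its value is demanded, so evaluation order is not something you can rely on to sequence side effects — which is why effects had to be described explicitly instead of snuck in.

```haskell
-- An infinite list is fine under laziness: it is only ever built as far
-- as demanded.
naturals :: [Integer]
naturals = [0 ..]

-- 'take n' forces exactly n elements. If 'map' performed side effects,
-- how many ran and when would depend on demand, not on the order the
-- code was written in.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) naturals)
```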

SPJ made a great lecture about this. It is very easy to follow and a fun insight into the academic history of modern functional languages https://www.youtube.com/watch?v=re96UgMk6GQ

May 29, 2017 · pron on Principles of TLA+
> An example is not being able to prove the termination of executing arbitrary programs

The theorem states that you cannot prove a program's termination any faster than running it. You're not violating any theorems.

> So, Im wanting it peer reviewed.

But it has been peer reviewed. Here is the publication http://www.sciencedirect.com/science/article/pii/S0022000005... in this journal https://en.wikipedia.org/wiki/Journal_of_Computer_and_System...

> So, such methods should allow us to achieve the same amount of complexity going upward using same abstraction and shortcut techniques.

First, I think you're misinterpreting (and exaggerating) what has actually been achieved. Looking at Hyper-V[1], 30KLOC of (preexisting and tested) C code have been verified in a year (out of a total of just 100KLOC), but only function-by-function, rather than global correctness properties. They wrote some global invariants, but didn't verify them. As I understand, such similar verification of various layers has been done, but no global end-to-end work. Whether low-level or high-level, there has been no composition across abstraction layers.

Second, such methods would not allow us to go "upward", because there's a reason why people concentrate on the lower level. The state space is smaller there.

Finally, I think you're also misinterpreting DeepSpec's work. It is at the forefront of verification of a certain kind, but if you look at their papers, when they talk about end-to-end verification, they're discussing programs that are 3KLOC of C. These programs are smaller than many JavaScript libraries. Also, Coq is not really a tool engineers can put to everyday use. This is what Simon Peyton-Jones, one of Haskell's designers, has to say about Coq and Isabelle: https://youtu.be/re96UgMk6GQ?t=50m56s

Formal methods are starting to work, but we are roughly in the same stage as aviation in WWI, while you are talking about "using the same methods" to get a manned craft to Mars. We may well get there, but it will take a while, and the methods will not be exactly the same.

[1]: http://d3s.mff.cuni.cz/research/seminar/download/2010-02-23-... and https://link.springer.com/chapter/10.1007/978-3-642-05089-3_...

nickpsecurity
"The theorem states that you cannot prove a program's termination any faster than running it. You're not violating any theorems."

Someone explained it poorly to me, then. Sounds like I'm still countering it, though, since my program that terminates it can move faster than the program it's terminating. I can have an infinite loop load and begin, with a downward counter interrupting it one or more cycles after it begins. I'm still using two programs and a context change to mitigate the artificial problem the theorem poses by using just one program looking at another.

"But it has been peer reviewed. "

How many people looked at it? For how long? And what diversity of backgrounds? Peer review of something critical to me doesn't mean people who look at theorems look at it and say "Good one. Done now." It's a social process that should involve the mathematically inclined, people who do applied math in that field, and non-mathematical people in the same domain with a track record of breaking proofs in practice. On the latter, there have been a number of breaks in cryptography because the oversimplified, abstract, elegant proof missed key details. Peter Gutmann also had a nice rebuttal of the reliability of formal methods, showing all the times formal specs and proofs were incorrect, with it taking years for people to spot the errors or disprove them by executing a program.

So, I have no doubt it looks good to mathematicians. I just want more effort on it from aforementioned kinds of people before I rely on it. Preferably a few man-years of people trying to solidify or counter it.

"rather than global correctness properties. They wrote some global invariants, but didn't verify them."

See, this is where being a specialist can help. I'm a bit unclear here. Can you get implicit, global correctness if you decompose your app into pieces that are each locally correct? And the interface conditions of function calls are correct? Or is that inadequate to you? The literature was unclear to me on this point.

Note: Three functions per day? Maybe we need to apply these tools to deeply-embedded 8-bitters with bytes of RAM or ROM. Then, a programmer doing full-verified work will feel really productive after "filling up the hardware" in a month or two. First time in verification history for someone to feel that productive. ;)

"It is at the forefront of verification of a certain kind, but if you look at their papers, when they talk about end-to-end verification, they're discussing programs that are 3KLOC C. "

Oh yeah, they're small. The reason I bring them up is that they're at the forefront of composing pieces. I figured you might look at what they're doing on abstraction and integration of proofs to get a better idea of the composing-correctness aspect of your research. Then others and I might learn from your observations.

"This is what Simon Peyton-Jones, one of Haskell's designers, has to say about Coq and Isabelle"

You're just not the guy to have at a verification party, Ron. You stay trying to depress us. :P I appreciate the link, though, since that guy is a hilarious speaker. He says they start learning Coq, disappear for two years, get all their pain receptors destroyed, and emerge (paraphrased) as zealots bringing the Gospel of Coq to all programs. Lmao. I've seen this disappearance-or-pain process. It's why I'm still ignorant of the details, due to not digging into the full learning process. Life is already rough enough right now haha.

"but we are roughly in the same stage as aviation in WWI, while you are talking about "using the same methods" to get a manned craft to Mars."

Nah, I'm talking about doing a few key components of the craft with formal methods, with the rest built with lightweight formal specifications, model checking, test generation, and especially human review. Recovery procedures kept simple and verified formally if possible. That's my assurance recommendation for your Mars mission. I do like your WWI aviation comparison. It also fits with my style of figuring out just how much I can cheat around the limitations to kick ass. Within your metaphor, the "Biafran Babies" is something that could've been done with WWI aviation.

http://www.cracked.com/article_19472_the-5-craziest-soldiers...

pron
> Sounds like I'm still countering it, though

If you think you could so easily circumvent mathematical theorems, why are you such a believer in formal methods? I don't understand exactly what your method is, but I can assure you you're not circumventing the halting theorem. There's a very precise definition of what it means to be faster. The halting theorem poses no artificial constraints on the problem; it's an extremely general theorem, and it certainly doesn't say how the decision must be made. You're perfectly allowed to serve as an OS and simulate the input program if you like, but that's not going to help.

> How many people looked at it? For how long? And what diversity of backgrounds?

The proof is very simple, and very similar to the proof that a system of cellular automata is Turing-complete: it points out that any nondeterministic Turing machine with a tape of length k cells can be simulated by k finite state machines. If it were possible to verify each component separately and combine the result efficiently, then any such TM could be verified, yet it is well known that it cannot be. The paper was written by well-respected academics, published in a respected journal, and has over twenty citations.

Now, the reason our response should be, "so verification is as hard as we thought", rather than "this is the end of the line", is because, like all results in computational complexity, it talks about the general case. It is possible that a large enough class of programs that are of interest are special cases that are easier to solve.

> Peter Guttman also had a nice rebuttal of reliability of formal methods showing all the times formal specs and proofs were incorrect with it taking years or so for people to spot the errors or disprove it by executing a program.

I don't know what you're referring to, but I'm not aware of any cases where mechanically checked proofs or model checker results have ever been found wrong. The spec can very well be wrong, but you're the one arguing the infallibility of formal methods.

> Preferably a few man-years of people trying to solidify or counter it.

No one is going to try and counter a mathematical theorem, with a very simple, well-understood proof.

> Can you get implicit, global correctness if you decomposed your app into pieces who are each locally correct?

I'm not sure what you mean by "implicit". A formal spec requires a formal proposition to check. Things can be true without us proving them, but formal verification is about verification, not about truth. Theoretically, programs could be correct without verifying or testing them in any way. As for building up a global correctness property end-to-end (from very low-level code on up): it is possible but very hard, and no one has ever been able to do it for even medium-sized programs. Obviously, working in a modular way is how people do it; that's how both math and software work. Yet it's very, very hard.

> Then others and I might learn from your observations.

I think you overestimate my abilities. I'm not an academic. Just a programmer with some experience using formal methods.

> You stay trying to depress us.

This is where you're wrong. Formal methods -- like aviation -- are awesome! But let's enjoy them for what they are, not what we imagine them to be. If you see someone speaking of formal verification as if it were a solved problem, and writing fully correct programs is just a matter of learning Idris, you can know one thing for certain: they've never actually used formal methods on non-trivial real-world programs. The FM experts are those lowering expectations the most. Watch some talks by Xavier Leroy.

> formal specifications, model-checking, test generation

All of these are formal methods (the last only if there's a formal contract).

> http://www.cracked.com/...

:)

May 14, 2017 · 3 points, 1 comments · submitted by mpweiher
codygman
Really great talk, as all I've watched by Simon have been.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.