HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Keynote: Guido Van Rossum

Next Day Video · YouTube · 7 HN points · 4 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Next Day Video's video "Keynote: Guido Van Rossum".

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Reminded me of this bit from Guido van Rossum's PyCon 2012 keynote:

https://youtu.be/EBRMq2Ioxsc?t=1374

"I know my operating system kernel will never be proven correct."

kmicklas
Of course he believes that, he's the creator of a language that doesn't even have a type system...
olewhalehunter
Well, it kind of does, but idiomatic Python is not very functional.
eru
What does that have to do with functional programming?

(I know Python3 has some optional gradual typing. And I know that Python ain't very functional---especially if you go by how Guido wants Python to be written. But I still don't see the connection---apart from that functional programming and academic theory on typing go hand-in-hand.)
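
For reference, a minimal sketch of the optional typing eru mentions (function and variable names are illustrative): the annotations are ignored at runtime and only enforced by an external checker such as mypy.

    from typing import List

    def mean(xs: List[float]) -> float:
        # The annotation is a hint for static checkers, not a runtime guarantee.
        return sum(xs) / len(xs)

    mean([1.0, 2.0, 3.0])   # fine at runtime and under mypy
    mean("oops")            # mypy rejects this statically; at runtime it only fails inside sum()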

Retra
It does; he just didn't want to attach it very strongly to his naming system
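
Retra's point, sketched: the interpreter does track types, it just attaches them to values rather than to names, and checks them at runtime rather than before it.

    x = 42
    print(type(x))        # <class 'int'>  -- the value carries its type
    x = "forty-two"
    print(type(x))        # <class 'str'>  -- the name freely rebinds to another type
    print("42" + "!")     # fine: both operands are str
    # "forty-two" + 42 would raise TypeError: types are still enforced, just at runtime
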
fulafel
He is very probably right.
nickpsecurity
There are two assertions:

1. They're not proving anything relevant to current software development.

That's a joke to anyone following formal verification. Key protocols, microkernels, hypervisors, safe concurrency, CompCert compiler, part of NaCl, Microsoft's static verification for device drivers... the list gets bigger every year. Quite a few are already practical.

2. I know my operating system will never be proven correct.

That's probably true if he uses one not designed for verification. The high-assurance field gave up on UNIX after prototypes like UCLA Secure UNIX and even the Mach work showed it was inherently too complex and leak-prone to secure in a high-assurance way. They moved to security kernels, separation kernels, language runtimes, trustworthy hardware, and so on. Each time, they tried to make the trusted part as correct and secure as possible. If they couldn't, a recovery option was added.

So, we have numerous secure and high-uptime systems deployed whose methods were published even if the source wasn't. He or others can always build on these if they want to replicate the results. His OS will at least be secure against most attacks and quite reliable in practice, with most errors recoverable and the rest not leading to data loss given a good backup/restore plan. Good enough for me. :)

Blaisorblade0
> So, we have numerous secure and high-uptime systems deployed whose methods were published even if source wasn't. He or others can always build on these if they want to replicate the results.

I know that's technically true, but not helpful, and that's why this line of thinking is becoming unfashionable.

As a researcher in Programming Languages, who publishes and has reviewed code: By that argument, there's no need to publish CompCert or Coq, which seems ridiculous. Publishing code can help adoption even if the code gets rewritten.

Publications without code often fail replication for many reasons (not necessarily involving foul play or even errors). I can't document all tricks needed to get the performance results in 20 pages. And replication in another scenario might show the results don't generalize easily (and nobody knew). So having sources provides the extra confidence you need before bothering.

And compared to other publications without code, the needed expertise seems much less common.

OTOH, papers do get used by the research community, possibly steering it in some direction (including the wrong one). And a few papers matter enough that people will attempt replication anyway (and learn whether they work).

When the results of a paper are "here's what we tried and got, new ideas are needed" (honestly, many papers on hard problems) then maybe sources don't matter.

nickpsecurity
It looks like you're making a good argument for a different conversation or tangent to this one. The comment you replied to was aimed at Guido van Rossum's claim up-thread that formal verification was useless and couldn't help his OS. I countered that claim. You're now saying techniques in published papers aren't beneficial without source. I'll address that since you took time to write it.

Actually, let's look at not publishing CompCert's source as an example. Under your theory, I can't learn anything at all from just the paper or binary. Here's what I think I can learn from it:

1. They claim they verified most of the C language in a compiler down to the assembly in Coq. John Regehr's testing confirms it has the low defect rate one would expect of a verified compiler. So, now I know (a) they did do a verified compiler for C, (b) the methods they describe might help other formal developers in their verifying-compiler work (give ideas at minimum), (c) a successful verification of a C compiler clears the way for more projects/funding on C or other compilers, and (d) anyone needing a verified C compiler has a supplier, AbsInt, whose work was rigorously evaluated.

2. Building on the former, the fact that the compiler produces correct output more reliably than most compilers means it can be used to check the correctness of highly-optimizing compilers. Automated testing can be directed at an optimizing compiler and CompCert, where the output of programs run through each is checked for equivalence (a rough sketch follows after point 3). If they're not equal, it's probably a bug in the unverified one, but maybe in CompCert itself. Likewise, I considered re-running a pile of test cases for a C program that were showing bugs through a CompCert-compiled version of that same program. Any bugs that disappeared might have been compiler bugs. Helps narrow things down.

3. A key method for both CompCert and CakeML was using a series of intermediate languages working from highest to lowest level w/ equivalence checks on each pass. People doing formal or informal methods for developing compilers can then apply that technique, maybe even those DSL's, to see if it improves their own work. They might start creating their own DSL's. If Racket and Red are any hint, just giving the ability to do DSL's with worked examples can lead to productivity or correctness improvements in many types of software.
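
The differential-testing idea from point 2, roughly sketched in Python (file names are illustrative; ccomp is CompCert's command-line driver, and gcc stands in for the unverified optimizing compiler under test):

    import subprocess

    def compile_and_run(compiler, source):
        # Compile `source` with the given compiler, run the binary, return its stdout.
        exe = "./" + source + "." + compiler
        subprocess.run([compiler, source, "-o", exe], check=True)
        return subprocess.run([exe], capture_output=True, text=True).stdout

    reference = compile_and_run("ccomp", "testcase.c")  # CompCert, the verified reference
    candidate = compile_and_run("gcc", "testcase.c")    # the optimizing compiler under test
    if reference != candidate:
        print("outputs differ: probably a bug in the unverified compiler, maybe in CompCert")

In practice the test programs also have to avoid undefined behaviour (Csmith-style generators take care of that), otherwise the two outputs can legitimately differ.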
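
And a toy rendering of the point 3 pattern (names invented for illustration): lower the program one small step at a time and check that each step preserves its meaning. CompCert and CakeML prove that equivalence once and for all in Coq/HOL; the sketch below merely tests it on one input.

    def eval_expr(e):
        # Source language: an int, or a tuple (op, left, right) with op in {"+", "*"}.
        if isinstance(e, int):
            return e
        op, l, r = e
        return eval_expr(l) + eval_expr(r) if op == "+" else eval_expr(l) * eval_expr(r)

    def lower(e):
        # One pass: lower an expression to stack-machine code.
        if isinstance(e, int):
            return [("push", e)]
        op, l, r = e
        return lower(l) + lower(r) + [(op,)]

    def run_stack(code):
        # Target language: a tiny stack machine.
        stack = []
        for instr in code:
            if instr[0] == "push":
                stack.append(instr[1])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if instr[0] == "+" else a * b)
        return stack[0]

    prog = ("+", 2, ("*", 3, 4))
    assert eval_expr(prog) == run_stack(lower(prog))  # the per-pass equivalence check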

Let's look at another example from high-assurance security. One of the demonstrators was closed-source GEMSOS. Its design and assurance methods are described here:

http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf

I used such papers to develop solutions that were highly resistant to penetration, because the techniques supporting them naturally lead to that: especially little code in the TCB, using FSM's to brute-force the analysis, input validation, avoiding pointers as much as possible, a safer language, and so on. The difference between popular/mainstream INFOSEC and what I learned from INFOSEC's founders in high-assurance was mind-blowing. A common example: covert-channel analysis of OS's is almost non-existent, but Kemmerer's paper tells you how to do it (Shared Resource Matrix). Likewise, highly-secure repos are uncommon, but Wheeler's survey (SCM Security) tells you how to fix that. Heck, I even learned to make sure my stuff could be formally verified later just by reading what was easy, moderately difficult, or hard in formal-methods papers. I called it Design for Verification, which I later found was similar to how hardware engineers do things (i.e. DFT, DFM). I learned that from reading about Cleanroom methodology, where they verified simplified code in box structures by eye before building for test runs.
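
A toy rendering of Kemmerer's Shared Resource Matrix just mentioned (attribute and operation names are made up for illustration): record which operations can read or modify each shared attribute, then flag any attribute one subject can modify while another can observe it, since that is a candidate covert channel to analyse further.

    # Rows: shared resource attributes. Columns: which primitives can read or modify them.
    matrix = {
        "file_lock_status": {"read": ["query_lock"], "modify": ["lock_file", "unlock_file"]},
        "disk_free_blocks": {"read": ["stat_fs"],    "modify": ["write_file", "delete_file"]},
    }

    # Any attribute that can be both modified and observed is a candidate channel;
    # the real analysis then rules out the paths already mediated by the security policy.
    for attribute, ops in matrix.items():
        if ops["read"] and ops["modify"]:
            print(f"candidate channel via {attribute}: "
                  f"sender uses {ops['modify']}, receiver uses {ops['read']}")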

So, I reject based on worked examples the claim that it's "not helpful" to have papers w/out source code that describe worked examples in enough detail to replicate for critical review or use in a new project. It's been very helpful to all kinds of people. I also agree with you that there are many reasons to prefer the source and/or data used in research over black boxes. It's good they published the source of CompCert. Even better that CakeML is fully open versus CompCert's restrictions. As I hinted at in point 1, other teams started using CompCert and CakeML as backend infrastructure for their own work on other languages. Them being commercial w/ free use for researchers would probably have reduced adoption, but I bet there still would be some if their DSL's were published. People could still do useful things, push the state-of-the-art, and publish the work with much fanfare.

So, there's still benefit, even if not maximal benefit, to having detailed papers with no source.

Guido brought up Python functional programming during the 2012 PyCon keynote: http://www.youtube.com/watch?v=EBRMq2Ioxsc#t=44m25s
Mar 16, 2012 · 1 point, 0 comments · submitted by plessthanpt05
Mar 15, 2012 · 1 point, 0 comments · submitted by cr4zy
Mar 15, 2012 · 5 points, 0 comments · submitted by spdy
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.