Hacker News Comments on
William Byrd on "The Most Beautiful Program Ever Written" [PWL NYC]
PapersWeLove · YouTube · 59 HN points · 16 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

First heard about program synthesis myself thanks to William Byrd's demo at the end of "The Most Beautiful Program[...]", https://youtu.be/OyfBQmvr2Hc @ 1:17:15
Watch this (skip to 1:17:28 to see a quick demo; the rest of the video is the explanation): It's exploiting logic programming to query for a program that satisfies some condition.
⬐ spikej
But take a step higher and you get this: https://youtu.be/FvNRlE4E9QQ
That is the issue with "The Most Beautiful Program Ever Written": because I am stupid, I have to think very hard. Are we defining a language using the language itself, or what?
"The Most Beautiful Program Ever Written" should be in this list.
⬐ ek8v
Believe it or not, I was at that PWL meetup in 2017! LISP and metacircular evaluation were definitely among the things that inspired the list. The "Micro-Manual for LISP" mentioned in the talk is on it.
I can't recommend the talk Dr. Byrd did on this enough: https://www.youtube.com/watch?v=OyfBQmvr2Hc
It's one of my favorite programs to play around with. I wrote about an Erlang implementation of this a short while back:
https://thingstoreadabout.substack.com/p/lisp-in-seven-parts
One of my all time favorite YouTube videos is a lecture by William Byrd. I find myself still coming back to this video, it was so fascinating to me as an undergrad. https://youtu.be/OyfBQmvr2Hc
⬐ jmkr
I think this was the first Will Byrd video I saw, and what got me interested in Scheme and PL. He has some great tutorials on YouTube too.
I still haven't dived too much into miniKanren, but it's on the list with the Reasoned Schemer.
⬐ kanzenryu2
You might enjoy this amazing talk. I've linked to the part where the next 20 seconds of explanation turn the amazing up to 11. https://www.youtube.com/watch?v=er_lLvkklsk&t=1436s
Lisp is 10 lines of code. If this is not your starting point, you are already lost: https://youtu.be/OyfBQmvr2Hc
Equational reasoning makes TDD and property-based testing trivial (encode rules by examples, base cases, and corner cases, which is the most natural way for humans to think); rule-based systems are easier to extend and correct by design (bounded set of possible states); and it provides more powerful and elegant query languages (e.g. Datalog). See https://youtu.be/OyfBQmvr2Hc (skip to the 1h mark to see some examples)
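The rule-based idea mentioned above can be sketched concretely. Below is a toy Datalog-style evaluation in Python (the facts, rule, and names are entirely illustrative, not from any of the linked systems): rules are encoded once, and new facts are derived by iterating to a fixpoint.

```python
# Toy Datalog-style evaluation: derive ancestor/2 from parent/2 facts
# by applying the rules repeatedly until no new facts appear.
# All facts and names here are illustrative.
parent = {("alice", "bob"), ("bob", "carol")}

def ancestors(parent_facts):
    # Rule 1: ancestor(X, Y) :- parent(X, Y).
    derived = set(parent_facts)
    while True:
        # Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        new = {(x, z)
               for (x, y) in parent_facts
               for (y2, z) in derived
               if y == y2} - derived
        if not new:            # fixpoint reached: nothing new derivable
            return derived
        derived |= new

print(ancestors(parent))
# ("alice", "carol") is derived transitively from the two base facts
```

The point of the "bounded set of possible states" remark is visible here: the rules only ever add facts over a finite universe, so the loop must terminate.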
If you haven't watched it, check out William Byrd on "The Most Beautiful Program Ever Written": https://www.youtube.com/watch?v=OyfBQmvr2Hc
It's the above and deriving it. It is fascinating to think of it as the essence of a language in 13 lines.
There's an amazing talk about it: https://www.youtube.com/watch?v=OyfBQmvr2Hc

(define eval-expr
  (lambda (expr env)
    (pmatch expr
      [`,x (guard (symbol? x)) (env x)]
      [`(lambda (,x) ,body)
       (lambda (arg)
         (eval-expr body (lambda (y)
                           (if (eq? x y) arg (env y)))))]
      [`(,rator ,rand)
       ((eval-expr rator env) (eval-expr rand env))])))
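For readers who don't speak Scheme, the same three-clause shape can be sketched in Python (a rough, illustrative port, not the talk's code): variable references, one-argument lambdas, and applications, with environments represented as functions from names to values, just as in the talk.

```python
# A rough Python port of the three-clause evaluator (illustrative only).
# An environment is a function from variable names to values.
def eval_expr(expr, env):
    if isinstance(expr, str):                 # variable: look it up
        return env(expr)
    if expr[0] == "lambda":                   # ("lambda", ("x",), body)
        _, (x,), body = expr
        return lambda arg: eval_expr(
            body, lambda y: arg if y == x else env(y))
    rator, rand = expr                        # application: (f, arg)
    return eval_expr(rator, env)(eval_expr(rand, env))

def empty_env(y):
    raise NameError(y)

# ((lambda (x) x) (lambda (y) y)) evaluates to the identity closure:
f = eval_expr((("lambda", ("x",), "x"), ("lambda", ("y",), "y")), empty_env)
```

Extending the environment is done exactly as in the Scheme version: by wrapping the old environment function in a new one that checks the bound name first.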
⬐ lioeters
This is exactly what came to mind when seeing the question. I was just recently (re-)reading an article that goes in depth:
Lisp as the Maxwell’s equations of software
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equat...
⬐ chombier
Once you've seen this one, it's difficult not to rank it first :)
⬐ quickthrower2
Y Combinator?
⬐ estomagordo
Great. In 90 minutes I may be able to know what you're talking about.
⬐ estomagordo
Is it a complete lisp interpreter, written in lisp?
⬐ _0ffh
Nah, I'd guess it's the metacircular evaluator, but don't take my word for it. I'm too tired to care to parse it in detail right now.
⬐ Globz
Yes, it uses pmatch behind the scenes to match your expr against the evaluator. https://github.com/webyrd/quines/blob/master/pmatch.scm
I have to agree, this ranks #1 in my book!
As for what miniKanren is, there are a couple of books (one being Byrd's dissertation) and a lot of papers. I think maybe the best summary I've read is actually the interview transcript at https://www.infoq.com/interviews/byrd-relational-programming.... But also, there's a summary on http://minikanren.org/, with links to the papers and talks, and an older summary on https://kanren.sf.net/. Most of these are a bit hard to understand the significance of if you aren't already familiar with logic programming. The talk on https://www.youtube.com/watch?v=OyfBQmvr2Hc is a popular description of one of the most astonishing results; I haven't watched it, but people tell me it's good.

As for understanding academia, yeah, I don't know.
I watched a fascinating presentation, and when I saw the speaker typing (lambda ...) and having it change to (λ ...) and not having to deal with parens... I was jealous. Does emacs do this?
EDIT: the presentation:
William Byrd on "The Most Beautiful Program Ever Written"
⬐ dTal
Some languages understand λ directly (and in all lisps that support unicode, you can explicitly alias it); I've occasionally mapped AltGr+L to λ when playing with those. It's really nice.

APL perhaps goes a little far, but I'd like to see more standardized symbols in programming languages. Imagine how nice it would be to succinctly use things like the actual outer-product operator ⊗...
⬐ m463
> APL perhaps goes a little far

When I learned APL long ago in a languages course, it was on a dedicated APL setup with an APL keyboard.
In that context, it was a "how SHOULD things be" sort of design decision. I know I quickly started off thinking in terms of math and the language, not composing symbols.
Of course, APL keyboards have gone the way of the dodo, and so the language has a higher barrier to entry.
⬐ vindarel
You can also use shorter lambda forms: https://github.com/CodyReichert/awesome-cl#lambda-shorthands
⬐ Jim_Heckler
Yes, M-x prettify-symbols-mode; it's very cool. For parens, there's the built-in electric-pair-mode, paredit, smartparens, and lispy.
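The aliasing trick discussed above isn't Lisp-specific; any language whose identifiers admit Unicode letters allows it. A purely illustrative Python example:

```python
# Python 3 identifiers may be Greek letters, so λ can simply be
# bound as a name for an anonymous function:
λ = lambda x: x + 1
print(λ(41))  # 42
```

This is only a cosmetic alias, of course; unlike prettify-symbols-mode, the source file itself must contain the λ character.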
If you're interested in the work that led to this, definitely check out Kenichi Asai's reflective languages. The code for his Black language with reflective semantics is reproduced here: [0]

More tangential but also cool: this talk [1] by William Byrd mentions reflective towers, but jumps into a discussion of what he terms "relational programming". It's a demo of his Barliman smart editor system: [2]
[0] https://github.com/readevalprintlove/black
⬐ keithnz
Alan Kay said (https://queue.acm.org/detail.cfm?id=1039523):

(SF) If nothing else, Lisp was carefully defined in terms of Lisp. (AK) Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were “Maxwell’s Equations of Software!” This is the whole world of programming in a few lines that I can put my hand over.
⬐ ngcc_hk
⬐ kazinator
Hard to read when quoted like this.
⬐ WalterGR
Preformatted text on HN is really hard to read on mobile. Reformatted:

(SF) If nothing else, Lisp was carefully defined in terms of Lisp.
(AK) Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were “Maxwell’s Equations of Software!” This is the whole world of programming in a few lines that I can put my hand over.
Page 13 of that manual is page 21 of this PDF: http://www.softwarepreservation.org/projects/LISP/book/LISP%...
Lisp-in-Lisp interpreters actually have a marginal practical use. Suppose you've bootstrapped a Lisp in some other language (assembly, C, or whatever). You have an interpreter in that language, and have written a compiler for that Lisp in itself (say, for a VM in that other language). You can now write an interpreter for that Lisp in itself. Once that interpreter behaves exactly like the original one in the other language, you can throw the original away. Bootstrapping can proceed by shipping the interpreter in compiled form. The benefit is that there is less language-processing code written in the other language.

You could just throw the interpreter away and have a compiler only (bootstrapped with the compiled version of itself), but then it's useful to have two models of execution for programs.
⬐ bitwize
This is how we got the first working Lisp implementations. John McCarthy specified Lisp in itself, and a graduate student translated it into IBM 704 assembly.
⬐ getpost
...Namely, an interpreter for lisp written in lisp. William Byrd learned about it from Daniel P. Friedman (The Little Lisper, The Little Schemer, ...)
⬐ ncmncm
At great risk of being downvoted to oblivion (before you pile on, check your motives): this struck me on first encounter, 30 years ago, and ever more strongly since, as the most self-indulgent of all programs. People impressed by it invariably appear, primarily, pleased with themselves for the depth of perception they demonstrate by being impressed with it.

Comparing it to Maxwell's equations sharpens this. The equations are deeply illuminative of fundamental processes of reality, summarizing comprehensively and precisely decades of exacting experimentation by the best physicists of the age. The speed of light (in vacuum, and in any medium) is simply derived from them. No detail of them could be deduced without abstraction from all the experience of all who came before.
How much work went into this program? Would any two people, who just knew Lisp, but no theory of computation, asked to write it, write it differently? How long could it take? An hour, a week, a month? How much insight would it take? How much lived experience does it represent?
Think it through: A Lisp interpreter in Lisp. It demonstrates that, if you are handed a Lisp runtime, as we are today, it is easy to make it execute Lisp programs. With small changes, it runs slightly different programs.
Should that impress us? Where did this Lisp runtime come from, and why would one expect it to be tuned for anything but running Lisp programs? If it were not good for running Lisp programs, what would it be good for?
The program relies, fundamentally, on all the science (and, frankly, engineering) that came before, but illuminates none of it. It is a fleck of foam that rides determinedly at the surface, belying the depths that support it. It illuminates nothing beyond itself.
⬐ kazinator
> Where did this Lisp runtime come from, and why would one expect it to be tuned for anything but running Lisp programs?

In this case, it's running a program which itself very easily runs Lisp programs given to it as a source-code data structure. That we normally don't expect. For instance, the C language run-time, though it of course supports running compiled C programs, doesn't provide a straightforward way for one of those programs to itself process C programs. A C program that interprets a decent subset of C is pretty complicated. It has to provide a number of data structures, of course a parser (because C programs don't have a data-structural representation), and memory management for parse time and run time and so on. The C-processing C program can't even ask for a function by name so that it could resolve a function call out to the native run-time. This is possible with some operating system extensions (dynamic symbols). That just solves the name-resolution problem: then there is the problem of dynamically constructing the argument list for a function call and executing it. This requires a library not supplied in the C language, such as GNU ffcall or libffi. Think about it: a C interpreter for C needs to use an FFI mechanism (first F == foreign!) to expose the C run time to the interpreted C, and that FFI is far from a standard language feature. Lisp just resolves a symbol and calls apply and that's it; nothing is foreign: no visas have to be issued, no passports are stamped.
⬐ ncmncm
C is an interesting contrast. There is no "C runtime". There is just the target machine. A C interpreter in C has no advantage over one in any other language, because every detail of the source language of the interpreter is washed away in compilation, along with the compiler.

In Lisp, everything is still there. "Resolves a symbol" relies on the runtime. "Calls apply" relies on the runtime. You don't need to write those, because you inherited them.
⬐ kazinator
In fact, in a hosted conforming implementation of ISO C, there is a considerable run-time, but it doesn't have anything useful in it for building a low-effort C interpreter.

apply is just a function in Lisp. The host Lisp itself doesn't (necessarily) use apply for calling functions. When an expression like (cons a b) in the host Lisp is processed, it's quite likely compiled into machine code or byte code that doesn't use apply.
The hosted interpreter uses apply because it operates at the host Lisp's run-time; with respect to that host, it's handling function calls by dynamically constructing lists of evaluated arguments and then having to solve the problem of applying a list of arguments to a function.
A C run-time could provide apply. Libraries exist for this like GNU ffcall, and libffi. They are low-level, hard to use and error-prone compared to apply.
A C run-time can provide a way to get the addresses of functions also; we can do this in modern POSIX environments with dlsym, in Win32 with GetProcAddress and such.
If we put together libffi and dlsym to call a function, and make the slightest mistake, we have corruption or a crash on our hands. Whereas apply is robust; we get diagnostics about too many or too few arguments and so on.
If ISO C provided these features, then we could have ISO C "C-in-C" meta-circular interpreters. They would contain a significant lines-of-code expenditure compared to Lisp-in-Lisp.
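The two primitives this thread says C lacks, resolving a function by name at run time and applying a dynamically built argument list, can be shown in a few lines of any dynamic language. An illustrative Python sketch (the helper names are my own, and math is used only as a stand-in for "the host run-time"):

```python
import math

# Lisp's pattern: resolve a function by name at run time, then apply a
# dynamically constructed argument list to it. In C this takes dlsym
# plus an FFI library like libffi; here it is two trivial functions.
def resolve(name):
    return getattr(math, name)      # name -> function (the "dlsym" step)

def apply_fn(fn, args):
    return fn(*args)                # argument list -> call (the "apply" step)

print(apply_fn(resolve("pow"), [2, 10]))   # 1024.0
```

And, as kazinator notes for Lisp's apply, a wrong argument count here raises a clean TypeError rather than corrupting memory.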
Well, I think lisper nailed it in the article: it's not just that the code is represented as data structured in linked lists, it's that that data exposes the semantic structure of the code in ways that are more pragmatically useful (among other nice properties) than char strings.

When you get into the LISP interpreter itself, which is just a linked-list structure describing how to process linked-list structures (like itself) as code, well... Here's William Byrd on "The Most Beautiful Program Ever Written" (ye LISP interpreter): https://www.youtube.com/watch?v=OyfBQmvr2Hc
William Byrd's work in program synthesis is an interesting take on this, where an IDE can, guided by test cases written for a function, actually auto-complete code itself while the programmer is writing the function, or tell the programmer when they have something wrong by violating a test case. Of course it is impractical right now, but a good step nonetheless. It (Barliman) is demoed near the end of this video: https://youtu.be/OyfBQmvr2Hc
⬐ mrkgnao
MagicHaskeller[0] (which seems to sadly be offline at present) is a similar project that infers Haskell functions from properties like

reverse "abcde" = "edcba"

Other, older projects include Exference[1] and Djinn[2], in decreasing order of power.
Also, this is really similar to how Idris and Agda work, except they use expressive types to generate the code (using the Emacs modes) rather than test cases.
[0]: http://nautilus.cs.miyazaki-u.ac.jp/cgi-bin/MagicHaskeller.c...
⬐ zmonx
This is indeed an extremely interesting approach. It was pioneered in the context of logic programming and Prolog by Ehud Shapiro in his 1982 PhD thesis "Algorithmic Program Debugging". The thesis was published as an ACM Distinguished Dissertation by MIT Press and is available online from:
http://cpsc.yale.edu/sites/default/files/files/tr237.pdf
Together with Leon Sterling, Ehud Shapiro later wrote a very important introductory Prolog book called "The Art of Prolog".
⬐ timonoko
Spoiler: a pattern-matcher which matches 3 forms: "X", "(lambda (X) Y)", and "(X Y)". Visible at 55:48.
Looks a lot like this:
⬐ agumonkey
My first thought. I wonder how much logic programming is in suggest.el... if it's full-fledged then I'll be damned.
⬐ wcummings
Sounds like it's brute forcing elisp fns.
⬐ agumonkey
Well, prolog is a kind of brute force too
⬐ zmonx
A major attraction of (pure) Prolog is that you can easily reorder goals and use all-solutions predicates to try different search strategies. Further, constraints like dif/2 help considerably to prune the search space. So yes, it can be brute force, but is often better than that.
⬐ agumonkey
Sure, but there's no magic here when you want to explore the space of all possibilities. I'm a prolog noob and barely knew about dif
⬐ zmonx
Yes, I agree. Still, the key insight of constraints like dif/2 may also be applicable in this concrete case to limit the candidates already before and even during the search. It is this automatic pruning that makes logic programming with constraints so fast for many combinatorial applications. You are right of course that it may end up in brute force though, if there is nothing to prune.
⬐ agumonkey
I have yet to read about CLP; I'm eager now.
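The brute-force-with-pruning idea the thread circles around can be sketched in a few lines of Python. This is a toy, not miniKanren, suggest.el, or Prolog; the primitive set is invented for illustration: enumerate candidate programs shortest-first, rejecting each candidate as soon as any example contradicts it.

```python
from itertools import product

# Toy synthesis by enumeration: search pipelines of a few primitive
# functions for one whose behavior matches all given (input, output)
# examples. The primitives are purely illustrative.
PRIMS = {"inc": lambda x: x + 1,
         "dbl": lambda x: x * 2,
         "neg": lambda x: -x}

def synthesize(examples, depth=3):
    # Enumerate pipelines of primitives, shortest first.
    for n in range(1, depth + 1):
        for names in product(PRIMS, repeat=n):
            def run(x, names=names):
                for name in names:
                    x = PRIMS[name](x)
                return x
            # "Pruning" in the crudest sense: reject a candidate as
            # soon as any example fails.
            if all(run(i) == o for i, o in examples):
                return names
    return None

print(synthesize([(1, 4), (3, 8)]))   # ('inc', 'dbl'), i.e. (x + 1) * 2
```

Real systems like miniKanren do far better than this by propagating constraints during the search rather than testing finished candidates, which is exactly the dif/2 point made above.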