Hacker News Comments on A Sudoku Solver in APL
DyalogLtd · YouTube · 28 HN points · 13 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

my favorite of his videos is the sudoku solver: https://www.youtube.com/watch?v=DmT80OseAGs
I've always liked this video, where one of the Dyalog developers effortlessly writes a Sudoku solver in APL:
A Sudoku Solver in APL https://www.youtube.com/watch?v=DmT80OseAGs
⬐ jodrellblank
More appropriately, in Prolog (by Markus Triska): https://youtube.com/watch?v=5KUdEZTu06o
⬐ speedgoose
Very nice, and I can understand this one.
⬐ cgh
Thanks for this link. I took a course on logic programming in school and it made a big impression. I found Prolog to be pretty mindblowing at the time and I'm happy to see it still is.
A Sudoku Solver in APL
> My point was that general programming usually looks more like writing the sorting algorithm than simply calling it. Thus, I would be more interested in seeing real examples of how the properties of J allow you to write algorithms more concisely or otherwise better.

Ah - I see. Here[1] is quicksort in J. John Scholes's videos are excellent and very illustrative, though they deal with APL rather than J: Conway's Game of Life[2] and sudoku[3].
> The article showed how J solves some mathematical problems with built in functions or their built-in modifiers. But in general programming, you won't find a function that does exactly what you wanted, maybe with a little adverb. You'll usually have to write that function yourself, to describe how it works in terms of some of those pre-existing functions.
Right. This is true, but you generally stay much closer to the builtins in J than you do in other languages.
> Note that your second example could be expressed as something like
> x.OrderBy(y => y[1])
> In C#, assuming x is int[N][M]. I understand that J's version can work with many other shapes of X, but that is hard to appreciate without seeing example of how that comes in handy when writing some other (hopefully well known) algorithm.
Right; you mentioned sort predicates, so I was giving an example of a way in which you could sort according to another relation. My point is: being able to change the layout of your data so easily can obviate sort predicates. (Though I do think a sort adverb taking a predicate would be cool.)
1. https://code.jsoftware.com/wiki/Essays/Quicksort
2. https://www.youtube.com/watch?v=a9xAKttWgP4
3. https://www.youtube.com/watch?v=DmT80OseAGs
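The point about data rearrangement versus sort predicates can be sketched in Python (the data here is made up, and this mirrors the `x.OrderBy(y => y[1])` example from the thread rather than any actual J code):

```python
# Sorting rows by their second column, two ways.

rows = [[3, 9], [1, 4], [2, 7], [5, 1]]

# 1. Key-based sort: extract a key per row; no comparison predicate needed.
by_second = sorted(rows, key=lambda row: row[1])

# 2. Index sort ("grade" in APL/J terms): compute the ordering permutation
#    of the second column alone, then apply that permutation to the rows.
second_col = [row[1] for row in rows]
order = sorted(range(len(rows)), key=second_col.__getitem__)
by_grade = [rows[i] for i in order]

assert by_second == by_grade == [[5, 1], [1, 4], [2, 7], [3, 9]]
```

The second form is the one that generalizes: once you have the grade (the ordering permutation), you can apply it to any other array of the same length, which is how sorting one thing by another works in APL and J.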
> I'm very much a believer in concise notation (more than most people I've worked with), but the advantages of having a name that can be unambiguously spoken are large enough to outweigh a small constant factor.

I don't believe this holds up to scrutiny. APL as a language was made to be spoken, and it seems to have succeeded in that. Alan Perlis briefly touched on that in a great paper,[1] and just about everyone who's worked with the language and its derivatives agrees that your statement isn't true.
The idea that APL and its derivatives can't be "read and talked about" frankly ignores reality.
For example, here are [2] [3] [4] some videos disproving your statement, the first two being APL, the last being k itself.
For extra bonus points, see Aaron Hsu's talks [5]; pick any of them, they're all pretty good, and most have a segment in them explaining how that viewpoint is wrong.
[1] https://www.jsoftware.com/papers/perlis77.htm
[2] https://www.youtube.com/watch?v=a9xAKttWgP4
[3] https://www.youtube.com/watch?v=DmT80OseAGs
⬐ lmm
> I don't believe this holds up to scrutiny. APL as a language was made to be spoken, and it seems to have succeeded in that. Alan Perlis briefly touched on that in a great paper,[1] and just about everyone who's worked with the language and its derivatives agrees that your statement isn't true.

The people who have worked with the language are a small and highly specialised minority of programmers, unusual in many respects. I haven't worked with APL, but I have worked with symbol-heavy Haskell-style code, and I've found that even if these operators theoretically had standardised, pronounceable names, in practice being able to talk about them was a genuine problem. If there's some special difference between APL and symbolic Haskell libraries, what is that difference?
⬐ kick
Alan Perlis is a Turing Award-winner who helped to standardize the most influential language of all time, ALGOL, from which Go, Python, C and most languages descend. In many respects, he was closer to a Go programmer than a Haskell programmer.

Haskell's symbols aren't notation, they're just scribbles. Miranda is a better comparison, but Miranda too doesn't lean hard enough.
Ken Iverson literally developed APL as a way to give better lectures at Harvard. It was a verbal and handwritten language long before it was typed. It excels at it, because it was designed for it.
⬐ triska
Very interesting! A textual version is available from:
https://dfns.dyalog.com/n_sudoku.htm
The solution uses direct functions (dfns) that are provided by several APL implementations.
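For readers who want the shape of the algorithm without reading APL: below is a minimal backtracking solver sketched in Python. This is only a rough analogue of the recursive search in the dfns solution, not a translation of it.

```python
# Minimal backtracking Sudoku solver: find the first empty cell, try each
# digit that doesn't conflict with its row, column, or 3x3 box, and recurse.

def solve(grid):
    """Solve a 9x9 Sudoku given as a flat list of 81 ints (0 = empty).
    Returns a solved copy of the grid, or None if no solution exists."""
    try:
        i = grid.index(0)                  # first empty cell
    except ValueError:
        return grid[:]                     # no empties left: solved
    row, col = divmod(i, 9)
    br, bc = (row // 3) * 3, (col // 3) * 3  # top-left of the 3x3 box
    for digit in range(1, 10):
        # Skip digits already present in the row, column, or box.
        if any(grid[row * 9 + c] == digit for c in range(9)):
            continue
        if any(grid[r * 9 + col] == digit for r in range(9)):
            continue
        if any(grid[(br + r) * 9 + bc + c] == digit
               for r in range(3) for c in range(3)):
            continue
        grid[i] = digit
        result = solve(grid)
        if result is not None:
            return result
        grid[i] = 0                        # undo and try the next digit
    return None                            # dead end: force backtracking
```

The APL version gains its brevity by expressing the row/column/box constraint checks as whole-array operations rather than the explicit loops used here.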
For a mind-melting good time, check out this sudoku solver written in APL: https://www.youtube.com/watch?v=DmT80OseAGs
That 1975 APL demonstration is lovely. Much has been preserved in Dyalog and GNU APL - it's very interesting to see how timeless the core APL functions are.

I'm very fond of a number of videos by the late John Scholes narrating APL solutions step by step, including the well-known demonstration of Conway's Game of Life in APL (0). His videos on depth-first search (1) and solving sudoku (2) are also brilliant.
(0) https://www.youtube.com/watch?v=a9xAKttWgP4
(1) https://www.youtube.com/watch?v=DsZdfnlh_d0
(2) https://www.youtube.com/watch?v=DmT80OseAGs
APL is great.

If you haven't seen them, the late John Scholes did a number of absolutely beautiful demonstrations programming in APL[1], [2], [3].
[1]: https://www.youtube.com/watch?v=DsZdfnlh_d0
⬐ jodrellblank
For anyone looking to get into APL, Adám at Dyalog posts a lot of answers on the Code Golf and Programming Puzzles StackExchange site, often with expanded/explained code, and he hangs out in the "APL Orchard" chatroom on SO. His profile says "always excited to [talk about] APL" and that seems to be genuine: https://codegolf.meta.stackexchange.com/users/43319/ad%C3%A1...

The most public APL dialect using the classic symbols (not J/K/Q/etc.) seems to be Dyalog APL. They offer one of their older versions, 14, free for non-commercial use (Windows), and their current version, 17, free for non-commercial use if you give them your full details (maybe Linux as well?). There is a book, "Mastering Dyalog APL" by Bernard Legrand from ~2008, which is available as a PDF with companion files from https://www.dyalog.com/mastering-dyalog-apl.htm although they have added to the language since that book was released.
There are other APL interpreters - GNU APL, NARS2000, ngn/apl, dzaima/apl (Java/Android) - in various states of development and under various license conditions, and the developers of the last two are also sometimes in the APL Orchard chatroom on StackExchange.
Online there's https://tryapl.org/ run by Dyalog, and https://tio.run/# which has three APL interpreters.
The classic 1970s book "APL\360: An Interactive Approach" by Gilman and Rose is online as a PDF here, along with other books: http://www.softwarepreservation.org/projects/apl/Books/ That's from before the {} syntax for anonymous functions was added to the language, among other developments.
Videos:
Jay Foad, while he was at Dyalog, talking through some Advent of Code puzzles with APL: https://dyalog.tv/Webinar/?v=Q_vgSN6rza0 (you might want to skip the introduction explaining the Advent of Code puzzle format).
Aaron Hsu with Dhaval Dalal and Morten Kromberg, talking through some problems using APL https://www.youtube.com/watch?v=Gsj_7tFtODk
And anything that you get from YouTube searching for Aaron Hsu, he has several talks / presentations from high level ideas to working with trees represented using arrays.
These are generally pretty popular:
I've noticed that solving a sudoku is something of a rosetta stone for how people program. Norvig's solution is very straightforward in hindsight, but I think that code was either the result of many iterations or of a career of making progressively more readable code.

Here are some other sudoku solvers that give more insight into the coder than the problem:
Sudoku in APL: https://www.youtube.com/watch?v=DmT80OseAGs
Test driven sudoku: http://ronjeffries.com/xprog/articles/sudokumusings/
⬐ amouat
A lot was discussed about Norvig's elegant solution compared to Ron Jeffries' effort and what it said about TDD: http://ravimohan.blogspot.co.uk/2007/04/learning-from-sudoku...

To be fair, the reason Norvig was so successful was that he knew exactly what sort of problem sudoku was and how to solve it, whereas Ron lacked this background knowledge.
⬐ voyou
The odd thing is that, on the evidence of those blog posts, Jeffries doesn't appear to understand TDD. If you're using TDD to write a sudoku solver, surely the first thing you should write is a test for your "solve_sudoku" function; then your design is supposed to evolve out of the code you write to pass that test (and the further tests you write for parts of that solution, iteratively). But Jeffries starts by writing a bunch of tests for the low-level details of the representation of a sudoku board. There isn't really any "design" in those posts at all; just a guess at something you might use in a solution to the problem.
⬐ romaniv
I think you're missing the point. TDD forces you to choose data representations that best fit your tests. But in most cases of complex programming you want to choose representations that reflect your current understanding of the problem and then evolve them as your understanding evolves. In TDD your test will always predate your understanding. So it forces you to solve (part of) the problem in your head first, rather than combining the acts of problem-solving and writing. And you do this over and over again. And when you change your mind about something, you have twice the code to refactor.
It's a difference between recording your thinking in code and actually thinking in code.
This reminds me of an old article that exemplifies this: http://insideofthebox.tumblr.com/post/52002125683/prime-fact...
The prime factors kata is often used as an introductory example of TDD, yet it is clearly impractical and produces garbage code. Not to mention that I've actually seen people do it, stumble, and hectically reach for their notes to see what the next step is. And it's a pretty darn simple problem.
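For reference, the target of the prime-factors kata mentioned above is tiny. Here is a direct sketch of the function in Python (trial division; this is an illustration of the problem, not the end state of any particular TDD session):

```python
# Prime factorization by trial division: divide out each candidate factor
# completely before moving on, so only primes ever get appended.

def prime_factors(n):
    """Return the prime factorization of n >= 2 as a list, smallest first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains after division is prime
    return factors
```

For example, `prime_factors(360)` yields `[2, 2, 2, 3, 3, 5]`. Written directly like this, the whole kata is about a dozen lines, which is part of the argument above: the ceremony of growing it test by test dwarfs the problem itself.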
⬐ amouat
Err, I'm missing the point of my own point? You raise an interesting point, but I don't think TDD was what stumped Ron. It's not a simple problem to solve without a lot of thought unless you have the requisite background knowledge, like constraint problems. TDD did, however, have him going round in circles, partly for the reasons you describe.
⬐ nicholas73
This article was my crash course in computer science (I come from another engineering field). I learned so much by studying this code, and understanding it allowed me to write a sudoku generator. The generator algorithm is much different from solving, but the basic structure was already there for me.

In any case, I'm replying to your comment because it applies to me. Norvig's code is compact and elegant, whereas mine is spaghetti code at best. I basically have lots of control statements, as there are many, many processing steps that I painstakingly discovered as I went along (thousands of lines). Heck, I can barely remember them myself.
Here's my site: http://sudokuisland.com
I would rather see an APL video https://www.youtube.com/watch?v=DmT80OseAGs