HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Math's Fundamental Flaw

Veritasium · Youtube · 77 HN points · 17 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Veritasium's video "Math's Fundamental Flaw".
Youtube Summary
Not everything that is true can be proven. This discovery transformed infinity, changed the course of a world war and led to the modern computer. This video is sponsored by Brilliant. The first 200 people to sign up via https://brilliant.org/veritasium get 20% off a yearly subscription.

Special thanks to Prof. Asaf Karagila for consultation on set theory and specific rewrites, to Prof. Alex Kontorovich for reviews of earlier drafts, Prof. Toby ‘Qubit’ Cubitt for the help with the spectral gap, to Henry Reich for the helpful feedback and comments on the video.

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
References:

Dunham, W. (2013, July). A Note on the Origin of the Twin Prime Conjecture. In Notices of the International Congress of Chinese Mathematicians (Vol. 1, No. 1, pp. 63-65). International Press of Boston. — https://ve42.co/Dunham2013

Conway, J. (1970). The game of life. Scientific American, 223(4), 4. — https://ve42.co/Conway1970

Churchill, A., Biderman, S., Herrick, A. (2019). Magic: The Gathering is Turing Complete. ArXiv. — https://ve42.co/Churchill2019

Gaifman, H. (2006). Naming and Diagonalization, from Cantor to Godel to Kleene. Logic Journal of the IGPL, 14(5), 709-728. — https://ve42.co/Gaifman2006

Lénárt, I. (2010). Gauss, Bolyai, Lobachevsky–in General Education?(Hyperbolic Geometry as Part of the Mathematics Curriculum). In Proceedings of Bridges 2010: Mathematics, Music, Art, Architecture, Culture (pp. 223-230). Tessellations Publishing. — https://ve42.co/Lnrt2010

Attribution of Poincare’s quote, The Mathematical Intelligencer, vol. 13, no. 1, Winter 1991. — https://ve42.co/Poincare

Irvine, A. D., & Deutsch, H. (1995). Russell’s paradox. — https://ve42.co/Irvine1995

Gödel, K. (1992). On formally undecidable propositions of Principia Mathematica and related systems. Courier Corporation. — https://ve42.co/Godel1931

Russell, B., & Whitehead, A. N. Principia Mathematica [PM], vol. I (1910), vol. II (1912), vol. III (1913); 2nd ed. vol. I (1925), vols. II & III (1927); Paperback Edition to *56. Cambridge UP. — https://ve42.co/Russel1910

Gödel, K. (1986). Kurt Gödel: Collected Works: Volume I: Publications 1929-1936 (Vol. 1). Oxford University Press, USA. — https://ve42.co/Godel1986

Cubitt, T. S., Perez-Garcia, D., & Wolf, M. M. (2015). Undecidability of the spectral gap. Nature, 528(7581), 207-211. — https://ve42.co/Cubitt2015

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Special thanks to Patreon supporters: Paul Peijzel, Crated Comments, Anna, Mac Malkawi, Michael Schneider, Oleksii Leonov, Jim Osmun, Tyson McDowell, Ludovic Robillard, Jim buckmaster, fanime96, Juan Benet, Ruslan Khroma, Robert Blum, Richard Sundvall, Lee Redden, Vincent, Marinus Kuivenhoven, Alfred Wallace, Arjun Chakroborty, Joar Wandborg, Clayton Greenwell, Pindex, Michael Krugman, Cy 'kkm' K'Nelson, Sam Lutfi, Ron Neal

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Executive Producer: Derek Muller
Writers: Adam Becker, Jonny Hyman, Derek Muller
Animators: Fabio Albertelli, Jakub Misiek, Ivy Tello, Jonny Hyman
SFX & Music: Jonny Hyman
Camerapeople: Derek Muller, Raquel Nuno
Editors: Derek Muller
Producers: Petr Lebedev, Emily Zhang
Additional video supplied by Getty Images
Thumbnail by Geoff Barrett

▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I know nothing of OP, or the professor he talked to, but I think they might be talking about the higher level concepts of math.

For example, set theory. If set theory allowed self-reference you could have a set R, the set of all sets that do not contain themselves. Would R contain R? It can’t, but it can’t not, either. It’s self-contradictory. The solution, reached through consensus, was to restrict the definition of a set so that sets cannot contain themselves [1].

[1] https://youtube.com/watch?v=HeQX2HjkcNo
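The self-reference problem described above can be sketched in a few lines of Python (a toy model, not real set theory): treat a "set" as a membership predicate, define Russell's R, and watch the question "does R contain itself?" fail to resolve.

```python
# Toy model: a "set" is a predicate s, where s(x) is True when x is a
# member of s. Russell's R contains exactly those sets that do not
# contain themselves.
R = lambda s: not s(s)

# Asking whether R contains itself forces R(R) = not R(R): the
# evaluation never bottoms out, mirroring the paradox.
try:
    result = R(R)
except RecursionError:
    result = "paradox: R(R) can be neither True nor False"
print(result)
```

The infinite recursion is the computational shadow of the logical contradiction: any answer for R(R) immediately demands the opposite answer.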

graycat
The way I learned how to resolve the Russell paradox: Over here on the left in a pile we have the elements we will work with. Now, for the sets, they are made of the elements and are in a separate pile, are over here on the right. Sooo, with this little preliminary step, there is no way to consider the set of all sets that are not elements of themselves.

As I recall, there was a paper by Robert Tarjan where he observed that most paradoxes come from self-reference. Soooo, rule out self-reference and we rule out most paradoxes!

Veritasium: Math's Fundamental Flaw

https://www.youtube.com/watch?v=HeQX2HjkcNo

Veritasium - Math's Fundamental Flaw https://youtu.be/HeQX2HjkcNo

At 13:46 it gets into "is there a way to prove every true statement?" and furthermore goes into the Gödel math for "there is no proof for the statement with Gödel number g", where that statement itself has Gödel number g.

The entire video is a good watch though.

And the related problem is that, with Gödel, we don't know whether math is consistent either.
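The phrase "the statement with Gödel number g" refers to Gödel's trick of encoding formulas as numbers. Here is a minimal sketch of one standard scheme (the symbol codes below are made up for illustration): assign each symbol a code, then multiply ascending primes raised to those codes.

```python
# Hypothetical symbol codes for a toy alphabet (illustrative only).
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def first_primes(n):
    """Return the first n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ..., following Goedel's scheme."""
    codes = [SYMBOLS[ch] for ch in formula]
    n = 1
    for p, c in zip(first_primes(len(codes)), codes):
        n *= p ** c
    return n

# "0=0" encodes as 2^1 * 3^3 * 5^1 = 270. Unique prime factorization
# makes the encoding reversible, so statements about numbers can, in
# effect, talk about statements.
print(godel_number("0=0"))  # → 270
```

Because the encoding is invertible, predicates like "is a proof of" become ordinary arithmetic predicates on numbers, which is what lets a statement refer to its own Gödel number.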

Going through a shitty phase (which happens often) I've been trying to cope in various ways, reaching high and low, inside and outside, for meaning and purpose and guidance.

I don't subscribe to most existing models, not out of a rebellious choice but because I cannot find or follow their logic to a satisfactory conclusion.

So far this is what I have, make of it what you will:

• There must be a default state, and a part of it must reside in everything built upon it. It doesn't make sense that something began from Nothing, but I like to think that something began from Everything.

• But it doesn't mean that a superset system can control a subset system 100% – Think of our relationship with computers and the programs running on/in them, along with all the physics involved and the virtual results.

• Veritasium's video on "Math's Fundamental Flaw" [0] was a good source of inspiration: Shit may be coded with hard immutable logic but still have virtually infinite possibilities.

• If you think your mind, or your "will" or "heart" or "soul", matters in any way and can make any difference, try to find the cutoff point between -where- that metaphysical quality starts, what sets "you" apart from everything else: What's the difference between you and every other person's mind/soul? Or a person's mind and a computer program? The difference between a world with no humans in it, and a world with 1 human? A barren asteroid and a planet with life? Can a rock access the mysteries of the Universe? Can you?

TL;DR: What you see right now may be what you will ever get, everything may just be circumstance, but there -just might- be a way to touch something greater.

[0]: https://www.youtube.com/watch?v=HeQX2HjkcNo&t=1772s

And yet it is provable that there are things we know we can’t know

https://youtu.be/HeQX2HjkcNo

I love Veritasium. Derek has done a number of videos that are awesome like this. The videos are great and educational, but he also manages to subtly weave an almost spiritual subtext into them as well. For example, in this one, you learn about imaginary numbers in some cool ways, history, how awesome e^ix is, etc., but also this yin-yang balance between embracing not only what is real but taking a chance on what is imaginary.

Some of my other favorites:

- https://m.youtube.com/watch?v=rhgwIhB58PA

- https://m.youtube.com/watch?v=HeQX2HjkcNo

- https://m.youtube.com/watch?v=OxGsU8oIWjY

- https://m.youtube.com/watch?v=3LopI4YeC4I

- https://m.youtube.com/watch?v=pTn6Ewhb27k

I think he’s going to lose the $10,000 bet on the car though.

kzrdude
I didn't click through all of them (mobile, too slow) so I don't know if it's in there, but one favourite I have is the one where he has an ice cube, a metal block and a plastic block and asks people on the street which block is colder.

It teaches a lot about the difference between temperature, feeling cold, and heat.

agumonkey
That's my favorite video of his. This one ties together history, education, simple geometry, and advanced abstractions in an entertaining script. Pretty high bar.
jasonwatkinspdx
> I think he’s going to lose the $10,000 bet on the car though.

Are you referring to the wind powered propeller car bet or is there a new one?

travisgriggs
Yes, the wind powered deal, but it appears it's actually been conceded by the physics guy. I guessed wrong. Glad I didn't bet. :D
jasonwatkinspdx
Yeah, it's a fascinating question and I can see why it trips people up. Anyone who's done sailing/windsurfing will be familiar with the phenomenon of apparent wind, and that's roughly what's going on here too.
lxe
Some popular educational/informational channels and podcasts make this patronizing assumption about the audience, watering everything down to make it "accessible". Veritasium doesn't do this, yet maintains huge popularity.
But what if the statement is "this statement can't be proven"? You haven't proven the statement, you've proven that it can't be proven.

See https://youtu.be/HeQX2HjkcNo?t=926 for instance which is (IMO) a fairly good explanation of the proof and why you end up with something both true and not provable.

Penrose's "Emperor's New Mind" also discusses this topic at length (within the context of whether human brains can be emulated). I wasn't convinced with his argument regarding AI but his overview of the problem of what's computable and what's not is rather good if a bit dense.

Filligree
> Penrose's "Emperor's New Mind" also discusses this topic at length (within the context of whether human brains can be emulated). I wasn't convinced with his argument regarding AI but his overview of the problem of what's computable and what's not is rather good if a bit dense.

As far as I'm aware physics is computable, so I don't buy that one. :)

At most one might argue that there are high-level optimizations to be had, which can't be used because they're uncomputable. In which case is it really an optimization?

simias
That's part of his argument: that some corners of quantum physics that still elude us (quantum gravity and all that) may be undecidable. He then goes on to explain how he thinks these phenomena could influence cognition.

I don't really buy this theory, but the book is basically 80% "here's what we know" and 20% "here's what I conjecture", so it's still a worthwhile read IMO. It's worth noting that it's a book from the late '80s, however, so it's a bit outdated in parts (in particular, he didn't know that the universe's expansion was accelerating, which renders his discussion of the curvature of the universe a bit obsolete).

I found this nice presentation about Gödel's theorems: https://youtu.be/HeQX2HjkcNo?t=895

"There is no proof for the statement with Gödel number g".

What I'm struggling with is trying to understand whether self-reference can be formally represented by some symbol. Can it?

Gödel's proof seems to rely on a trick that allows a formal system to talk about itself, but is that possible? There would need to be a symbol for that, and thus a symbol that refers to itself?

A picture of a pipe is not a pipe. Similarly, a representation of a proof is not a proof, is it? I'm not saying I believe I'm right and Gödel wrong, but it just feels strange to me that a formal system could ever "prove" anything about itself.

What does it mean that something is "true"? That it can be proven?
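One way to see that self-reference needs no special "self" symbol is a quine: a program that prints its own source using only ordinary string operations. Gödel's construction uses essentially the same diagonal trick, with arithmetic on Gödel numbers in place of strings.

```python
# A quine: the program contains a description of itself (the string src)
# plus an instruction to substitute that description into itself.
# No built-in "self" symbol is required.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Running it prints exactly its own two-line source. Gödel's sentence likewise contains an encoded description of itself combined with the predicate "is unprovable"; the self-reference emerges from substitution, not from any primitive symbol.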

zuminator
Not every picture of a pipe is a pipe but it's at least conceivable that an actual working pipe can be constructed out of something which happens to be a picture of a pipe. Similarly Godel isn't proving that all statements in a system are paradoxically self-referential. Only that given any system of sufficient complexity, one can conceivably be constructed.

In any event, all of our mathematical proofs are predicated upon the idea that the representation of a mathematical object can be treated in terms of proof as equivalent to the mathematical object. Is "2" the number 2 or is it just a glyph that non-representationally depicts the successor of the first counting number? And if you believe the latter, then how do you go about proving anything about the actual successor of the first counting number without resorting to glyphs?

galaxyLogic
> given any system of sufficient complexity,

That's my question. It seems that adding "self-reference" to the system makes it more complicated than it would be otherwise. So it's not just a system with enough complexity to let you do arithmetic; you need a system with enough complexity to also do self-reference. And I would think that self-reference is complicated.

Do we need self-reference in any areas of mathematics other than Gödel's and related proofs?

Aug 09, 2021 · gary17the on The 3x+1 Problem [video]
This was a great video. But the one titled "Math Has a Fatal Flaw"[1] was even more interesting: the metamathematics, that is, the mathematics of mathematics itself (Gödel).

[1] https://www.youtube.com/watch?v=HeQX2HjkcNo

feoren
I can't get over the clickbaity, obviously incorrect title to that one. I refuse to watch it for that reason alone. If the flaw is fatal, then every mathematician is wasting their time.
kadoban
If it's clickbait, it's just barely. It's not unreasonable in my opinion to call it a fatal flaw, it conclusively puts to death several things that one would wish were possible in math and computer science.

It's also just a really good video, with clear and understandable explanations of difficult to communicate concepts.

wodenokoto
It’s about the Gödel incompleteness theorem. I think it’s fair to say that’s as big a flaw in math as flaws get.

It says there are things in math that are true but unprovable, and we can’t know whether a given statement is unprovable.

rcxdude
well, the theorem basically did show that a huge number of mathematicians were wasting their time on an impossible task.
pgalvin
The video is very interesting, I would highly recommend you still watch it.

Derek has actually made a video before about the YouTube algorithm and has polled the community several times about clickbait-ey titles. He doesn’t like them, but they’re unfortunately necessary.

Sort of a side track, but there’s a great layman’s intro to this in this Veritasium video: https://m.youtube.com/watch?v=HeQX2HjkcNo
If you are mathematically inclined and looking to understand the proof itself, the book won't do much better than the following video [1], taken from a comment in this thread [2]. Disappointed, I attempted to read Gödel's original papers. That has its own challenges, as it refers to Principia Mathematica. I started reading that, only to find that it uses older mathematical notation not in use today.

In short, I haven't found any source that explains the proof in reasonable detail.

If you want to understand the thought process of axiomatic reasoning (and with fewer extrapolations to consciousness et al.), it is a good book.

Note: I read it a long time back, so I do not remember the details.

[1] https://www.youtube.com/watch?v=HeQX2HjkcNo

[2] https://news.ycombinator.com/user?id=littlecranky67

See Veritasium's video; he goes into depth about Gödel's theorems:

https://www.youtube.com/watch?v=HeQX2HjkcNo

ProfHewitt
Veritasium missed that existence of the [Gödel 1931] proposition I'mUnprovable leads to inconsistency in foundations by the following simple proof:

     Ever since Euclid, it has been a fundamental principle 
     that a theorem can be used in a proof (the principle of  
     TheoremUse), that is, {⊢((⊢Ψ)⇒Ψ)} [cf. Artemov and 
     Fitting 2019]. However, by [Gödel 1931], 
     ⊢(¬I’mUnprovable⇔⊢I’mUnprovable). Consequently, 
     ⊢(¬I’mUnprovable⇒I’mUnprovable) by TheoremUse. 

     • Therefore ⊢I’mUnprovable  using 
       ProofBySelfContradiction {⊢((⊢(¬Ψ⊢Ψ)) ⇒ ⊢Ψ)} with 
       Ψ being I’mUnprovable. 

     • Thus, I’mUnprovable using TheoremUse {⊢((⊢Ψ)⇒Ψ)} 
       with Ψ being I’mUnprovable. Consequently, 
       ⊬I’mUnprovable using ⊢(I’mUnprovable⇔⊬I’mUnprovable)
        
     Having both ⊢I’mUnprovable and ⊬I’mUnprovable is a 
     contradiction in foundations.
sillysaurusx
Dear Professor Hewitt,

I’m sorry for my comment at https://news.ycombinator.com/item?id=27537500. In hindsight, I realize now that it sounded like I was making fun of you. I didn’t intend to do that at all. I think your concerns about Wikipedia were surprising, and I only meant to express surprise.

Thank you for taking the time to patiently explain to everyone your ideas about improbability. It takes a lot of courage to present an argument the way you’ve been doing in these threads. And you present your proofs in mathematical form here, which is rare, and worthy.

(I’m posting this here simply because it’s your most recent comment, so that you’re likely to see it. I wasn’t quite sure where to put it.)

Have a wonderful week.

YeGoblynQueenne
Well done, dude.
ProfHewitt
Thank you very much sillysaurusx!
sillysaurusx
(I meant “Your ideas about provability,” not “Improbability.” My phone changed it, so it probably sounded like nonsense. Point is, it seems to me that your ideas about the incompleteness theorem might be true, and I am surprised and impressed that you’re in here explaining to everyone mathematically why they might be true. I don’t have the expertise to judge, but I recognize excellence when I see it. And your explanations are clearly excellent, because if they were mistaken, someone could check your math and point out the mistake. But no one has done that, so it seems to me you might be correct, even though everyone seems to think it’s guaranteed you’re wrong.)

If there’s somewhere I can follow your work in this area, I would be interested to see how it turns out. For example, if you’ve written a paper on this subject, or if there’s some place online where you post updates related to this.

Either way, good luck!

ProfHewitt
Thanks for your perceptive remarks!

The following article has more information:

https://papers.ssrn.com/abstract=3603021

Also, there are further articles linked here:

https://professorhewitt.blogspot.com/

If interested, you can follow @ProfCarlHewitt on Twitter.

For the really dedicated, there are a few tickets left for sale at the upcoming "Theory and Practice of Actors" workshop at StrangeLoop 2021:

https://www.thestrangeloop.com/2021/theory-and-practice-of-a...

Veritasium recently released a very good video that explains Gödel's incompleteness theorem such that even I could understand it: https://www.youtube.com/watch?v=HeQX2HjkcNo
rottc0dd
There is a great little book called "Gödel's Proof" by Nagel and Newman[1]. It definitely requires some focus, but is readable nevertheless.

[1] https://www.amazon.com/G%C3%B6dels-Proof-Ernest-Nagel/dp/081...

Edit : removed a line

ruste
I second this! I spent a few days of summer vacation in high school slowly working my way through this and it was mind-expanding.
alok-g
A good book as such. However, whereas the authors claim to outline the proof in chapter 6, that chapter fell short, with key aspects of the proof missing. Overall, I was disappointed.

chriswarbo
I knew Gödel's (first) incompleteness theorem pretty well already, but I really enjoyed how much that video managed to get across without getting too bogged down.

One thing I'd like to see in a companion video would be an explanation of why Turing machines represent computation. That video (like many others) skims over the why, and only talks about what they can/can't do after we've already decided to use them.

Turing's 1936 "On Computable Numbers" paper gives a nice philosophical justification for his model, which (from my reading) boils down to the following:

- Mathematicians can do their work within a finite volume of space (their brain, or a room, or the whole Earth if we want to be conservative). Also, they could (in principle) do their work using only written communication, on standard pieces of paper, each of which also has a finite volume.

- Finite volumes can only have finitely-many distinguishable states (having infinitely many states would require them to be infinitesimally similar, and hence indistinguishable from each other within a finite amount of time)

- Hence we can label every distinguishable state of a brain with a number; and likewise for every distinguishable piece of paper (at least in principle)

- Since these are both finite, any mathematician's behaviour could (in principle) be completely described by a table detailing how one state (brain + paper) leads to another

- Given such a table, the actual content of the states (e.g. the wavefunctions of particular protons, the electrical potential of particular synapses, the placement of ink molecules, etc.) is irrelevant for the behaviour; only the transitions from one numbered state to another matter

- Hence we could (in principle) build a machine with the same number of states as one of these tables, and the same transitions between the states, and it would exactly reproduce the behaviour of the mathematician

This is the philosophical justification for why a (Turing) machine can calculate anything a human can (in fact, the same argument shows that a Turing machine can reproduce the behaviour of any physical system).

However, this is still a rather hand-wavey "in principle" thought experiment about unimaginably huge numbers. Turing managed to take it further.

For simplicity we assume all the papers are arranged in a sequential "tape", we'll call the distinguishable states of the papers "symbols" and those of the mathematician/machine "states":

- One thing a mathematician can do is read a tape with one of these tables written on it, followed by a sequence of numbers representing the symbols of another tape, and emulate what the described machine would do when given the described tape (i.e. they could keep track of the current state and tape position, and look up the transitions in the table, to see what would happen)

- Since a mathematician can emulate any given table (in principle), so can a machine. This would be a "universal machine", able to emulate any other. (The video talks about such a machine, in the proof that the halting problem is undecidable)

So the question becomes: how unfathomably complicated would such a universal machine have to be?

- These transition tables and tapes can be very big, and may contain very large numbers, but we can write them down using only a small alphabet of symbols, e.g. "start table", "new row", "the digit 7", etc.

- Reading a sequence of such symbols, and emulating the described machine, can get tricky. Turing described a universal machine "U", but he did so in a rather indirect way, which also turned out to have some mistakes and glaring inefficiencies. Davies later worked through these and ended up with an explicit machine using only 147 states and 295 symbols.

Hence we can use a machine with only a few hundred states to exactly reproduce the behaviour of any mathematician (or any physical system), as long as it's given an appropriate description (i.e. "software"). Later work has found universal Turing machines with only 4 states and 6 symbols.

One reason Turing's justification for his model is important, rather than just proposing the model and seeing what happens (like in the video), is that Alonzo Church had already proposed a model of computation (called Lambda Calculus), but didn't have such a justification.

Gödel himself dismissed Lambda Calculus, proposing his own system (General Recursive Functions) as a better alternative. When Church proved they were equivalent, Gödel took that as reason to dismiss his own system too! Yet Turing's argument did convince Gödel that a fundamental limit on computation had been found. Turing proved his machines are also equivalent to Lambda Calculus, and hence General Recursive Functions; so all of these proposed models turned out to be 'correct', but it was only Turing who could explain why.

Personally I consider this reduction of physical behaviour to transition tables and then to machines, to be the main reason to care about Turing machines. Proving the undecidability of the halting problem was also a great achievement of Turing's 1936 paper (as shown in that video), but that can also be explained (I would argue more easily) using other systems like Lambda Calculus.

Without Turing's justification of his model, undecidability comes across as simply a flaw. Sure Turing machines may be (distantly) related to our laptops and phones, but if we found a better model without this 'undecidability bug' we could make better laptops and phones! Turing's argument shows that there is no better model (just equivalents, like Lambda Calculus).
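The "table detailing how one state leads to another" can be made concrete with a short sketch (the state names and the increment example are my own, not from Turing's paper): a Turing machine is nothing but a lookup table from (state, symbol) to (new state, symbol to write, head movement).

```python
def run(table, tape, state="inc", blank="_"):
    """Run a transition table until it enters the 'halt' state; return the tape."""
    cells = dict(enumerate(tape))
    head = len(tape) - 1                    # start at the rightmost cell
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Transition table for incrementing a binary number (head starts at the
# least significant bit): turn trailing 1s into 0s while carrying, then
# set the first 0 (or a blank, off the left edge) to 1.
INCREMENT = {
    ("inc", "1"): ("inc", "0", -1),   # carry: 1 -> 0, move left
    ("inc", "0"): ("halt", "1", 0),   # absorb the carry
    ("inc", "_"): ("halt", "1", 0),   # new most significant digit
}

print(run(INCREMENT, "1011"))  # → 1100
print(run(INCREMENT, "111"))   # → 1000
```

A universal machine is then just another such table, one whose tape holds a written-out description of a table like INCREMENT followed by that machine's input, exactly as described above.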

meroes
Where in there did Turing prove computation can "exactly reproduce the behavior of ... (any physical system)"? Because we humans seem to be able to describe and write down the behavior of any physical system? I'm not sure that jump is airtight.

I'm not sure we can say this part even though we might believe it to be true. I mean, maybe now we can, since we have no deeper theories than quantum field theories, and we can posit that all of physical reality reduces to QFTs and GR and simulate all of QFT computationally, either with quantum computers (in real time) or classically (not in real time, plus needing a random number generator). But then we'd still have computational limits on which observables we can measure and how big a computation we can run (think size of computer limited by black hole density, for one crude example - although I guess a black hole could be used as a quantum computer. So then I'd say the speed of light is the other main limit; we've already lost the information to run certain calculations to the depths of space).

And to focus back in on the random number generator requirement for classical simulation of quantum: nothing fundamentally random can be simulated classically in principle. Yet randomness is needed to describe the most basic layer of reality. I can't get true randomness from classical no matter what.

chriswarbo
> Where in there did Turing prove computation can "exactly reproduce the behavior to...(any physical system)"? Because we humans seem to be able to describe and write down the behavior of any physical system?

Turing's scenario essentially applies to a person in a room, with an unlimited amount of paper (this could be a movable tape extending through the walls, to prevent the room filling up with paper!).

However, the same argument works if we replace the person with any other physical system (e.g. some mechanical device, or some physics experiment), and replace the room with any finite region of space (e.g. the Milky Way galaxy).

I don't remember Turing himself making this argument (let alone in his 1936 paper), but it's known as the "physical Church-Turing thesis" https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#V... (the Church-Turing thesis is the argument that all 'effective methods of calculation' can be carried out by a Turing machine)

As for your points about quantum physics, etc. that's certainly true, and it's why the Church-Turing thesis isn't a theorem (and can't be, since we can never know if any particular model of reality is correct). However, it's certainly a valid physical law, akin to nothing moving faster than light, etc. i.e. it's empirically true, theoretically sound, and we have no better explanation for the moment.

Perhaps its origin in mathematics, where it is a second-class citizen compared to provable theorems, prevented the Church-Turing thesis from getting the recognition it deserves, e.g. from physicists.

foxes
Well I might add the quantum complexity-theory version of the Church–Turing thesis.

I believe that quantum computation is a richer more powerful theory. In essence BQP is a superset of BPP in your link.

I think this happens with all sorts of mathematical objects. Non-commutative algebra is much richer than commutative algebra. There are also more interesting non-commutative things than just deformations of classical objects. It's the classical commutative case that is the weird limit.

chriswarbo
Yeah, there are all sorts of layers we can add, especially regarding complexity (i.e. resource usage limits, rather than simple yes/no questions of decidability).

It's interesting to consider P = NP as a physical law, although that's certainly more tenuous than the Church-Turing thesis, etc.

lurquer
Interesting post.

Can you please flesh out the following? I don’t quite follow it..

> having infinitely many states would require them to be infinitesimally similar, and hence indistinguishable from each other within a finite amount of time

chriswarbo
Sure. Let's take a finite region, like a room. We don't want to get too bogged down in the details of human psychology, anatomy, etc. so we'll stick to something very low-level like particle physics (which, in principle, can describe a person in exact detail). One state of this region is that it's completely empty (maybe quantum effects rule that out, but we can still include it if we're being conservative ;) ). Another state could have, say, a single electron, at some position we'll call (0, 0, 0). Another could have a single electron at position (0, 0, 1). Another has an electron at (25, 5, 3) and a proton at position (0, 0, 0). And so on.

Let's consider the states which just contain a single electron, somewhere in the room (I'm not bothering with quantum effects, but we would do the same thing just in Hilbert space instead of 3D space). The region is finite, so the coordinates of this electron are bounded: if the boundaries are at, say, 0 and 100 in each axis, then putting the electron at position (101, 0, 0) would be the same state as the empty one we started with (since the electron is outside the region we care about, and there's nothing else).

Now let's say we do some measurement of this electron's position, which is only accurate to whole number coordinates. That gives us 100^3 = 1,000,000 distinguishable states with a single electron. We might imagine all sorts of different states 'in between', but they can't affect that measurement due to its limited resolution.

If we increase the resolution of our measurement by 10, we get 10^3 times as many distinguishable states; another factor of 10 gives 10^3 times as many states again; and so on. However, regardless of how finely we can resolve the electron's position, we can only distinguish between finitely-many states. Any differences finer than that limit are irrelevant to the measurement, and hence to any behaviour that depends on that measurement.

If we could resolve between infinitely-many positions for the electron, that would require distinguishing between states which are infinitesimally-close together, i.e. on an infinitely-fine grid; this would require distinguishing between two numbers whose decimals only differ after infinitely many digits. This doesn't seem like a reasonable capability.

The same principle applies when we have two electrons in the room, or trillions, or any combination of electrons, protons, photons, etc. in any arrangement we like.

A similar thing happens when we reach really large numbers too, e.g. trying to put as many particles in the region as we can: at some point the relative difference between, say, a googolplex and five particles versus a googolplex and six particles becomes too small to distinguish. There will eventually be a limit. (This is like the resolution problem, but instead of adding decimal places to the right of each number, we're adding factors of 10 to their left)

One way to get around this problem of limited resolution is to avoid an explicit 'measurement', and instead have the region itself behave differently for different states. A good example is chaotic behaviour, where tiny changes in an initial state will grow exponentially until those differences eventually become distinguishable. However, the infinitesimally-similar states described above will remain infinitesimally close for all finite lengths of time; in order to distinguish between them, we would need to wait an infinite amount of time.

lurquer
Ah. I hadn’t thought of that.

So putting aside Planck and all the experimental stuff, even on a philosophical level there are problems with thinking of space as continuous. For that with which we measure space is necessarily discrete, and the time-frames in which we measure are necessarily finite. So even if space were continuous, two 'states' that are infinitesimally similar could for ALL practical purposes be considered identical.

Is that it?

chriswarbo
Pretty much.

As a far less formal analogy, it's similar to how people complain that digital audio in general has lower quality than analogue, purely from a discrete-vs-continuous standpoint, when in fact there's no limit to the quality of a digital representation (if we liked, we could store a gigabyte per sample, or a petabyte, or whatever). Yet we can always draw the line somewhere, beyond which there are no meaningful distinctions.

0x0203
I know I'm pretty inept when it comes to this subject, but can someone help me understand how the first three points of Turing's philosophical justification listed above can be true? For example, in computing the sequence of n values of an infinite series or the first n digits of pi, how can the claim be made that this can be done in a finite amount of space or with a finite number of states?
chriswarbo
We're allowed an unlimited amount of paper, but each piece has a finite size, and hence each piece is in one of only finitely-many distinguishable states. Turing called these piece-of-paper-states "symbols", and arranged the pieces of paper in a "tape" of unlimited length (a stack of sheets, or a library of books would also work; but moving between the pieces would be more complicated).

In fact, we only ever need 2 symbols, e.g. 0 and 1. This is because any set of symbols can be replaced by a set of numbers, and those numbers can be written in binary (with a fixed width, to avoid needing 'spacers'). This can require a lot more machine states, since those states allow the machine to 'remember' the bits it's read so far (e.g. if we're using 8 bits at a time, then we need enough states to "remember" what the first 7 bits were as we're reading the 8th).
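A tiny sketch of that binary re-encoding (the symbol names here are made up for illustration):

```python
# Sketch: replacing an arbitrary symbol set with fixed-width binary codes.
symbols = ['blank', 'A', 'B', 'C']
width = (len(symbols) - 1).bit_length()              # 2 bits cover 4 symbols
encode = {s: format(i, f'0{width}b') for i, s in enumerate(symbols)}
decode = {code: s for s, code in encode.items()}

tape = ''.join(encode[s] for s in ['A', 'C', 'blank'])
print(tape)  # 011100

# Fixed width means we can split the tape back up without any 'spacers':
chunks = [tape[i:i + width] for i in range(0, len(tape), width)]
print([decode[c] for c in chunks])  # ['A', 'C', 'blank']
```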

For a calculation like pi, we can use the tape to (a) write down the digits (or bits) of pi, and (b) keep track of intermediate variables needed for the calculation. One simple approach to keeping track of intermediate variables is to store them in order on the tape, where each variable is a sequence of 1 symbols, terminated by a 0 symbol. For example, a tape like 111010010 represents four variables with values 3, 1, 0 and 1 (each value being the number of 1s we see before we hit a 0). This actually corresponds to Peano arithmetic, if we instead use the symbols S and Z, like SSSZSZZSZ.
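As a sketch, decoding such a unary-encoded tape back into variable values looks like this (a hypothetical helper, just to illustrate the convention):

```python
def decode_tape(tape):
    """Decode a tape of unary-encoded variables: each variable is a run
    of '1's terminated by a '0', e.g. '111010010' -> [3, 1, 0, 1]."""
    values, count = [], 0
    for symbol in tape:
        if symbol == '1':
            count += 1
        else:               # a '0' ends the current variable
            values.append(count)
            count = 0
    return values

print(decode_tape('111010010'))  # [3, 1, 0, 1]
```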

An algorithm which spits out digits of pi forever (e.g. http://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/sp... ) could work by moving its 'variables' section further and further to the right, allowing the digits of pi to accumulate, e.g.

    Start
    
    ^

    Initialise variables and output/variables separator
    |VARIABLES

    Move variables to make space for next digit
    _|VARIABLES

    Calculate and write next digit
    3|VARIABLES

    Move variables to make space for next digit
    3_|VARIABLES

    Calculate and write next digit
    31|VARIABLES

    and so on
mensetmanusman
It might be because pi isn’t infinite in practice. E.g. we could use all the available energy in the universe and only calculate up to a finite digit (ignoring the information storage issue!).

Whether the abstract infinite pi ‘exists’ is up for metaphysical debate :)

chriswarbo
Here is pi, written as a number to infinite precision (from http://davidbau.com/archives/2010/03/14/python_pipy_spigot.h... ):

    def pi_decimal_digits():
      q, r, t, j = 1, 180, 60, 2
      while True:
        u, y = 3*(3*j+1)*(3*j+2), (q*(27*j-12)+5*r)//(5*t)
        yield y
        q, r, t, j = 10*q*j*(2*j-1), 10*u*(q*(5*j-2)+r-y*t), t*u, j+1
If you mean that pi can't be written as a number to infinite precision using place-value notation (as opposed to a more useful notation like Python) then I agree that only a finite prefix can ever be observed.
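For anyone who wants to check that the generator above really does stream pi's digits, here's a self-contained sanity check (the generator is repeated so the snippet runs on its own):

```python
from itertools import islice

def pi_decimal_digits():
    q, r, t, j = 1, 180, 60, 2
    while True:
        u, y = 3*(3*j+1)*(3*j+2), (q*(27*j-12)+5*r)//(5*t)
        yield y
        q, r, t, j = 10*q*j*(2*j-1), 10*u*(q*(5*j-2)+r-y*t), t*u, j+1

# Take a finite prefix of the infinite digit stream.
print(list(islice(pi_decimal_digits(), 5)))  # [3, 1, 4, 1, 5]
```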
mcguire
Note that the Python version of pi requires an unbounded amount of memory and, more importantly, an infinite amount of time to produce pi to infinite precision.
chriswarbo
> Note that the Python version of pi requires an unbounded amount of memory

Really? I measure it as exactly 200 bytes (not counting the 4-space indentation that HackerNews requires, and not including a trailing newline)

> and, more importantly, an infinite amount of time to produce pi to infinite precision

I can see it right now, with infinite precision. If it's taking an infinite amount of time to show up I'd contact your ISP ;)

mcguire
Ah, I see where you are going with this. However, I can offer a version which, without loss of generality, gives another 200-to-1 compression (not counting UTF-8 multibytes): π.

There it is, in all its infinite splendor.

Kranar
Not at all. The reason you may think so is that you're very used to representing numbers in a base-10 number system, and in that system it is inherently impossible to represent pi. But if you change how you represent numbers, you can represent pi exactly. What OP did was represent pi using a syntax inspired by a programming language, which is just as valid a number system as any other formal representation as far as correctness goes; of course, as a human, I likely would not wish to express most of my numeric use cases in a full-blown programming language.

That said, one can perform all of arithmetic using that system and represent any property of numbers that could be represented otherwise.

mensetmanusman
Also, pi = C/d

in geometry notation :)

lmm
What you've written is not pi; it's a procedure for computing pi. Python expressions are not a notation: they fail basic requirements for a reasonable notation (for example, you cannot generally determine whether two Python expressions are equal or not).
mietek
You may find it interesting to learn that one cannot generally determine whether two real numbers are equal or not: equality on real numbers is undecidable.
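A sketch of why this is: given two reals as digit streams, a finite comparison can refute equality but never confirm it (the streams below are made up for illustration):

```python
from itertools import islice

def third():                 # 1/3 = 0.3333...
    while True:
        yield 3

def third_ish(flip_at):      # agrees with 1/3 until digit flip_at
    i = 1
    while True:
        yield 3 if i < flip_at else 4
        i += 1

def refute_equality(xs, ys, max_digits):
    """Return the first digit position where two streams differ, or None
    if they agree through max_digits -- which proves nothing about full
    equality: they could still differ at digit max_digits + 1."""
    for i, (x, y) in enumerate(islice(zip(xs, ys), max_digits), start=1):
        if x != y:
            return i
    return None

print(refute_equality(third(), third_ish(100), 50))   # None: equal so far
print(refute_equality(third(), third_ish(100), 200))  # 100: refuted
```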
lmm
I'm well aware. It's also the case that there is no good notation for the real numbers; in particular, since any given notation system can only represent countably many distinct entities, almost all real numbers are unrepresentable in that notation system. Therefore the notion that the full set of reals actually exists is extremely dubious.
mcguire
Turing's machine includes an "infinite" tape. The number of states, the state transitions, and the number of symbols are finite, but an unbounded number of symbols can be written, one to a cell, on the tape.
mensetmanusman
I’ve wondered recently whether the brain can be represented by a Turing machine if our computation mechanisms utilize something like backpropagation through time via quantum effects.

Maybe we will make progress in our understanding this millennium :)

chriswarbo
Quantum physics can be represented and calculated by Turing machines. There are many ways to do this, depending on preference: via symbolic algebra, via probabilistic inference, via random/pseudorandom numbers, via non-deterministic functions, etc.

Note that quantum computers are no more powerful than Turing machines, although it's thought they might be faster at certain tasks. For example, we know how to factorise integers quickly using Shor's algorithm on a quantum computer, but we don't yet know a quick algorithm for factorising integers on a classical computer. We can still factorise integers on a classical computer: we can do it with a non-quantum algorithm, or we can even do it by emulating a quantum computer and using that emulator to run Shor's algorithm (in this case the emulation is so slow that it exactly cancels out the speedup from Shor's algorithm).

Backpropagation through time is also a pretty simple algorithm, which is already implemented on classical computers.

wizzwizz4
> Note that quantum computers are no more powerful than Turing machines;

But computers with self-consistent time travel are more powerful. It's perhaps physically possible to construct one of those, depending on whether we ever end up with a more-correct timeless theory of physics that makes different predictions to existing physics.

> Backpropagation through time is also a pretty simple algorithm, which is already implemented on classical computers.

I read “backprop through time” as meaning that you have the result of the backprop at the beginning of the process – that's more powerful, if we're using continuous computation. (With discrete computation, you're right that it's no more powerful.)

meroes
> Quantum physics can be represented and calculated by Turing machines.

1) Where does the randomness of QM come in though? You need a random number generator alongside your Turing machine.

We could debate that one can never prove true randomness, as it may always be due to some deeper determinism. But if we take the wavefunction of QM as complete, which Many Worlds does - the most compsci-like interpretation - then randomness is fundamental according to the theory.

I can have all the TMs I want, but I still need this extra component. Just as the human Turing machine could have all the pen and paper and math in the entire universe and beyond, yet a human can't produce a truly random number (or can we? then we get into a debate about true free will and whether free will can produce a "random" number).

2) Let's say we instead use quantum computation instead of classical. There are still physical systems we can't "compute", like the state of the entire observable universe, the state of the entire universe, or anything requiring information that has forever left our light cone. I know TMs and quantum versions of TMs are infinite, but they can't bypass the speed of light. So we need some softer version of saying all of physical reality is computable, imo.

chriswarbo
> Where does the randomness of QM come in though?

That's why I said it's up to your own preference.

It's interesting that you mention a many-worlds approach, since that doesn't actually involve any randomness. Rather than choosing a single result probabilistically, many-worlds keeps them all. This can be represented quite nicely with what I referred to above as 'nondeterministic functions', i.e. functions which may return multiple values (in some sense, those values are literally the "worlds"). This would look like a logic language a la Prolog, but wouldn't be restricted to depth-first traversal: it could run breadth-first (very slow), concurrently (nice in principle, but would swamp any finite machine with overheads), or iterative deepening (probably the nicest implementation IMHO). Haskell programmers would call this a "breadth-first list monad" (AKA LogicT, AKA Omega). Instead of "returning multiple values", we could also do a continuation-passing transform and invoke the continuation multiple times (concurrently).

This would run exponentially slowly, but that's a whole separate issue ;)
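A minimal depth-first sketch of such multiple-value 'nondeterministic functions' (the list monad transliterated to Python; the breadth-first/LogicT variants mentioned above differ only in traversal order):

```python
def amb(*choices):
    """A nondeterministic value: simply the list of all alternatives
    (in some sense, each alternative is a "world")."""
    return list(choices)

def bind(worlds, f):
    """Apply a possibly-branching function to every world and flatten
    the resulting branches (the list monad's bind)."""
    return [outcome for w in worlds for outcome in f(w)]

# Two spin measurements: all four branches are kept, none is chosen.
worlds = bind(amb('up', 'down'),
              lambda a: bind(amb('up', 'down'),
                             lambda b: [(a, b)]))
print(worlds)
# [('up', 'up'), ('up', 'down'), ('down', 'up'), ('down', 'down')]
```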

> We could debate that one can never prove true randomness, as it may always be due to some deeper determinism.

We could get a Copenhagen-like approach by doing the same as above, but only returning one of the results, chosen "at random" according to the square of the wavefunction (plus some hand-wavey Copenhagen-craziness, like defining whether or not Wigner's Friend is an 'observer')

When it comes to "randomness" my own approach is to mentally substitute the word "unpredictable", since it tends to be more enlightening. Turing machines can certainly produce results which are "unpredictable", in the sense that the only way to know what will happen is to run the program; in which case that's more like a (repeatable) observation rather than a prediction. (BTW I also think that's a nice resolution of the "free will" paradox: my behaviour may be predicted by observing what an exact copy of me does; but it was still "decided by me", since that copy essentially is "me")

In any case, the outcome of a Turing machine using some pseudorandom algorithm is indistinguishable from "quantum randomness". There is certainly a limit to the unpredictability of any pseudorandom events (given by its Kolmogorov Complexity, which is upper-bounded by the Turing machine's program), but on the other hand I think it's quite presumptuous to think "quantum randomness" has infinite Kolmogorov Complexity.

> There are still physical systems we can't "compute", like the state of the entire observable universe, the state of the entire universe, or anything requiring information that has forever left our light cone.

That's a very different question, since it involves how we choose what to put on the tape, rather than what a Turing machine is capable of computing. It's like criticising a genie for being unable to grant a wish, when in fact it's we who can't think of what to ask for.

Besides this, there is a way for a Turing machine to compute the whole universe, including what's outside our visible lightcone: we simply run every program. That's actually a pretty straightforward task (and remarkably efficient), e.g. see the FAST algorithm in section 9 of https://people.idsia.ch/~juergen/fastestuniverse.pdf

Note that if we run such a program, there is no way to know which of the outputs corresponds to our universe. However, our ignorance of where to look does not imply that such a Turing machine has not computed it!
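The 'run every program' idea can be sketched as dovetailing, where round k starts program k and advances every started program by one step (a toy schedule, not the FAST algorithm from the linked paper):

```python
def dovetail(make_program, rounds):
    """In round k, start program k and give every started program one
    more step, so no non-halting program ever starves the others."""
    running, trace = [], []
    for k in range(rounds):
        running.append((k, make_program(k)))
        for i, prog in running:
            try:
                trace.append((i, next(prog)))
            except StopIteration:
                pass            # a finished program just skips its turn
    return trace

# Toy 'programs': program i is a generator counting upward from i.
def counter(i):
    n = i
    while True:
        yield n
        n += 1

print(dovetail(counter, 3))
# [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
```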

mensetmanusman
” Turing machines can certainly produce results which are "unpredictable", in the sense that the only way to know what will happen is to run the program.”

This doesn’t seem true since all predictions are ‘running the program’

chriswarbo
Not necessarily; lots of systems have 'shortcuts', e.g. I can predict that the result of sin(1000000τ) will be 0, without anything physical/virtual having to wiggle up and down a million times. Lots of systems can be described by equations involving a time variable 't', where we can set t to whenever we like, then solve the equation to get the predicted value.

When a system is Turing-complete, equations like this will inevitably involve some expression with t-1; and solving that involves some expression with t-2; and so on back to the initial condition. Hence there's no 'shortcut' (this is a consequence of Rice's theorem BTW, which itself is a consequence of the halting problem).
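As an illustrative sketch of such a 'shortcut': evaluate sin at a multiple of the period τ by reducing the argument modulo the period, rather than simulating the oscillations (the function name and the tiny table of exact values are made up for demonstration):

```python
from fractions import Fraction

def sin_at_tau_multiple(k):
    """Shortcut evaluation of sin(k*tau) for rational k: periodicity lets
    us reduce k modulo 1 instead of wiggling through k full cycles. Only
    a few exact values are tabulated -- a sketch, not a sine routine."""
    k = Fraction(k) % 1
    exact = {Fraction(0): 0, Fraction(1, 4): 1,
             Fraction(1, 2): 0, Fraction(3, 4): -1}
    return exact.get(k)  # None if no exact shortcut is tabulated

print(sin_at_tau_multiple(1_000_000))   # 0 -- no million wiggles needed
print(sin_at_tau_multiple(Fraction(9, 4)))  # 1
```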

mensetmanusman
“ e.g. I can predict the result of sin(1000000τ) will be 0, without anything physical/virtual having to wiggle up and down a million times. “

This still seems to involve ‘a program’, in that there are foundational mathematical principles that build your confidence/logic that the periodicity of sin(x) is so well defined that any even integer multiple of pi will give zero. E.g. you could write out the logical ‘programmatic’ steps that would also teach a math newbie how to make the conclusion you just made about the sin() function.

chriswarbo
I agree, but such a "program" wouldn't necessarily look/act anything like the "program" 'sin(1000000τ)'.

What's interesting about systems like Turing machines is that the "program" is a specific, isolated thing like a tape of symbols. In contrast, those "foundational mathematical principles" you mention are more of a diffuse web, with lots of interchangeable pieces, all mixed up with unrelated facts. This gives lots of different ways to arrive at the answer. With Turing-complete systems, we always have to come back to the code.

Kranar
> But if we take the wavefunction of QM as complete, which Many Worlds does - the most compsci-like interpretation - then randomness is fundamental according to theory.

MWI is an entirely deterministic interpretation of quantum mechanics. Absolutely no randomness is involved which is one of its most appealing properties.

ProfHewitt
There are very simple digital computations that cannot be performed by a Nondeterministic Turing Machine. See the following for an example:

https://papers.ssrn.com/abstract=3603021

kaba0
With all due respect, it would be very hard for me to believe so. For the simplest case, one can trivially create a Turing machine that simulates a CPU, so I’m not sure the claim about digital computation holds any water.
ProfHewitt
Issue is that a Turing Machine cannot communicate with other Turing Machines in the middle of a computation.
oscardssmith
This is just false. A Turing machine can emulate multiple Turing machines that communicate with each other.
ProfHewitt
Turing machine can simulate multiple Turing Machines but cannot implement multiple Turing Machines communicating with each other because there is no means to communicate.
oscardssmith
But since Turing machines can emulate multiple Turing machines, any problem that can be completed by multiple machines communicating can be solved by 1 Turing machine. As such, the computational power is exactly the same.
ProfHewitt
As said elsewhere in this posting:

Nondeterministic Turing Machine has only bounded nondeterminism. That is, for a given input a Nondeterministic Turing Machine can only explore the number of alternatives bounded by its input. Actors do not have the limitation.

kaba0
Let me rephrase: an n-core CPU can be simulated on a Turing machine with eg. time sharing — effectively being able to simulate the Actor model.
ProfHewitt
Turing Machine can only do cooperative multiprogramming and cannot do time-interrupted scheduling.
rssoconnor
A Turing Machine can simulate every possible interleaving of any concurrent computation.
kaba0
Well, not time-interrupted but there absolutely is a way to schedule n steps of each core, isn’t there? Similarly to how the universal Turing machine is constructed, or how the nondeterministic TM is made deterministic.
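A sketch of that time-sharing idea: n 'cores' interleaved on one sequential machine, communicating through a shared queue (illustrative Python generators, not a real Turing-machine construction; the producer/consumer roles are made up):

```python
import collections

def run_cores(cores, rounds):
    """Simulate n 'cores' on one sequential machine by giving each a
    one-step quantum per round; they communicate via a shared queue."""
    queue = collections.deque()
    log = []
    gens = [core(queue, log) for core in cores]
    for _ in range(rounds):
        for g in list(gens):
            try:
                next(g)                 # advance this core by one step
            except StopIteration:
                gens.remove(g)          # this core has halted
    return log

def producer(queue, log):
    for i in range(3):
        queue.append(i)                 # 'send' a message
        yield

def consumer(queue, log):
    while True:
        if queue:
            log.append(queue.popleft())  # 'receive' a message
        yield

print(run_cores([producer, consumer], 5))  # [0, 1, 2]
```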
ProfHewitt
Nondeterministic Turing Machine has only bounded nondeterminism. That is, for a given input a Nondeterministic Turing Machine can only explore the number of alternatives bounded by its input. Actors do not have the limitation.

tsimionescu
Does this difference still hold if time were discrete? I may be out of my depth, but it seems intuitive that if time were discrete, you could enumerate all possible interleavings of actors' actions, and reproduce their behaviors in a Turing machine.
ProfHewitt
Bounded nondeterminism means that there is a bound determined by initial input for the number of possibilities that a system can explore given that it must always come back with an answer. A Nondeterministic Turing Machine has the property of bounded nondeterminism because it starts in a global state and from each state there are finitely many possible successor states. Because the machine must always produce an answer, each path must be finite and consequently the total number of states that can be explored is finite.

In the development of a theory of concurrent computation, Dijkstra remained committed to a global state model of computation. This commitment gave rise to the “unbounded nondeterminism” controversy. Digital systems can be physically indeterminate by making use of arbiters in order to resolve reception order in communication between systems. Because of physical indeterminacy, a digital system can have the property of unbounded nondeterminism in that no bound exists when it is started on the number of possibilities that an always-responding digital system can explore. Consequently, there are digital computations that cannot be performed by a nondeterministic Turing Machine. Being restricted to bounded nondeterminism is one of the reasons that the lambda calculus and Turing Machine models are inadequate for digital computation.

kaba0
So unbounded nondeterminism is basically hypercomputation? Then I assume one can write a program for the theoretical Actor model that computes a function on every natural number, is that right? Or that it can solve the halting problem for Turing machines, since if it is stronger than them, the halting problem for TMs no longer applies (it is replaced by their own halting problem).
ProfHewitt
An Actor cannot decide the halting problem for program expressions. See proof in the following:

https://papers.ssrn.com/abstract=3603021

tsimionescu
That would be a major revolution to the foundations of computer science if it were true, disproving the Church-Turing thesis. It seems unlikely that such a fundamental, world-shattering result would be so obscure, so I am much more likely to believe it is false.
ProfHewitt
Do you have any reasoning to back up your belief?
tsimionescu
As I said, if someone had convincingly proved this, it would shake the field to its core. Since that didn't happen, they can't have convincingly proven this.
ProfHewitt
Scientific progress does occur!

Recent advances are compatible with a long tradition, with contributions by Euclid, Dedekind, Frege, Russell, Church, Turing, von Neumann, etc.

The following article is in furtherance of the long tradition:

https://papers.ssrn.com/abstract=3603021

jjgreen
That's 7 times you've linked to that paper in this discussion, going for some kind of record?
ProfHewitt
Have you read the article?

Can you suggest any improvements?

dahjkol
Don't need to em every 10th word.
Rioghasarig
> This is the philosophical justification for why a (Turing) machine can calculate anything a human can (in fact, the same argument shows that a Turing machine can reproduce the behaviour of any physical system).

I don't think this assertion follows. I don't think an argument like this can work without delving further into reasonable description of a "physical system".

If you just use the mathematics we employ to describe physical systems unreservedly, it is possible to construct "physical systems" that exhibit non-computable behavior. For instance, you can have computable and continuous initial conditions to the wave equation that produce a solution that is non-computable. See: https://en.wikipedia.org/wiki/Computability_in_Analysis_and_...

I think it's important to emphasize that Turing stated his arguments in regards to "effective procedure" (which I see you mention in a different post). I don't think the substitution of "effective procedure" with "physical system" is justified.

chriswarbo
> For instance you can have computable and continuous initial conditions to the wave equation that produces a solution that is non-computable.

Thanks, that's a really nice example which I hadn't come across before (or at least not spent too much time studying). I may have to refine the language I use in future; although a cursory look seems to be compatible with my own understanding (my mental model is roughly: "if we found a halting oracle, we would have no way to tell for sure")

> I think it's important to emphasize that Turing stated his arguments in regards to "effective procedure" (which I see you mention in a different post). I don't think the substitution of "effective procedure" with "physical system" is justified.

Yes, Turing did not say as much (at least in his 1936 paper). He was essentially abstracting over 'whatever it is that a person might be doing', in an incredibly general way. Others have since taken this idea and applied it more broadly.

Another useful caveat is that Turing machines are framed as (partial) functions over the Natural numbers. It's quite a leap from there to a "physical system". An obvious example is that no matter how cleverly we program a Turing machine, it cannot wash the dishes; although it can simulate the washing of dishes to arbitrary precision, and it could wash dishes by controlling actuators if we decided to attach some (but even that would run into problems of time constraints; e.g. if calculating the first instruction took so long that the water had evaporated).

wizzwizz4
The problem with your assumption is that you're assuming there are a finite number of states. There might be an uncountably infinite number of states, for instance if the states were the reals between 0 and 1.
chriswarbo
Which assumption are you referring to? If it's about the states in a finite region, note that I specifically limited this to distinguishable states.

Whether or not a finite region can have an infinite number of states (countable or otherwise) is irrelevant; we can only distinguish finitely many in finite time. Two states being indistinguishable means they'll give rise to the same output behaviour (e.g. from a mathematician in a room, carrying out some procedure).

sillysaurusx
In traditional HN style, I will now thumb my nose at Veritasium's excellent video due to him skipping over how Gödel came up with the proof.

He built up the whole damn video for that one moment, when he says "And Gödel went through all this trouble to come up with this card..." and flips over the card. But how Gödel did it is at least as cool as the proof itself.

I'm mostly being tongue in cheek with the Veritasium nose-thumbing. But if you saw it and want to understand it more deeply (and in my opinion more satisfyingly), I recommend "What Gödel Discovered": https://news.ycombinator.com/item?id=25115746

It explains Gödel's proof using Clojure. Quite accurately, too.

The thread is also hilarious. Shoutout to the MIT professor who took offense at someone linking to his wikipedia page: "Wikipedia is contentious and often potentially libelous." ... and then you check his wikipedia page, and it's a super positive article about all of his discoveries and life's work.

ProfHewitt
A fundamental problem with Wikipedia is that it doesn't have proper accountability for material that it publishes. Furthermore, Wikipedia is heavily censored by an anonymous group.

See [von Plato 2018] on how Gödel deceived von Neumann in developing one of his fundamental proofs in [Gödel 1931].

koheripbal
I agree... while the video is great - it does have a pretty big hole there.
saurik
That wikipedia article is edit-locked and if you look at its history it is, in fact, hilarious.
ProfHewitt
Why do you find Wikipedia censorship hilarious?
saurik
Because it is? It isn't quite "ridiculous", as it is somewhat more amusing than that. It certainly isn't "funny", FWIW, as I have absolutely no positive connotations with it... "hilarious" feels like the correct term.
ProfHewitt
Hilarious means "extremely amusing", which is your position on censorship?
gpderetta
for those that haven't clicked through, the MIT professor is none other than Carl Hewitt of actor model fame.

I should post the wiki link so that we can have the whole discussion again :)

Workaccount2
>Carl Hewitt of actor model fame

I'll admit that threw me for a loop for a second.

gpderetta
:) Sorry, I guess it ended up being some sort of https://en.wikipedia.org/wiki/Garden-path_sentence
espadrine
> Shoutout to the MIT professor who took offense at someone linking to his wikipedia page […] and it's a super positive article about all of his discoveries and life's work.

Carl Hewitt is no doubt a famed and accomplished professor, but why does nobody point out that he is claiming that Gödel’s proof is incorrect[0]?

I already heard of his conviction that “The Church/Turing Thesis is that a nondeterministic Turing Machine can perform any computation. […] [T]he Thesis is false because there are digital computations that cannot be performed by a nondeterministic Turing machine.”[1]. (Which, to be fair, is a strongly related argument to the one that Gödel was wrong.)

I don’t personally know whether he is right, and his digression about millions of people risking death as a result does not inspire confidence, but in my understanding, neither statement is widely accepted.

[0] https://news.ycombinator.com/item?id=25160970

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3603021

myWindoonn
I don't want to be a professional Hewitt debunker, but his stance has been ripped to shreds; at this point, he is wrong about Direct Logic refuting Gödel and Turing, and his ActorScript can't be implemented on real machines.
mcguire
"...his ActorScript can't be implemented on real machines."

I think I understand why, but could you provide some more details?

myWindoonn
Loosely, because ActorScript requires some sort of genuine non-determinism where an actor explores multiple states simultaneously, we need some sort of magic non-deterministic computer. A quantum computer isn't enough; simulating an ActorScript program is NP-complete.

Maybe there can be physical machines which run ActorScript, but I would expect them to come with new laws of physics, and also I think that there are probably more comfortable programming languages. We already have https://en.wikipedia.org/wiki/Constraint_Handling_Rules for example.

mcguire
Thanks!
joelbluminator
Maybe he should try writing it in Elixir instead of ActorScript?
weatherlight
Hewitt argues that erlang/elixir/beam isn't the actor model because of how Erlang processes receive/process messages

more here: https://www.erlang-solutions.com/blog/lets-talkconcurrency-p... and here: https://codesync.global/media/almost-actors-comparing-pony-l...

joelbluminator
Haha that actually happened nice!
ProfHewitt
Of course, Erlang can be accurately modeled using the theory of Actors. See discussion in the Related Work section of the following article:

https://papers.ssrn.com/abstract=3603021

weatherlight
Prof. Hewitt himself!
alisonkisk
What's wrong with ActorScript? Does it have some kind of infinite-size features?

More interesting facts:

His Twitter account has been relatively recently deleted

https://mobile.twitter.com/drcarlhewitt/status/3972794314824...

And his PhD advisor was Seymour Papert.

ProfHewitt
Twitter account is @ProfCarlHewitt and blog is

https://professorhewitt.blogspot.com/

See following for a simple ActorScript program that cannot be

implemented using a Nondeterministic Turing Machine:

https://papers.ssrn.com/abstract=3603021

ProfHewitt
myWindoonn: Do you have anything of actual substance to contribute to the discussion?

myWindoonn
We found http://www.laputan.org/pub/papers/aim-353.pdf the other day. How's it feel to know that folks have been calling you out on your incorrectness since the 1970s?
alecst
I don't know anything about Hewitt or his stance, but I don't feel like this adds a whole lot to the conversation. I mean I can tell that you're sure that he's wrong, but why should the strength of your belief convince me?

It would be helpful if you added some links or some explanation, or just anything beyond what you've written there.

myWindoonn
Suppose that Turing is wrong. Then Turing gave an algorithm which should defeat Rice's Theorem. Therefore, I may simply ask, to Hewitt: Could you please write up a short program in a popular Turing-complete programming language which proves or disproves Goldbach's Conjecture? It should be trivial!

Of course, on actual computers, Turing's result applies, Rice's Theorem manifests, and such a program will likely run indefinitely, until we get bored and give up.
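For concreteness, here's what such a Goldbach search looks like (bounded here so it terminates; the unbounded version is exactly the kind of program whose halting we can't decide in general):

```python
def is_prime(n):
    """Trial-division primality test (fine for a small demonstration)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample(limit):
    """Search even numbers >= 4 for one that is NOT a sum of two primes.
    Drop the limit and, if the conjecture is true, this runs forever --
    and no general procedure can tell us so in advance."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n
    return None

print(goldbach_counterexample(10_000))  # None: no counterexample below 10,000
```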

The point that Hewitt has made in another thread is typical misdirection. See, it doesn't matter which order of logic we're talking about; in general, we have results for higher order logic, all the way up to Lawvere's fixed-point theorem for any Cartesian closed category. To fight this, Hewitt has to disclaim the entirety of 20th-century mathematics. (At least he's internally consistent -- he does deny modern maths!)

The sad fact is that, ever since 1976 (http://www.laputan.org/pub/papers/aim-353.pdf), folks have recognized that Hewitt's claims about the foundations of computation are not just wrong, but flamebait. He has never had the humility to learn from his mistakes.

ProfHewitt
Veritasium, et al. overlooked the crucial role that orders on propositions play in maintaining the consistency of foundations.

Neglecting to include the order of a proposition in its Gödel number makes it possible to mistakenly conclude that the proposition I'mUnprovable exists as a fixed point of a recursive definition.

See the following for details:

https://papers.ssrn.com/abstract=3603021

kaba0
It’s similar to claiming someone made a perpetual motion machine: extraordinary claims going against basically every mathematician and computer scientist since Gödel require extraordinary evidence. And the result is almost always that the claimant overlooked some small but crucial detail.
ProfHewitt
Veritasium, et al. overlooked the crucial role that orders on propositions play in maintaining the consistency of foundations.

Neglecting to include the order of a proposition in its Gödel number makes it possible to mistakenly conclude that the proposition I'mUnprovable exists as a fixed point of a recursive definition.

See the following for details:

https://papers.ssrn.com/abstract=3603021

deadbeef57
I know of several computer verified proofs of the incompleteness theorem. See entry 6 of https://www.cs.ru.nl/~freek/100/ for a small catalogue. How does this relate to your objection? Are there bugs in the verifiers? Or did people verify the wrong theorem statement?
ProfHewitt
The proofs forgot to include the order of a proposition in its Gödel number.
deadbeef57
So you are saying that the computer-verified proofs of the incompleteness theorem verify the wrong theorem statement? Because there can't be bugs in a computer-verified proof.

(Of course there can theoretically be bugs in a computer-verified proof, if there are bugs in the verifier. But if 4 independent proof assistants verify a proof of a theorem, that is extremely unlikely.)

ProfHewitt
A proof is only as good as the definitions and axioms used in the proof.

All of the computer-verified proofs of the existence of the [Gödel 1931] proposition I'mUnprovable left out the order of the proposition in its Gödel number :-(

espadrine
Isn’t the order of a proposition included in its Gödel number?

Each proposition is assigned to an increasing prime power, and the increasing list of primes has a total order, such that swapping propositions yields a distinct Gödel number.
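As a toy illustration of this prime-power scheme (the symbol codes below are made up for the example; Gödel's actual assignment differs), the encoding does capture order, since the factorization of the resulting number recovers the exact sequence of symbols:

```python
def primes():
    """Yield the primes 2, 3, 5, 7, ... in order."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# Hypothetical symbol codes, for illustration only.
CODES = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5}

def godel_number(formula):
    """Encode a symbol string as p_1^c_1 * p_2^c_2 * ... where p_i is
    the i-th prime and c_i is the code of the i-th symbol."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** CODES[sym]
    return g

# Position is encoded: swapping symbols changes the number.
# godel_number('S0') = 2**2 * 3**1 = 12, godel_number('0S') = 2**1 * 3**2 = 18
```

Because prime factorization is unique, two formulas get the same Gödel number only if they consist of the same symbols in the same positions.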

ProfHewitt
[Gödel 1931] encoded the characters of a proposition in its Gödel number but did not include the order of the proposition.

espadrine
Could you provide two statements with equal Gödel numbers but distinct proposition orders, and a detail of how you compute the Gödel numbers?
ProfHewitt
If two propositions have the same characters, then they have the same Gödel numbers.
espadrine
Could you provide two statements with equal Gödel numbers and the same characters in different orders, and a detail of how you compute the Gödel numbers?
ProfHewitt
If two propositions have the same characters in the same order, then they have the same Gödel numbers.

Unfortunately, the Gödel number of a proposition leaves out the order of the proposition :-(

espadrine
If two statements have the same characters in the same order, how are they not the same statement? And why would that make Gödel’s statement invalid in Principia Mathematica?
ProfHewitt
There might be two propositions of different orders that have the same characters in the same order.

tsimionescu
I think what ProfHewitt means here (based on other writing I've found, as he is frustratingly low on details in these conversations) is "order" in the sense of "first order logic", "second order logic"; not in the sense of "first proposition, second proposition" etc.

His claim is that the proposition "This proposition is not provable", formalized as "P equivalent to P is not provable", is not well formed in a typed logic, as "P"'s type is first-order, while "P is not provable"'s type is second-order. Therefore, his claim is that the proposition is simply not well-typed and therefore not interesting. Gödel's proofs were discussing an untyped logic, but according to Hewitt that is not an accurate representation of mathematics.

I don't think anyone in the space agrees with him, though, as far as I could tell from some cursory reading.

beltsazar
That Veritasium video is great! I'd also recommend Computerphile's 3-part video series (~10 minutes each), which tells the history of undecidability and the incompleteness theorem:

https://www.youtube.com/watch?v=nsZsd5qtbo4

https://www.youtube.com/watch?v=FK3kifY-geM

https://www.youtube.com/watch?v=lLWnd6-vSGo

This was one of a pair of recent Veritasium videos: How An Infinite Hotel Ran Out Of Room - https://youtu.be/OxGsU8oIWjY and Math Has a Fatal Flaw - https://youtu.be/HeQX2HjkcNo .
Hi YeGoblynQueenne,

I always read your logicy comments and posts--thank you!

> To clarify, that's incompleteness. Unsoundness is when an inference rule returns wrong results. Incompleteness is where it doesn't return all correct results. As you say, unification without an occurs check is unsound and Prolog's depth-first search is incomplete.

Thank you. Yes, I misspoke.

For anyone interested, this video gives a nice overview of soundness and completeness, in the context of logic: https://www.youtube.com/watch?v=HeQX2HjkcNo

> I wanted to ask, because you brought it up in the interview, how does minikanren (or minikanrens?) handle the occurs check? You said that you are interested in trying to push the relational aspect as far as possible, recognising that this gives up efficiency - can you quantify to what extent minikanren gives up efficiency and generally say a bit more about the trade-off between (if I understand it correctly) efficiency and relational (declarative?) purity?

miniKanren performs unification with the occurs check. This is one of the tradeoffs miniKanren makes: sound unification, but it can be expensive. The interleaving complete search is another tradeoff: the search uses more time and memory than does depth-first-search, in those cases in which DFS would be sufficient. Another tradeoff is that miniKanren avoids the use of non-logical / extra-logical operators. There are no cuts, projected variables, var tests, etc., in the pure parts of the language (which is the version I always use). If you want to perform arithmetic, you either have to encode arithmetic as a pure relation (as Oleg Kiselyov has done), or add a fully-relational constraint system for arithmetic to miniKanren (as we did in cKanren). Or, you can add a delayed goals mechanism, although this can lose completeness.
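For readers unfamiliar with the occurs check, here is a minimal sketch of sound unification (my own toy representation: logic variables as `Var` objects, compound terms as tuples, substitutions as dicts). The occurs check is the test that makes unification sound, at the extra cost Will mentions:

```python
class Var:
    """A logic variable, identified by object identity."""
    _count = 0
    def __init__(self, name=None):
        Var._count += 1
        self.name = name or f"_{Var._count}"

def walk(t, s):
    """Follow substitution s until t is a non-variable or an unbound var."""
    while isinstance(t, Var) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """True if variable v appears inside term t under substitution s."""
    t = walk(t, s)
    if t is v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, u, s) for u in t)
    return False

def unify(a, b, s):
    """Unify a and b under substitution s; return the extended
    substitution, or None on failure.

    The occurs check below is what makes this sound: without it,
    unify(x, ('f', x), {}) would build the infinite term f(f(f(...)))."""
    a, b = walk(a, s), walk(b, s)
    if a is b:
        return s
    if isinstance(a, Var):
        if occurs(a, b, s):
            return None  # occurs-check failure
        return {**s, a: b}
    if isinstance(b, Var):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return s if a == b else None
```

Without the `occurs` test, `unify(x, ('f', x), {})` would succeed and create a circular term; with it, unification correctly fails.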

There are lots of variants of miniKanren at the this point: alphaKanren (nominal logic programming); cKanren (constraints); probKanren (prototype probabilistic Kanren); dbKanren (Greg Rosenblatt's extension to handle graph database operations, used in mediKanren 2); etc. I hope we can integrate all of the features into a single implementation some day.

Cheers,

--Will

YeGoblynQueenne
>> I always read your logicy comments and posts--thank you!

You're welcome and thank you for your work on logic programming :)

Thank you also for the discussion of the trade-offs in minikanren. I appreciate the quest for declarative purity, although I'm a very impure individual myself (the dynamic database is evil and I'm going straight to hell :). In general, my opinion is that Prolog's approach is pragmatic, but it was designed in the 1970's when hardware was weaker. With modern hardware, there may be different options that better address the need for purity without sacrificing efficiency.

Modern Prolog interpreters generally have an option to enable the occurs check (for example, in Swi-Prolog there's an environment flag to enable it for all unifications) and there's also the ISO predicate unify_with_occurs_check/2, that can be used in meta-interpreters, I guess. I think this addresses the soundness concern.

Regarding incompleteness, some modern Prologs use SLG-resolution (a.k.a. tabling) to at least avoid the incompleteness resulting by infinite left-recursions. This still allows infinite right recursions but it's easy to get in such a situation with recursion, or even just iteration, in any Turing complete language. I feel that tabling goes a long way towards addressing incompleteness.

This leaves the issues of the cut and arithmetic. Personally, I'm very comfortable with the cut, although I recognise it's an issue, especially for beginners - and there's some standard uses of the cut that are almost mandatory (e.g. in the first recursive clause in a predicate definition with two recursive clauses). For declarative arithmetic, there are solid libraries for CLP over various integer or real domains (Swi-Prolog has a few, the work of Markus Triska, who often also comments here). I didn't know about Oleg Kiselyov's work, so I'll have to look it up because it sounds intriguing.

I think that with such modern additions (i.e. apart from the cut) Prolog moves closer to the declarative ideal without sacrificing its all-around usability as a general purpose language. What do you think?

Finally, I'd like to know more about minikanrens' search. I'll look around on the internet, but is there a source you would recommend?

will_byrd
> In general, my opinion is that Prolog's approach is pragmatic, but it was designed in the 1970's when hardware was weaker. With modern hardware, there may be different options that better address the need for purity without sacrificing efficiency.

I agree. We are investigating various approaches to improve the efficiency of the search, while retaining completeness, for example.

> Regarding incompleteness, some modern Prologs use SLG-resolution (a.k.a. tabling) to at least avoid the incompleteness resulting by infinite left-recursions. This still allows infinite right recursions but it's easy to get in such a situation with recursion, or even just iteration, in any Turing complete language. I feel that tabling goes a long way towards addressing incompleteness.

I agree that tabling is useful. Tabling is useful even with a complete search, since it can cut off certain cases in which there are infinitely many answers of similar form, and can change the complexity class of queries.

A section of my dissertation discusses adding tabling to miniKanren, based on code Ramana Kumar and I worked on, after discussions and readings with Dan Friedman:

https://github.com/webyrd/dissertation-single-spaced

That tabling implementation is very tricky, though, and doesn't support constraints other than unification. It's time to revisit and update tabling in miniKanren, I think.

> Modern Prolog interpreters generally have an option to enable the occurs check (for example, in Swi-Prolog there's an environment flag to enable it for all unifications) and there's also the ISO predicate unify_with_occurs_check/2, that can be used in meta-interpreters, I guess. I think this addresses the soundness concern.

> This leaves the issues of the cut and arithmetic. Personally, I'm very comfortable with the cut, although I recognise it's an issue, especially for beginners - and there's some standard uses of the cut that are almost mandatory (e.g. in the first recursive clause in a predicate definition with two recursive clauses). For declarative arithmetic, there are solid libraries for CLP over various integer or real domains (Swi-Prolog has a few, the work of Markus Triska, who often also comments here). I didn't know about Oleg Kiselyov's work, so I'll have to look it up because it sounds intriguing.

> I think that with such modern additions (i.e. apart from the cut) Prolog moves closer to the declarative ideal without sacrificing its all-around usability as a general purpose language. What do you think?

I also used to think that Prolog was moving closer to the declarative ideal. And I suspect that most expert Prolog programmers believe similarly.

However, my attitude has changed in the past few years, after seeing Prolog programmers try to implement the relational interpreter from the 2017 ICFP pearl in Prolog.

The problem is one of composing the pure features needed for expressing a program as complicated as the relational interpreter. (Of course, the relational interpreter is designed to be as short and simple as possible, but it is still more complicated than the pure relations you will find in a Prolog textbook, for example.)

In theory, you can easily combine unification with the occurs check, SLG-resolution, disequality/disunification constraints, type constraints, etc., and avoid all impure uses of extra-logical features. In practice, I've seen people struggle to combine these features within a single Prolog implementation. In fact, after multiple attempts from multiple Prolog experts, I have yet to see a version of the relational Scheme interpreter in Prolog that can match the behavior of the miniKanren version.

I'm not claiming that someone couldn't implement a full relational interpreter in Prolog. Obviously they could, since Prolog is Turing-complete. I'm only saying that the default choices of Prolog, plus the intricacies of combining multiple pure features (and leaving out standard and useful non-relational features), seems to make writing a relational interpreter much harder than I would have expected.

If you are up for a challenge, I'd be happy to work on a Prolog version of the Scheme interpreter with you!

Based on what I've seen from Prolog experts trying to reproduce the relational interpreter, and conversations with Prolog experts, my current thinking is that Prolog is actually not a good language for relational programming. Too many of the defaults must be adjusted or overridden, too many standard techniques and idioms must be abandoned, and too many potentially incompatible libraries must be composed in order to get everything "just right."

The problem isn't that relational programming in Prolog isn't possible. The problem is that it requires enough work and non-standard techniques that in practice people don't do it.

At least, that is what I observe as an outsider.

If there is an easy way to combine features within a single FOSS Prolog implementation to allow for the types of relational programming we do in miniKanren, I'd be delighted to see it in action. I'd also be delighted to write a paper about it with you! :)

> Finally, I'd like to know more about minikanrens' search. I'll look around on the internet, but is there a source you would recommend?

I would recommend reading Jason Hemann and Dan Friedman's lovely little paper on microKanren, which gives a tutorial reconstruction of miniKanren's complete interleaving search, starting from depth-first search:

http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf

https://github.com/jasonhemann/microKanren

The standard exercise for learning microKanren is to translate the resulting ~50 lines of Scheme code into the language of your choice.
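The heart of that reconstruction is the fair, interleaving merge of answer streams. A rough Python analogue using generators (my own sketch, not the microKanren code itself) shows why interleaving is complete where depth-first search is not:

```python
from itertools import islice

def interleave(s1, s2):
    """Fair merge of two (possibly infinite) answer streams.

    DFS would exhaust s1 before ever touching s2; interleaving
    alternates between the streams, so answers from s2 appear even
    when s1 is infinite."""
    while True:
        try:
            yield next(s1)
        except StopIteration:
            yield from s2  # s1 finished; drain the other stream
            return
        s1, s2 = s2, s1  # swap: next answer comes from the other stream

def ones():
    """An infinite stream of answers."""
    while True:
        yield 1

def twos_then_stop():
    """A finite stream of answers."""
    yield 2
    yield 2

# list(islice(interleave(ones(), twos_then_stop()), 6)) -> [1, 2, 1, 2, 1, 1]
```

A DFS-style `chain(ones(), twos_then_stop())` would never produce a 2, because the first stream is infinite; the interleaved merge yields answers from both.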

You might also like this more technical paper on the `LogicT` monad:

Backtracking, interleaving, and terminating monad transformers: (functional pearl) Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, and Amr Sabry. Proceedings of the Tenth ACM SIGPLAN International Conference on Functional Programming ICFP 2005, Tallinn, Estonia

https://dl.acm.org/doi/10.1145/1086365.1086390

http://okmij.org/ftp/papers/LogicT.pdf

Cheers,

--Will

YeGoblynQueenne
>> If you are up for a challenge, I'd be happy to work on a Prolog version of the Scheme interpreter with you!

Oh gosh, don't do that to me :) I just finished submitting my second paper and I have to write my thesis, plus another couple of side projects already and a hell of a lot of work on my Louise ILP system. Maybe we can talk about this again after March 2022? When I should be finished with my thesis?

Note however I may not be the best person for this job. I'm serious when I say I'm "impure". I mean that I'm not very interested in declarative purity. For me it's more like an ideal goal than something I absolutely feel I need. I suppose I've become all too comfortable with Prolog's, er, pragmatic hacks, and I don't feel I need them gone.

That said, it's certainly an interesting challenge :)

>> The problem isn't that relational programming in Prolog isn't possible. The problem is that it requires enough work and non-standard techniques that in practice people don't do it.

I hope I don't misunderstand what you mean by relational programming. I think you mean pure logic programming, _not_ like implemented in Prolog, but as it's described, say, in Lloyd? Then yes, I agree. It is often infinitely easier to use Prolog's impure facilities, like the dynamic db and so on, to complete a complex piece of code. I guess the quintessential example is findall/3 and friends- impossible to implement in pure Prolog, without a dynamic db (er, far as I know).

In general, I find myself resorting to using impure features when I can't find an efficient way to write a program without them. For instance, sometimes I want to use random access data structures over linked lists (so, Prolog "terms" with setarg/3 and arg/3, rather than list traversal). Other times it's just so much easier to bang together a findall/3 with an arg/3 counter than to write yet another recursive skeleton with 4 accumulators - and an interface predicate to hide them.

To be honest, a purely declarative language that took those impure but convenient hacks away would take me some time to get used to. I don't know that I want to give those conveniences up. They're there for a reason- a tradeoff of purity for efficiency and usability. I, personally, find that tradeoff balanced and while I'm not against going the other way (sacrificing efficiency and ease of use for purity), as a research goal, I'm not going to go out of my way to try and achieve that.

>> I'd also be delighted to write a paper about it with you! :)

I am very flattered :) But I'm letting you down, ain't I? I'll bring up Markus Triska again, as someone I understand to be a dedicated Prolog programmer, also dedicated to declarative purity. Maybe he'd be up to it.

Thank you for the recommended reading on minikanren's search!

will_byrd
> Oh gosh, don't do that to me :) I just finished submitting my second paper and I have to write my thesis, plus another couple of side projects already and a hell of a lot of work on my Louise ILP system. Maybe we can talk about this again after March 2022? When I should be finished with my thesis?

Very nice! ILP is extremely interesting, especially for program synthesis.

I'd be happy to talk again after your thesis!

> I hope I don't misunderstand what you mean by relational programming. I think you mean pure logic programming, _not_ like implemented in Prolog, but as it's described, say, in Lloyd?

Correct. In particular, I mean logic programming in a system with the following properties:

1) sound unification

2) complete search

3) all predicates work in all-modes, all the time

4) [optional, but extremely useful] additional relational constraints--such as disequality constraints--that work in all modes, support limited forms of negation, and can express some otherwise infinite sets of constraints finitely

> In general, I find myself resorting to using impure features when I can't find an efficient way to write a program without them. For instance, sometimes I want to use random access data structures over linked lists (so, Prolog "terms" with setarg/3 and arg/3, rather than list traversal). Other times it's just so much easier to bang together a findall/3 with an arg/3 counter than to write yet another recursive skeleton with 4 accumulators - and an interface predicate to hide them.

I, too, have played in the muddy world of impure-but-convenient / not-quite-logic programming. This world is similar in spirit to almost-functional programming, with the occasional mutation and unconstrained effect thrown in. Mutation is powerful and useful. Unconstrained effects are powerful and useful.

On the other hand, even a single use of these features can destroy a host of program invariants that could be used for reasoning, analysis, or optimization. Or for running programs "backwards." Or for semantics-guided synthesis...

The lesson I hope Prolog programmers will take from miniKanren is that they are giving up something very powerful when using these extra-logical operators, that these operators destroy the ability to soundly run predicates in all modes, and that sophisticated all-modes relations can perform computations that would be difficult to express in other ways.

> To be honest, a purely declarative language that took those impure but convenient hacks away would take me some time to get used to. I don't know that I want to give those conveniences up.

These features are very difficult to give up, similar to the way mutation and uncontrolled effects are difficult to give up when writing programs in a lazy functional language. There is no doubt that it requires a total change in programming style, giving up convenient and powerful idioms, and often having to rethink approaches to efficiency.

I remember the first time I heard about functional programming. After programming for 20 years, I had to re-learn the notion of 'variable'. (Just like I had to re-learn the notion of 'variable' when I learned logic programming.)

This is the main reason I think Prolog isn't suitable for relational (pure logic) programming. The language design, compilers, libraries, the WAM, books and tutorials, research papers, publicly available code, etc., assume non-relationality by default. If you want to program purely in Prolog, you have to know which parts of the language to avoid (almost everything!) and which libraries to avoid (most of them, as far as I can tell), in addition to knowing exactly which combination of optional features to use to enable relational programming. You won't find this described in any Prolog book or tutorial that I have seen, beyond the famous `append` example, so you are on your own to develop a style of programming that is at odds with the ecosystem in which you work. It's the same feeling I get whenever I try to write purely functional code in a language in which mutation is the default, and in which almost all libraries use mutation.

Relational programming requires a different point of view, which is usually at odds with standard Prolog practice.

> They're there for a reason- a tradeoff of purity for efficiency and usability. I, personally, find that tradeoff balanced and while I'm not against going the other way (sacrificing efficiency and ease of use for purity), as a research goal, I'm not going to go out of my way to try and achieve that.

I agree--it's all tradeoffs! And I agree that it is such a different way of programming, with a totally different set of idioms, advantages and disadvantages, performance issues, etc., that you would have to go far out of your way to program in the purely relational style. And, of course, programming relationally may provide little or no benefit for what you want to accomplish.

YeGoblynQueenne
>> 3) all predicates work in all-modes, all the time

I think this is the feature that is the most sorely missed in Prolog programming and, as you say, it is possible to have it, but only if one is prepared to jump through no end of hoops. Sometimes it's just not possible with any reasonable amount of effort and any sensible amount of coding. This feature is also the one most characteristic of relations, rather than functions, or the demi-functions that Prolog predicate definitions end up as (i.e. with more or less standard inputs and outputs, a.k.a. modes).

I think I detect a hint of frustration in your comment, perhaps after one too many encounters with recalcitrant Prolog programmers, unwilling to give up their convenient hacks :)

Well, these days I can wear another hat, that of the ILP researcher, and there is an interesting use of modes in ILP: they are often used to reduce the size of the hypothesis search space. Because, of course, the hypothesis space for the program append(+,+,-) is much smaller than the hypothesis space for the program append(?,?,?)! Aleph, probably the best-known ILP system of all time (and its ancestor, Progol) has a special syntax of "mode declarations" to determine input and output modes, for this reason.

Interestingly, the more recent approach (attention: I'm about to talk about my PhD work :) of Meta-Interpretive Learning (MIL) does away with specific modes and Aleph and Progol's (and many other systems') mode declarations. Instead, MIL encodes inductive bias by means of second-order datalog clauses, the "metarules" (e.g. ∃P,Q,R,X ∀x,y,z: P(x,y) ← Q(x,z), R(z,y,X) is a metarule with second-order variables P,Q,R existentially quantified over the set of predicates, first-order variable X existentially quantified over the set of constants, and first-order variables x,y,z universally quantified over constants). Metarules do not specify input and output modes - they are, after all, datalog. This would normally blow up the hypothesis space to unsearchable proportions, but this is managed in various ways - and an actual search of the hypothesis space (i.e. the set of sets of clauses) can be avoided altogether, for a polynomial-time learning approach.

But I digress. What I mean to say is that this theoretically mode-free learning has so far only been implemented in Prolog [1][2][3] or ASP [4]. ASP is purely declarative, but incomplete in its own way (because the Herbrand base must be fully ground, which is impossible to do for infinite Herbrand bases) and Prolog has the mode problem we discussed. So while we have a powerful learning approach, with a nice theoretical basis, in practice we always implement it in a less than satisfying manner. And, I dare say, I've seen the result in annoying "edge cases". So, after our discussion, I wonder if a miniKanren implementation of MIL would overcome the limitations of the Prolog and ASP implementations - even at the cost of expensive occurs checks etc. I mean, at least if one gives up efficiency, there should be an equally important reward, and an in-practice realisation of the in-principle soundness and completeness of the approach would probably be that. In my opinion anyway. At the very least it's a more compelling reason to re-implement a system in a new language than "well, it's an implementation in a new language".

Food for thought, post-thesis :)

____________

[1] Metagol:

https://github.com/metagol/metagol

(Try to ignore the scary header)

[2] Thelma:

https://github.com/stassa/thelma

[3] Louise:

https://github.com/stassa/louise

[4] Hexmil:

https://arxiv.org/abs/1805.00068

will_byrd
The Gödel programming language from Patricia Hill and John W. Lloyd is, to me, a good example of what a Prology language would look like, if designed from the ground-up for relational programming.

I also suspect lambdaProlog would make for a nice foundation for relational programming, if the search were improved.

I'm not sure if it is what you want, but Intuitionism [1] is one area that challenges modern fashions in mathematical thinking. It suffered greatly under the formalist approach led by David Hilbert and still has little mainstream support despite Gödel and his incompleteness proofs. Veritasium's latest video on Gödel's incompleteness [2] gives a pretty fair account of how we settled on the current fashionable foundations of math (including nods to Cantor and Hilbert). For a more formal history there is a book, "The Philosophy of Set Theory" [3], that sketches out how we got to where we are.

I have always been unsatisfied with the current foundations of math and their obtuse basis in Set Theory. Although I must admit, Category Theory has alleviated that quite a lot, especially with the relaxing of equivalence compared to equality.

1. https://en.wikipedia.org/wiki/Intuitionism

2. https://www.youtube.com/watch?v=HeQX2HjkcNo

3. https://www.maa.org/press/maa-reviews/the-philosophy-of-set-...

eigenket
I am somewhat familiar with the standard foundation of maths in set theory, and slightly less familiar with the program to ground maths in Category theory (although I do know a fair bit of category theory).

Maybe I am biased, but I do not find the category theory foundations any less obtuse than the standard formulations. Like, it's pretty cool that you can do this stuff with category theory, but I think there is a reason that set theory was done first.

reggieband
I didn't mean to imply that Category Theory addressed the obtuseness of Set Theory. Rather, I was alluding to work in Category Theory that helps to redefine our ideas of equality and equivalence. A discussion of that is available in Quanta Magazine [1].

1. https://www.quantamagazine.org/with-category-theory-mathemat...

May 25, 2021 · 38 points, 1 comments · submitted by jdkee
civilized
> No one knows what those [true but unprovable] statements are exactly

Not quite - we do know of some because Gödel's Incompleteness Theorem constructs them.

DerekL
Dupe: https://news.ycombinator.com/item?id=27248170
May 22, 2021 · 39 points, 8 comments · submitted by xbmcuser
CheezeIt
It’s 33 minutes long and it starts off with a digression into Conway’s game of life. These math YouTube channels have the same vibe as those creepy YouTube videos for toddlers.
crowf
veritasium used to be a really good channel, but a few years ago he tasted what it was like to have a viral video. since then he has basically admitted (somewhere near the end of https://m.youtube.com/watch?v=fHsa9DqmId8 ) that from then on he will be making clickbait-style videos
MattRix
Clickbait is more about titles and thumbnails and less about the content itself. The content of this video is hardly what I would consider clickbait.
MattRix
Oh come on, it’s a very well made video and it made perfect sense to start with Game of Life.
CheezeIt
It’s a big bowl of sugar.
brzozowski
There are many bizarre visualizations that are solely cosmetic and serve no purpose other than to attract the eyes. I've noticed the same pattern with many other videos churned out by popular science channels. The visual editing is delirious and the narration is devoid of much intellectual substance. Hard to pinpoint exactly what's wrong with it, it's just extremely bland and borderline uncanny valley.
MattRix
That’s because those things make the video more interesting to watch. It’s targeted at a more general audience, not at scientists.
elzbardico
This video is targeted at the general public, not computer scientists or mathematicians. I think it does a great job of explaining complex concepts to lay people and makes mathematics feel accessible and interesting. I can imagine my teenage step-daughter watching this video without getting bored to death, and seeing math as something interesting - a task in which I've been failing miserably so far.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.