HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
The Man Who Revolutionized Computer Science With Math

Quanta Magazine · Youtube · 238 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Quanta Magazine's video "The Man Who Revolutionized Computer Science With Math".
Youtube Summary
Leslie Lamport revolutionized how computers talk to each other. The Turing Award-winning computer scientist pioneered the field of distributed systems, where multiple components on different networks coordinate to achieve a common objective. (Internet searches, cloud computing and artificial intelligence all involve orchestrating legions of powerful computing machines to work together.)

In the early 1980s, Lamport also created LaTeX, a document preparation system that provides sophisticated ways to typeset complex formulas and format scientific documents. In 1989, Lamport invented Paxos, a “consensus algorithm” that allows multiple computers to execute complex tasks; without it, modern computing could not exist. He’s also brought more attention to a handful of problems, giving them distinctive names like the bakery algorithm and the Byzantine Generals Problem.

Lamport’s work since the 1990s has focused on “formal verification,” the use of mathematical proofs to verify the correctness of software and hardware systems. Notably, he created a “specification language” called TLA+ (for Temporal Logic of Actions), which employs the precise language of mathematics to prevent bugs and avoid design flaws.

Read more at Quanta Magazine: https://www.quantamagazine.org/bringing-mathematical-perfection-to-software-20220516/

Quanta Magazine is an editorially independent publication supported by the Simons Foundation.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
May 24, 2022 · 229 points, 91 comments · submitted by chat
uvdn7
There are no distributed systems without Lamport. His "Time, Clocks and the Ordering of Events" paper is to distributed systems what Ted Codd's relational model paper is to databases (which is also math!). His elegant analogies between special relativity and space-time diagrams, reasoning about distributed systems as state machines, and TLA+; each one is breathtaking.

Paxos is so beautiful. It's a piece of art. It's far more elegant and beautiful than Raft (as it solves the smallest consensus problem, a single value, instead of a log). I will die on this hill.
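
For anyone who hasn't read the Time, Clocks paper: the core mechanism fits in a few lines. A sketch in Python (my own illustration of the paper's rules, not Lamport's code):

    # Illustration only: a Lamport logical clock, the heart of "Time,
    # Clocks, and the Ordering of Events in a Distributed System".
    class LamportClock:
        def __init__(self):
            self.time = 0

        def local_event(self):
            # IR1: tick between successive local events.
            self.time += 1
            return self.time

        def send(self):
            # A message carries the sender's timestamp.
            self.time += 1
            return self.time

        def receive(self, msg_time):
            # IR2: jump past the timestamp on the incoming message.
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t = a.send()     # a's clock: 1
    b.receive(t)     # b's clock: 2; the send is ordered before the receive

Any total order consistent with these timestamps respects the happened-before relation, which is the paper's punchline.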

butterisgood
Viewstamped Replication (the other VR), is also very lovely. https://pmg.csail.mit.edu/papers/vr-revisited.pdf
_benedict
I think you maybe overstate it, but I agree Paxos is nicer than Raft. The papers in which he outlines it are awful, though.

Raft is more understandable because they wrote a paper that was easy to understand. You are much better off reading Heidi Howard than Lamport to understand Paxos.

gralx
Lamport's early Paxos papers are interesting for a lot of reasons (I particularly like his simple textbook definition of equivalence classes in "Generalized Consensus and Paxos" (2005)), but I agree that the Paxos algorithm gets lost in all the fluff.

In 2016 some researchers at Stony Brook published the full (Multi-) Paxos algorithm using TLA+, including a couple of nice extras:

https://arxiv.org/abs/1606.01387

The spec is succinct enough to commit to memory and way easier to comprehend than Lamport's prose descriptions (Lamport has never specified the Paxos algorithm in TLA+, although you can find his TLA+ spec for Paxos consensus). The paper also includes a mechanically checked proof and an interesting overview of other related work.
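
To get a feel for how small the core algorithm is once the prose is stripped away, here is a rough Python sketch of just the single-decree acceptor rules (my paraphrase of the standard algorithm, not the Stony Brook spec):

    # Illustration only: the two acceptor rules of single-decree Paxos.
    class Acceptor:
        def __init__(self):
            self.promised = -1      # highest ballot number promised
            self.accepted = None    # (ballot, value) last accepted, if any

        def on_prepare(self, ballot):
            # Phase 1b: promise to ignore lower ballots, and report any
            # value already accepted so the proposer must adopt it.
            if ballot > self.promised:
                self.promised = ballot
                return ("promise", self.accepted)
            return ("nack", None)

        def on_accept(self, ballot, value):
            # Phase 2b: accept unless a higher ballot has been promised.
            if ballot >= self.promised:
                self.promised = ballot
                self.accepted = (ballot, value)
                return ("accepted", ballot)
            return ("nack", None)

Everything else (proposers choosing ballots, quorums, learners) is bookkeeping around these two rules.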

LAC-Tech
I have that paper printed out on my desk :) Won't pretend I understand it 100% but it taught me a lot about distributed systems.
fizwhiz
TIL Edgar Codd went by Ted.
frogulis
I don't think I'd even heard his name without the "F." initial in the middle either.
adolph
The TLA+ Lectures are awesome:

https://www.youtube.com/watch?v=p54W-XOIEF8

[suddenly in a clown costume] "What kind of clown am I, claiming I can make you think better? Why should you pay attention to me?"

[in a suit] "This is not the time to be modest. I have done seminal research in the theory of concurrent and distributed systems for which I won the Turing award. You can stop the video now and look me up on the web." [long pause]

gralx
Easter egg: at one point you can hear him typing on an IBM buckling spring keyboard. Something for the keyboard enthusiasts :)
_448
At the end of explaining the "bakery algorithm", he says, "I am proud that I stumbled on it." He doesn't say "I invented it", "I came up with it", "I wrote it", etc.

In my career I have seen that people who are true geniuses are also very humble!

talaketu
Indeed - though I think this is a mathematical philosophy about theories. "Genius" is a nice word choice to emphasize your point - it refers to some kind of distinct spirit that happens to inspire the person, not the person themself. (A little like writing vs typing.)
andrepd
Absolutely. Conversely, I find that people who are exceedingly keen on self-aggrandisement are usually mediocre.
clpm4j
100% agreed. Arrogance is a sure sign of brittleness.
lifefeed
There's a list of his papers with little notes by him on every one at https://lamport.azurewebsites.net/pubs/pubs.html . His casual notes are themselves an absolute education.

My favorite is on "Time, Clocks and the Ordering of Events in a Distributed System", where he applies the lessons of special relativity to understand computers, and he says:

> Jim Gray once told me that he had heard two different opinions of this paper: that it's trivial and that it's brilliant. I can't argue with the former, and I am disinclined to argue with the latter.

lanstin
I think if one thoroughly understands this paper then all the rest of distributed systems theory makes sense.
nanna
Leslie Lamport built Knuth's TeX into a user-friendly version, to which he prefixed the first two letters of his surname: LaTeX. He wrote an excellent, highly accessible guide, LaTeX: A Document Preparation System, which I wish I had read before setting off on a thousand random web pages.
globular-toast
Why did I never notice the "La" came from his name? Most people pronounce it like the rubber you get from a tree but my PhD supervisor pronounced it more like "Lah-Tech". I guess that is the correct pronunciation!
contingencies
This paper is a case study of program evolution. The author kept track of all changes made to TeX during a period of ten years, including the changes made when the original program was first debugged in 1978. The log book of these errors, numbering more than 850 items, appears as an appendix to this paper. The errors have been classified into fifteen categories for purposes of analysis.

I came to the conclusion that the designer of a new system must not only be the implementor and the first large-scale user; the designer should also write the first user manual. - Donald Knuth, 'The Errors of TeX' (1989) https://yurichev.com/mirrors/knuth1989.pdf (4.8MB)

... via https://github.com/globalcitizen/taoup

dboreham
His later work prompted me to learn Order Theory, which has turned out to be useful for all sorts of things. Also quite closely related to Category Theory which I wouldn't have had much chance of grokking without first understanding Order Theory, I suspect.

I also used LaTeX heavily in the 80s, so I was surprised to see him pop up as a genius of distributed systems later (although that work was published much earlier, it didn't get much exposure until the 90s). Like, "oh, that guy must be _really_ smart to excel in two quite different fields".

mhh__
He wrote some interesting stuff on mathematics and physics in the 60s too but it's all lost to time apparently.
SkyMarshal
Are there any good resources you recommend for learning Order Theory?
manu3000
An introductory blog post: https://matt.might.net/articles/partial-orders/
dboreham
That's a great reference which I hadn't seen before. Another thing to note is that it made more sense to me when I realized that (all) the states of a replicated state machine can be considered as points in a lattice. This I think was Lamport's insight. Once you make that connection then you can reason about replicated state machines in terms of lattices and posets, allowing for example proof of convergence or otherwise.
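
A toy example of that lattice view (my own illustration, not Lamport's): version vectors under the componentwise order form a lattice, with join as pointwise max, and the join being commutative, associative and idempotent is exactly what convergence arguments lean on:

    # Version vectors as points in a lattice.
    def leq(u, v):
        # Partial order: u <= v iff every component of u is <= v's.
        return all(u.get(k, 0) <= v.get(k, 0) for k in u)

    def join(u, v):
        # Least upper bound: pointwise max.
        keys = set(u) | set(v)
        return {k: max(u.get(k, 0), v.get(k, 0)) for k in keys}

    a = {"r1": 3, "r2": 1}
    b = {"r1": 2, "r2": 4}
    print(leq(a, b), leq(b, a))   # False False: incomparable states
    print(join(a, b))             # componentwise max: r1 -> 3, r2 -> 4
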
dboreham
That's a good question. I don't think I found any useful books, although surely there are some out there. I believe I mostly read Wikipedia articles and posts on Math Stack Exchange,

e.g.

https://en.wikipedia.org/wiki/Join_and_meet

https://math.stackexchange.com/questions/1775926/when-is-a-j...

jamesblonde
Great video. I met Leslie once; I sat beside him on the bus on the way to a conference around 8 years ago. He wasn't the chattiest, but if you bring up his work, he likes to talk. I think he was just over 70 years old, but still incredibly sharp. At the time Microsoft Research was shutting down its valley office, but they would still let him come in there - last one to put the lights out (metaphorically, for computer science research at the big IT companies). Nowadays, he couldn't do the research work he did there and at other places at any big IT company - it's R&D, with the emphasis on "D".
IMTDb
> Nowadays, he couldn't do the research work he did there and at other places at any big IT company - it's R&D, with the emphasis on "D".

I, on the other hand, am amazed by the progress being made today, and by the research papers released on an almost weekly basis by companies like Google, Microsoft and co. Just look at the language and image models being released, the progress on computer vision, etc.

I played with OpenAI's GPT-3 playground and I am blown away by the results. It's not perfect, but it's orders of magnitude better than what we had easy access to just a few months ago. I have started using it in my day-to-day life for some specific tasks, and I can't wait to see the next versions. I can't even begin to fathom the amazing products people are going to build around it in the next decade.

It's absolutely true that these models/research serve the purposes of the companies building them... but even Leslie says in the video that he spent his entire career in industry because that's where he saw many interesting challenges, and that his research was a means to an end.

R&D is still alive and kicking, and is moving faster than ever. You might just be looking at the past with rose-tinted glasses, and needlessly undervaluing the present.

vfclists
AI and ML are not true computer science, because no one knows how these models work, not even the developers.
mirker
People knew how AI worked when AI meant “graph search.” But now solving chess is not AI and linear (and nonlinear) regression is…
OkayPhysicist
The problem is the usage of the label 'AI'. There are basically 4 uses of the label: 'General AI' (popular among layfolk), 'stuff computers can't do yet' (popular among developers), 'stuff we just figured out how to do' (popular among salespeople), and '2nd order solutions' (popular among academics).
anthk
Mathematicians do.

https://en.wikipedia.org/wiki/Probabilistic_reasoning

https://en.wikipedia.org/wiki/Fuzzy_logic

ryanklee
Why does epistemic purity matter? The results are astounding.
avgcorrection
> I think he was just over 70 years old, but still incredibly sharp.

Nice...

I don’t think someone at his level becomes dull at such a young senior age.

penguin_booze
A friend used to joke that "we're doing R&D, too: run and debug".
peppertree
I believe VMware "adopted" Microsoft's research team, but that's the last I heard of them. These days the most interesting corporate research happens at Google, Nvidia, and OpenAI. I guess the forefront of research has moved on to ML and many old-school researchers got left behind.
metadat
There's lots of other research happening all over, but gets little attention probably due to non-existent or otherwise poor marketing beyond publishing papers.
InefficientRed
There is tons of formal methods research happening in industry. Way more than in the days of Microsoft Research Silicon Valley.
shadowfox
Out of curiosity, which companies do this sort of research?
mirker
AWS had a paper at SOSP on verifying parts of S3:

https://assets.amazon.science/07/6c/81bfc2c243249a8b8b65cc21...

cjbgkagh
At MSFT the corporate politics killed it, and some of the old-school researchers were a big part of that. Chris Bishop applied the finishing blow. The best applied research work was shipped off to China, so even if you have good ideas you won't get the budget to try them in the US or EU. It's difficult to fix that; it's easier just to leave.
rhplus
TLA+ and other formal language research is pursued by the RISE group in Microsoft Research:

https://www.microsoft.com/en-us/research/group/research-soft...

RcouF1uZ4gsC
I think one of the things that helped was his ability to come up with very catchy explanations and names. "Paxos" and "Byzantine Generals" have great memetic power versus some boring technical name.
User23
In my opinion Lamport is the greatest computing scientist since Dijkstra. His work on the win and sin predicate transformers is grossly neglected and under appreciated. Dijkstra was the first to formalize interrupts and nondeterminism, but Lamport took it to the next level.
carapace
https://lamport.azurewebsites.net/pubs/lamport-win.pdf
rramadass
Right, this is also my viewpoint! We need scientists like them to beat the importance of mathematical thinking into our lazy brains. No wishy-washy compromises, just full focus on the use of correct logic.

Now the next step is for somebody who understands their writings to teach us unwashed masses :-(

it4rb
Coincidentally, I'm also reading about Lamport signatures (https://blog.cryptographyengineering.com/2018/04/07/hash-bas...).

It's really interesting because it only uses a hash function and doesn't rely on a trapdoor function! I think quite a few people (me included) only learn about digital signatures after public-key cryptography, so it's quite a surprise to learn about Lamport signatures. Also, IIUC, because it doesn't rely on a trapdoor function, it's also resistant to quantum computers in the future.
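
A rough sketch of the scheme in Python (a toy for intuition only; real deployments layer Merkle trees on top, and each key pair must never sign more than one message):

    import hashlib, secrets

    def H(b):
        return hashlib.sha256(b).digest()

    def keygen():
        # Secret key: 256 pairs of random strings, one pair per digest bit.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
              for _ in range(256)]
        # Public key: the hash of every secret string.
        pk = [(H(s0), H(s1)) for s0, s1 in sk]
        return sk, pk

    def bits(msg):
        d = H(msg)
        return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(sk, msg):
        # Reveal one secret per bit of H(msg). One-time use only!
        return [sk[i][b] for i, b in enumerate(bits(msg))]

    def verify(pk, msg, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

    sk, pk = keygen()
    sig = sign(sk, b"hello")
    print(verify(pk, b"hello", sig))   # True
    print(verify(pk, b"hullo", sig))   # False, with overwhelming probability

Security rests only on the hash being one-way, which is why it holds up against quantum attackers better than trapdoor-based schemes.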

user3939382
I once carefully read his bio and accomplishments and have felt like a failure ever since.
bombcar
Reflect that for untold many it’s “oh hey the LaTeX guy did some math”.
andrewstuart2
Pretty sure the untold masses actually think you're weird for oddly capitalizing the type of rubber used for condoms and fetish wear.
math-dev
Something related: https://www.quantamagazine.org/computing-expert-says-program...
SkyMarshal
Money quote: "When you write an algorithm, you need to have a proof that it's correct. An algorithm without a proof is conjecture, not a theorem."
newsoul
He says that computer science undergraduates should learn mathematics properly. It is maths that makes a programmer out of a coder.

I am not sure what he means by maths, though. Exactly which topics?

Animats
> He says that computer science undergraduates should learn mathematics properly. It is maths that makes a programmer out of a coder.

That's very Dijkstra. Read Dijkstra's "A Discipline of Programming" (1976). Programming is hard and programmers should suffer.

The trouble is, suffering doesn't scale. Academic computer science was a tiny field before the mid-1980s.

anthk
Again, read SICP. The best environment for that is to install Emacs and then Guile, and run:

Alt-x package-install (press Enter) geiser-guile

Alt-x package-install (press Enter) sicp

Alt-x geiser

Alt-x info (press Enter), then Ctrl-s SICP (press Enter).

(Press Ctrl-x b to switch between the buffers, or just use the menu entry in Emacs.)

triska
One of the most interesting results I found in Leslie Lamport's papers is Buridan's Principle:

A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.

Quoting from https://lamport.azurewebsites.net/pubs/buridan.pdf:

"The significance of Buridan’s Principle lies in its warning that decisions may, in rare circumstances, take much longer than expected. Before the problem was recognized by computer designers, some computer systems probably failed regularly (perhaps once or twice a week) because arbiters took longer than expected to reach a decision. Real accidents may occur because people cannot decide in time which of two alternative actions to take, even though either would prevent the accident. Although Buridan’s Principle implies that the possibility of such an accident cannot be eliminated, awareness of the problem could lead to methods for reducing its probability."

In the accompanying notes at https://lamport.azurewebsites.net/pubs/pubs.html, Lamport states:

The four reviews ranged from "This well-written paper is of major philosophical importance" to "This may be an elaborate joke." One of the other reviews was more mildly positive, and the fourth said simply "My feeling is that it is rather superficial." The paper was rejected.

lisper
Why is an analog-to-digital converter not a counterexample? For that matter, why is a comparator (a one-bit ADC) not a counterexample? Or, for that matter, a wire tied to ground? Or a 3x5 card with the word "YES" printed on it?
kazinator
I think it's because these devices do not always make the correct decision between two values.

Lamport's paper covers this in the interrupt example: a computer must decide whether an interrupt line is signaled or not in a finite amount of time (the next instruction). The interrupt is continuous: it passes through some halfway voltage level like 2.5 between 0 and 5, and so it can be called the wrong way.

Basically the whole result is a restatement of real numbers not being computable.

The discrete sample value you get from an ADC is not based on knowing the underlying real number absolutely and then choosing the nearest value. It's a time-limited process of discovering the value partially. Since the value is only partially known, it is not known whether the reported sample value is the closest one.

EGreg
I emailed back and forth with Leslie a lot about this when I first read his paper. There is an earlier analysis of the same phenomenon by different people (the arbiter problem), which I discovered later.

At first, I had trouble believing it, and tried to find counterexamples, until I realized that the composition of continuous functions is continuous. I still wonder sometimes whether an infinite composition of continuous functions has to be continuous (the limit of a sequence) if it represents some real-world evolving process. Can it produce a discontinuity in the limit?

Anyway, Buridan's principle informs our consensus process in Intercoin Protocol [1] [2] - it is why the consensus groups are always of a bounded, finite size, so we don't approach the errors in Buridan's principle (I think these are also called unstable problems).

1. Beyond blockchains https://community.intercoin.org/t/intercoin-technology-conse...

2. Experimental improvements https://community.intercoin.org/t/experimental-improvements-...

Anyone who reads the above … I would be very interested in your feedback on our forum.

suture
Let f(x) = x+1. Then f composed with itself n times is the function x+n, so for all x the repeated compositions of f tend to infinity. Thus the infinite composition of continuous functions may not even be a function.

You might be interested to read this though:

https://en.wikipedia.org/wiki/Infinite_compositions_of_analy...

raegis
> I still wonder sometimes whether an infinite composition of continuous functions has to be continuous (limit of a sequence) ...

No, as you expected. Take f(x) = x^2 on [0,1]. Then f(f(...f(x)...)) converges to g(x) = { 0, if 0 <= x < 1; 1, if x = 1 }.
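
A quick numeric check of that limit (a throwaway illustration):

    # Pointwise limits of continuous maps can be discontinuous:
    # iterating x -> x^2 sends everything below 1 to 0, but fixes 1.
    def iterate_square(x, n):
        for _ in range(n):
            x = x * x
        return x

    for x0 in (0.9, 0.999, 0.999999, 1.0):
        print(x0, "->", iterate_square(x0, 50))
    # Everything strictly below 1 collapses to 0.0; exactly 1.0 stays 1.0.

Every finite composition is continuous, yet the limit function g jumps at 1.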

EGreg
Right. So now the question is - are you able to give a physical example of continuous physical processes which can create a discontinuous output from a continuous initial state, in a finite amount of time? I think this is a sort of Zeno's paradox...
juhanima
Tossing a coin?
EGreg
No. The coin could land on its side, for instance, and stay there.

Consider a pencil that is placed almost vertically, and then later drops to a horizontal position. There is a period where it is "unsure" where to go, and the more perfectly balanced it starts out, the longer this period is (assuming no wind etc.)

That's what is being discussed.

im3w1l
I think Buridan's ass is like an inverted pendulum. There is no particular reason why it should fall either way but the slightest gust of wind or vibration will trigger a self-reinforcing process that breaks the stalemate.

Now, an inverted pendulum can be stabilized with a control system, and it might be possible to relate this to adversarial inputs to some system.

pontusrehula
> Real accidents may occur because people cannot decide in time which of two alternative actions to take, even though either would prevent the accident.

Reminds me of one of the Boeing 737 crashes, where the pilots were reading the manual but didn't have enough time to reach the relevant pages.

grayclhn
From skimming the paper, "continuity" seems to be a deceptively strong assumption here, and the claim "this is a physical system so it's continuous" is doing a lot of heavy lifting. Unless I'm misunderstanding the example in the first paragraph, "move towards point 1 at a constant pace" can't be represented as a continuous A_t(x) for t above some finite threshold.
thfuran
Why not? How are you getting discontinuities out of constant velocity motion?
grayclhn
Write it out :)
harerazer
Consider the following decision: If x < 0.5 go left at a constant speed till the hay is reached, else go right at a constant speed until the hay is reached.

Then A(t,x) is clearly not continuous in x, and we can easily bound the time required to make a decision. The nuance here is that we have to somehow be able to distinguish 0.5 - eps from 0.5 for very small epsilon.

Edit: on further thought, suppose we had a device which measured reasonably well. More precisely it tells us x < 0.5 if x is actually <= 0.5 - c, it tells us x >= 0.5 if x > 0.5 + c, and tells us it is unsure otherwise. We do not know c, but it is deterministic (and hopefully reasonably small). Then we can decide to go left if it tells us x < 0.5, and right if it tells us unsure or that x >= 0.5.
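
To make that edit concrete, a small Python sketch of such a device and policy (the dead-band half-width c is hypothetical and, per the comment, unknown to the decider):

    # A measuring device with an "unsure" band of half-width c around 0.5.
    C = 1e-6   # deterministic but unknown to the decider; demo value only

    def measure(x):
        if x <= 0.5 - C:
            return "below"
        if x > 0.5 + C:
            return "above"
        return "unsure"

    def decide(x):
        # Policy from the comment: left only on a clear "below";
        # right on "above" or "unsure". Always answers in one step.
        return "left" if measure(x) == "below" else "right"

    for x in (0.3, 0.5 - 1e-9, 0.5, 0.7):
        print(x, decide(x))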

irishsultan
> The nuance here is that we have to somehow be able to distinguish 0.5 - eps from 0.5 for very small epsilon.

If you ignore the problem then the problem indeed goes away. The need for distinguishing very small epsilon exists because of the continuity assumption, and because of the continuity assumption you can't really solve it either.

> Then we can decide to go left if it tells us x < 0.5, and right if it tells us unsure or that x >= 0.5.

Now you've just moved the problem to deciding at which point you are unsure. As long as there is a decision to take, the issue persists; it's only if you always go left (or always go right) that the issue doesn't exist.

grayclhn
> As long as there is a decision to take the issue persists, it's only if you always go left (or always go right) that the issue doesn't exist.

That's not true. There are two problems.

1. The math is set up to rule out a lot of obvious solutions. Write out A_t(x) for the decision rule "always walk right at a constant rate." For any rate, A_t(x) becomes a spike at 1 for t > some threshold that depends on the rate.

2. The math rules out randomness. A_t(x) can't be defined for the decision rule, "flip a coin and walk right half the time, left the other half," because you wind up at 0 with probability 1/2 and 1 with probability 1/2 for t > some threshold and the problem is defined so A_t(x) must resolve to a single point.

Once you do that, the only solutions left are kind of mind fucks. But you can't draw any general conclusions from it.

zozbot234
> A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.

I understand that this is true for deterministic decisions. But a limited amount of randomness/noise is generally an acceptable alternative to a requirement for unbounded precision.

bsedlm
There are several things involved: continuity, time, measurement, degrees of freedom.

I have a feeling that maybe this has more to do with 2-dimensional problems, i.e. in 3D the problem vanishes (the difference between the ass/railroad/tree/Alice-Bob examples and the balloon airplane/electron spin measurement).

I'm reminded of this video [1], in which a connection between uncertainty and Fourier transforms is explained. I suppose continuity plays a large role here.

[1] https://www.youtube.com/watch?v=MBnnXbOM5S4

l33t2328
What about a decision like “can this wave be faithfully represented by this set of points?”

Wouldn’t the Sampling Theorem give an answer in some bounded time? Is the idea that the time required to crank the algorithm can grow without bound?

triska
What do you mean by "this wave"? Sampling is itself the reduction of a continuous-time signal to a discrete-time signal, so the decision issue arises during sampling?
bsedlm
This does have the feeling that the answer may not be "symbolically" (perfectly accurately; or rather, booleanly? digitally?) decidable quickly enough.

However, in practice it should still be reasonably approximated within the given time window... but how?

I wonder if there's a relationship between this principle and the halting problem.

gryn
The signal needs to have a maximal frequency for the sampling theorem to be applicable. Not all functions on a continuous domain have that; in fact, I think it's the opposite: most don't, unless proved otherwise.
Kranar
I feel like Buridan's Principle is either poorly worded, or most likely I just completely fail to understand it. The principle says:

"A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time."

But certainly I can make a discrete decision in a constant amount of time: just choose to always go left. I must therefore be missing something here.

The next possibility is that a discrete decision procedure that guarantees success within a bounded amount of time is not possible. This is more sensible, and I think the idea is that there will be some kind of infinite regress. For example, a donkey will starve to death if it doesn't eat in exactly 100 seconds. The donkey can choose to walk left X meters, or walk right Y meters, to some food.

There are certain values of X and Y where the donkey dies no matter what, if X and Y are sufficiently far away that the donkey can never get to them in 100 seconds then the donkey dies.

There are also certain values of X and Y where the donkey can pick either of them and survive since the donkey can get to X or Y in well less than 100 seconds, so the donkey lives.

The above two scenarios are boundary conditions of sorts, where it's fairly trivial to come to a decision. The principle is about what happens when there is a value of X where it takes almost exactly 100 seconds to get to X (and assume it takes way more than 100 seconds to get to Y), but in order for the donkey to come to that realization it needs to spend some time thinking about it, and by the time it has made a decision it won't have enough time to carry it out. So the donkey has to take into account not only the amount of time to get to X, but also the amount of time it takes to come to a decision to get to X; but that too takes some finite amount of time, so the donkey has to take time to come to a decision about how long it takes to come to a decision to get to X, and so on and so forth... and you end up with an unbounded amount of time.

I could be way off here but I feel there is some subtlety in the way this principle is described that I'm missing.

worik
My guess without looking it up....

> "A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time."

"A discrete OPTIMAL decision...."

kevhito
The missing piece of "just choose to always go left" is that this is a degenerate and uninteresting case. No decision is being made.

The output range must be discrete, with at least 2 possible values.

There is nothing about optimality at all here. Even if both possible outputs are equally "optimal", there is no procedure to pick one in a finite amount of time.

Kranar
I doubt the issue has anything to do with being interesting or not. Do you know this for a fact, or can you give me anything to follow up on what it means for a decision to be interesting? For example, if a decision procedure required a choice of chocolate or vanilla depending on the current time of day (a continuous input resulting in a discrete output), and I presented an algorithm that categorically chooses chocolate regardless of what time it is, I doubt you'd turn around and claim that my algorithm is not making a decision because it always chooses chocolate.

If an algorithm takes an input, continuous or discrete, discards that input and returns a constant value, that algorithm is just as "legitimate" as any other algorithm in so far as being an algorithm is concerned. It may not produce the optimal output for some given cost, but it is a formal and rigorous decision procedure that manages to output a discrete value given a continuous input in a bounded amount of time.

At any rate, reading the comments when this was last posted it looks like the issue has to do with producing the optimal output in a given time frame. If an optimal discrete decision must be made in the next T seconds on the basis of a continuous input X, then there always exists some value of X that requires more than T seconds to compute. This will be true for any value of T regardless of how large.

andi999
Yes, "always go left (except if you start at 1)" is a solution. It is dismissed by Lamport, though, because the function is not continuous at 1. Somehow he believes that not being continuous violates some principle of physics.
espadrine
That continuity assumption is well justified in the paper, even for quantum physics, where indeed probability densities are continuous.

Looking at it macroscopically, highly complex decision-making such as the brain's has continuity. In chaos theory, a double pendulum may, after a minute, be either on the left or the right depending on tiny changes in the initial state; but changing the initial condition continuously will change the state at the minute mark continuously, and so the decision.

Looking at it microscopically, even CPUs are continuous, as there is a slim chance of a transistor producing only half the current, because of the quantum behavior of electrons.

It is conceptually weird that there must be a chance of permanently indecisive animals. To me, the true resolution of the paradox is superdeterminism: the choice taken had to happen.

andi999
So you are saying that an algorithm which says "always go left unless you start at 1" is physically impossible (apart from invoking superdeterminism), because the A function would be non-continuous?

I am glad that paper got rejected.

espadrine
> So you are saying an algorithm which says always go left unless you start at 1, is physically impossible

The algorithm itself is possible. If implemented in a continuous machine, though, it is not guaranteed to reach either point 0 or point 1 in any given timeframe.

In practice, it usually does not matter. One application I can think of, though, is the surprisingly powerful fault injection vulnerabilities which can break the physical implementation of a cryptosystem: http://euler.ecs.umass.edu/ece597/pdf/Fault-Injection-Attack...

mritun
> A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.

“Always go left” satisfies the first part (a discrete decision) but not the second (a decision based upon an input having a continuous range). If the decision is preordained, it may be interesting, but it is not relevant to the problem at hand.

emmelaich
Discussed at https://news.ycombinator.com/item?id=22884831
bsedlm
To be fair, the paper has a funny tone, like an elaborate joke would... I feel like he may deliver a punchline at any point before I finish reading the paper, and then I'd have to choose whether to burst out laughing or not.
froh
Uh --- what implications does this have for driving AI? Possibly none, if the acceptance criterion for AI is: "it kills fewer people than humans would; possibly others, but fewer overall"?

charcircuit
Driving AI doesn't require continuous input variables. Approximations are good enough in the real world.
kevhito
What do you mean? All the inputs are continuous: light sensors, LIDAR, infrared, inputs from the driver's mechanical controls. Sure, the sensor package's hardware/firmware/software converts these to discrete inputs before they reach the driving AI, but all of Buridan's Paradox still applies to those sensor packages. They can't perform their task in a bounded amount of time: either they will sometimes fail to make a decision at all, or they will render an invalid output (e.g. rather than outputting a voltage corresponding to logical 0 or 1, they will go into a metastable or astable output mode that is not a valid output voltage).
im3w1l
I think the reason it works out is that we discretize the continuous signal. Now, what if the continuous signal is very, very close to the threshold between two discrete steps? In that case, random noise and interference can push it either way and make the sensor give a slightly wrong reading, but this error is small enough not to matter.
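
A toy version of that point (my illustration, with made-up numbers):

    import random

    # A 1-bit quantizer reading a signal sitting right at its threshold.
    def quantize(v, threshold=0.5):
        return 1 if v >= threshold else 0

    signal = 0.5   # exactly on the threshold
    readings = [quantize(signal + random.gauss(0, 0.01)) for _ in range(10)]
    print(readings)   # a mix of 0s and 1s; "wrong" by at most one code
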
charcircuit
> rather than outputting voltage corresponding to logical 0 or 1, they will go into a meta-stable or astable output mode that is not a valid output voltage

It sounds like you could easily avoid this. You can use a microprocessor, feed the analog input into one pin, and then set an output pin based on that. You will always output a valid voltage. Sensors have noise anyway, so it's not a big deal if the output is slightly wrong.

Afton
None, unless you think that people are somehow able to side-step this problem in a way that a computer can't mimic.
kevhito
The paper specifically details several situations in which humans are the ones making the decision, and the result is the same. There is no bounded-time decision procedure that can take continuous (i.e. physical) inputs and render a discrete decision.
Animats
This is a very real phenomenon. But Lamport didn't discover it. It was known a decade earlier. The classic paper is Kinniment and Wood, 1976.[1] The earliest patent in this area is from 1972.[2] It's quite difficult to do this right. Arbiters today recognize "A was first", "B was first", and "ambiguous, must retry". The system has to be prepared for rare retry delays. In theory there can be an infinite number of retries, but the odds of needing more than one retry are very low.

It comes up all the time in multiport memories, where several CPUs access the same memory and there has to be an arbiter to decide who gets access for near-simultaneous requests. Requests can be very close to simultaneous. If two CPUs with 3 GHz clocks are contending for memory access, and the clocks are not synchronized but close, about once per second they will be within 10^-19 seconds of whatever difference is most troublesome.

This was a serious problem with some early multiprocessor mainframes. Early in my career we saw this happening with multiprocessor UNIVAC 1108 machines. They had an unsound arbiter, and, once in a while, every few hours, they'd botch a memory access. Early on, the software stability was so bad the OS crashed more often than the hardware did, but as the OS got better, it became clear there was a race condition at the hardware level.

[1] http://async.org.uk/David.Kinniment/Research/papers/IEE1976....

[2] https://patents.google.com/patent/US3824409A/en
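
In software terms, a three-outcome arbiter with retry behaves roughly like this toy model (an illustration only; real arbiters are analog circuits, the "retry" is the metastable state settling, and the ambiguous band is set by device physics, not a constant I chose):

    import random

    RESOLUTION = 1e-10   # below this separation the arbiter can't decide

    def arbitrate(t_a, t_b):
        # Three outcomes, as in the modern arbiters described above.
        if t_a < t_b - RESOLUTION:
            return "A"
        if t_b < t_a - RESOLUTION:
            return "B"
        return "ambiguous"   # caller must retry

    def grant(t_a, t_b):
        # Retry loop: each retry is modeled as a fresh, jittered sample.
        # Unbounded in theory, but the odds of another retry shrink fast.
        while True:
            result = arbitrate(t_a + random.gauss(0, RESOLUTION),
                               t_b + random.gauss(0, RESOLUTION))
            if result != "ambiguous":
                return result

    print(grant(3.0e-9, 3.0e-9))   # near-simultaneous requests still resolve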

gradschool
Kinniment expands on the idea at length in [1]. Chaney is another good source on the history of arbiters [2]. Specialists in asynchronous circuit design think about arbiters a lot. Alex Yakovlev and Alain Martin have published some papers about them.

[1] http://www.async.org.uk/David.Kinniment/DJKinniment-He-Who-H...

[2] https://arl.wustl.edu/~jst/cse/260/glitchChaney.pdf

edit: typo

Animats
And, finally, here's a data sheet for a 74F786 4-bit asynchronous bus arbiter IC.[1] Here's the answer to the paradox of Buridan's ass[2], in a 16-pin dual inline package. It doesn't take free will.

"The 74F786 is designed so that contention between two or more request signals will not glitch or display a metastable condition. In this situation an increase in the BRn to BGn tPHL may be observed. A typical 74F786 has an h = 6.6ns, t = 0.41ns and To = 5µsec."

"If two or more BRn inputs are asserted at precisely the same time, one of them will be selected at random, and all BGn outputs will be held in the high state until the selection is made. This guarantees that an erroneous BGn will not be generated even though a metastable condition may occur internal to the device."

So, usually, you get a win within a specified time, but sometimes it takes longer. The limit on the longest time is probabilistic, but the odds of settling increase rapidly with time, by orders of magnitude per nanosecond.

In this part, we get to see a simple standalone arbiter with its own data sheet. A logic diagram is given, so you can see how this is built out of simple gates. Note that the diagram can't be read as abstract logic; propagation delay matters. While the arbiter is in a metastable state, an inhibit signal is generated which prevents any output from appearing. Once the arbiter has settled, the winning output is allowed out. The gate thresholds matter. The inhibit signal doesn't turn off until there's only one clear winner.

It's an old part, from 1991, because today this function is usually part of a larger function such as a CPU chip or a DRAM interface.

[1] https://www.digikey.com/en/htmldatasheets/production/96092/0...

[2] https://en.wikipedia.org/wiki/Buridan%27s_ass

May 21, 2022 · 4 points, 0 comments · submitted by pagade
May 20, 2022 · 2 points, 1 comment · submitted by Tomte
mindcrime
Spoiler: the man in question is Leslie Lamport[1].

[1]: https://en.wikipedia.org/wiki/Leslie_Lamport

May 17, 2022 · 3 points, 0 comments · submitted by marban
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.