Hacker News Comments on "Robust-first computing: Beyond efficiency"
Dave Ackley · YouTube · 3 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

You might be interested in "Robust First Computing" - https://www.youtube.com/watch?v=6CNg1Q3RNWI

edit: here is a far better explanation by the term's creator: https://www.youtube.com/watch?v=7hwO8Q_TyCA
I am happy you don't agree on modularity. I don't want to be correct, I want to arrive at correct conclusions. :)

Composition is great; scale-free self-similarity is probably the basis for the universe.
Modularity is a great design technique, but it can also make things weaker and force other (unknowable) design choices, because the module boundary prevents the flow of information/force. Overly constrained modular systems encourage globals; under-constrained modular systems are asymptotic to mud/clay.
I don't want to use K8S as a strawman to attack modularity, but I think it is an example of using this powerful design tool to solve the wrong problem with misapplied methods, all the while being more complex and using more resources. In designing systems, modules/objects/processes (in the Erlang sense) are critical, but not so much in building/engineering them. Demodularizing or fusing a design can make it both more robust and more efficient.
I don't dislike modularity, I just think it is a bigger, more complex topic than most give it credit for. Unix is highly non-modular and composes very poorly. It sits on a molehill of a local maximum, itself sitting at the bottom of a caldera, a sort of Wizard Mt on Wizard Island.
Other things you might like are the research around "Collapsing Towers of Interpreters" [1]
Or Dave Ackley's T2 Tile Project and Robust First Computing [2]
Would love to chat more, but internet access is spotty for the next week, so non-replies are not ignores.
[1] https://lobste.rs/s/yj31ty/collapsing_towers_interpreters
[2] https://www.youtube.com/watch?v=7hwO8Q_TyCA https://www.youtube.com/watch?v=Z5RUVyPKkUg
then you'd debug it, or program debuggers to flit around looking for programmer error then address it

the system is clear about its capabilities regarding 'correctness'
which seems to imply the programmer's job is to optimise with those built in inaccuracies
but what those inaccuracies afford is what ackley is calling robustness and indefinite scalability
ackley addresses your question directly with a sorting comparison(o) from a later video
the graph doesn't show where a maxwell's demon(i) horde sort would land, but sorting under corruption is addressed in the paper(ii)
unintended performance appears to be the reason for the advent of this system: "The demon horde sort’s performance may be just adequate, by that measure, but its robustness seems quite impressive. Figure 23 shows results of one experiment in which we randomly corrupted site memory with simulated bit errors at a range of probabilities. Each error occurrence selects a random site and then flips from one to eight of its 64 atomic bits. We can see that while channel length helps performance, it does not help robustness against this system perturbation—but the system is strikingly robust anyway, tolerating upward of 10 multibit corruptions per million events with essentially no visible performance degradation, regardless of channel length. Above about 50 errors/Mevent the system reliably falls apart—and the pathology appears to run a reliable course."
but going further, to deal with unintended performance of the hardware itself:
if you have a multi-core system running an incorrectly programmed sort and one core fails, the whole thing shuts down; but an incorrectly programmed sorter in the demon horde keeps functioning even with failed cores, affording the opportunity to adjust while performing
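a rough sketch of that flavor (illustrative python; the site count, the random-pairwise-swap rule, and the dead-site handling are my assumptions, not the MFM's actual mechanics): values get sorted purely by local, randomly scheduled swaps, some sites are simply dead, and the paper's fault model - flipping one to eight of a random site's 64 bits - runs alongside

    import random

    # toy model of a robust, local-only sort (illustrative assumptions,
    # not the paper's code)
    N_SITES = 256            # sites, each holding a 64-bit value
    ERRORS_PER_MEVENT = 10   # rate the paper reports tolerating with
                             # essentially no visible degradation
    N_EVENTS = 2_000_000

    sites = [random.getrandbits(64) for _ in range(N_SITES)]
    dead = set(random.sample(range(N_SITES), 8))  # simulated failed cores

    for _ in range(N_EVENTS):
        # local rule: a "demon" inspects one random adjacent pair and
        # swaps it into order - no global state, no coordinator
        i = random.randrange(N_SITES - 1)
        if i not in dead and (i + 1) not in dead and sites[i] > sites[i + 1]:
            sites[i], sites[i + 1] = sites[i + 1], sites[i]

        # fault injection per the paper: pick a random site and flip
        # one to eight of its 64 atomic bits
        if random.random() < ERRORS_PER_MEVENT / 1e6:
            j = random.randrange(N_SITES)
            for b in random.sample(range(64), random.randint(1, 8)):
                sites[j] ^= 1 << b

    # "sortedness" of surviving sites: fraction of adjacent pairs in order
    live = [v for k, v in enumerate(sites) if k not in dead]
    in_order = sum(a <= b for a, b in zip(live, live[1:]))
    print(f"{in_order / (len(live) - 1):.1%} of adjacent pairs in order")

nothing in that loop can take the whole computation down: a flipped bit or a dead site costs a little local sortedness, and later swaps can partially repair it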
(o) https://youtu.be/7hwO8Q_TyCA?t=688
(i) https://en.wikipedia.org/wiki/Maxwell%27s_demon
(ii) http://comjnl.oxfordjournals.org/content/56/12/1450.full.pdf...