HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Jonathan Blow - Preventing the Collapse of Civilization (English only)

bus · Youtube · 393 HN points · 67 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention bus's video "Jonathan Blow - Preventing the Collapse of Civilization (English only)".
Youtube Summary
Jonathan's talk from DevGAMM 2019.
https://www.youtube.com/c/DevGAMMchannel
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> limits to the creativity of people

Yes, therefore limits. At least in the short and medium term (which could be 1000s of years). Look at the technological advancement of the Roman Empire, lost for a millennium before human ingenuity advanced past that level again.

Good video from Jonathan Blow on the rise of complexity leading to collapse and lost knowledge [0]. Add in energy and resource constraints (for the given world population numbers) and we may be in for a bad time for a while in the near future.

[0] https://youtu.be/pW-SOdj4Kkk

I don't think inefficiency is his only point.

If it could be written with all the inefficiencies and still deliver value faster without bugs, that would be one thing. But the problem is that software is consistently buggy, and you run into bugs every day. It's now hard to notice because it happens so often that we're inured to it. That's why he challenges you to write down all the bugs you run into every day, to make yourself more aware. You can see his list here: https://youtu.be/pW-SOdj4Kkk?t=1347

I've also done this exercise, and it's surprising how many you notice when you're looking for them and writing them down.

Modern software is terrible in the sense that it's still buggy and hard to use despite access to the immense power of modern computers--not merely that it's inefficient.

kelnos
Sure, but... so what? The market has -- unfortunately -- shown us that it's better to put an inefficient, perpetually-buggy product in front of users than it is to wait until you've micro-optimized everything and fixed most of the bugs.

The exact same argument that explains why inefficient software tends to win also explains why buggy software tends to win.

Of course users hate bugs. But even more, they hate not having software to solve their problems. Hands down, I would choose to have something with bugs today rather than wait for something bug-free 6 or 12 months from now.

And unfortunately, once you put something with bugs in front of users, you're going to be pushed to work on new features for the next version, rather than fixing the bugs of the previous version. But arguably that's irrelevant! Users would rather have those future new features in 3 months, over having all their bugs fixed in 6 months, and then getting those new features in 12 months.

It still feels incredibly annoying and lame to me that this is the state of things. And I think that's why people like Jonathan Blow (and many, many others) just don't get it. Working software in users' hands trumps everything else. It almost doesn't matter how inefficient or buggy or $OTHER_NEGATIVE_TRAIT it is. If it solves unsolved problems, then it wins.

[Yes, I know, there are limits to what users will tolerate. But most inefficient, buggy software doesn't hit those limits too often. Software that does... well, yep, it tends to fail. But it's quickly replaced by other inefficient, buggy software that users will tolerate. It's not replaced by super-efficient, nearly-bug-free software, because building that software will always take much longer than building the software that users will tolerate. And users want to get shit done, not wait around while you tell them how much better the efficient, nearly-bug-free software will be.]

iamwil
I agree that one major contributor to these problems, which he doesn't (fully) address in the talk, is the unit economics and market forces that help these decisions along.

He does allude to it when talking about faster software (slow being equated with buggy), and I think there are market indications that in specific markets, fast (efficient) software does win--though that seems to be the exception rather than the rule. The rule seems to be that quick-to-market software wins.

However, I think his point has a wider scope, and I don't think it's irrelevant. Yes, in the short term, market forces push us to put software in front of people quickly, even if it's buggy. But is that beneficial in the long term? What would be the consequences of making short term optimal choices here?

Would we (as a society) forget how to build the things we rely upon in our society because the complexity is too overwhelming, and at every step, we never went back to clean things up and focus on transferring inter-generational knowledge, because we were too busy putting out features?

Just like in the Innovator's Dilemma, the incumbent at every step makes the optimal decision, but ends up getting trounced because a series of locally optimized choices is not a globally optimized choice. And Blow here is saying (strongly) that we're doing something similar with how we build software.

And I'm inclined to agree with him, even though I fully recognize the market forces as being strong. Would we keep making short-term optimal choices because we have no choice? Or is there a way out?

Alan Kay has talked about this in some of his talks. He has some neat ideas, wrapped up in unintelligible slides. I've always thought he needs a graphic designer for his slides. Anyway, his thesis is that we build software today much like how the Egyptians built buildings: stacking things wide in order to build high--as evidenced by code bases millions of lines long. We have very few equivalents of arches in software. Once we had arches, we could build higher with less material.

So perhaps there's a way out. We've been searching for it for a long time (in software years, anyway - we're still a young field), since before The Mythical Man-Month. And Blow, I think, is saying there are real consequences to this software mess we find ourselves in, the worst of which is the collapse of civilization itself, especially if we don't find a way to rein in the complexity and the bugginess.

Or we can pray for AGI to get done and do it all for us. finger guns

tialaramex
Jonathan does indeed complain about bugs, but as is typical for this type of programmer, he's sure these are somehow being introduced by "Bad" programmers who are incompetent or lazy, not by "Good" programmers like Jonathan.

You get stuff like: "The Jai philosophy is, if you don’t want idiots writing bad code for your project, then don’t hire any idiots." which probably feels good to say to yourself, but of course it's not going to result in fewer bugs.

Jonathan doesn't like exceptions, which, fine, I agree control flow and error reporting shouldn't be bundled together. But if you're serious about preventing bugs and you don't want exceptions, you need to actually write a lot of error handling, which means error handling needs to be ergonomic in your language. In Jai there might be error flags, and you might be expected to check them, but if you don't, it won't complain, because you undoubtedly know best. Pressing on regardless becomes ergonomic, just like in C.

And so Jai certainly isn't going to do anything about how buggy software is.
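
(A small illustrative sketch in TypeScript rather than Jai - the Result shape and the parsePort example are invented here - of the ergonomics point: when errors are values in the type system, pressing on regardless stops being the path of least resistance.)

    // Errors as ordinary values: the compiler refuses to let you use the
    // result until you have checked which case you are holding.
    type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E };

    function parsePort(input: string): Result<number, string> {
      const n = Number(input);
      if (!Number.isInteger(n) || n < 1 || n > 65535) {
        return { ok: false, error: `not a valid port: ${input}` };
      }
      return { ok: true, value: n };
    }

    const r = parsePort("80x");
    // Accessing r.value directly is a compile error; you must narrow on r.ok.
    if (r.ok) {
      console.log("listening on port", r.value);
    } else {
      console.error(r.error); // the unhappy path has to be written somewhere
    }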

imran0
> Jai certainly isn't going to do anything about how buggy software is.

Nothing is going to do that, except the competence of the developer.

tialaramex
That's certainly Jonathan's contention. I would argue that it's already obviously wrong.

The original was longer, and had more context.[1] His example of the loss of general knowledge about writing video drivers was a particularly good one, in my opinion. We're building far more complex systems; we need tools that offer a better impedance match to the people who build them. Working with source code is like hand-editing schematics.

  1 - https://www.youtube.com/watch?v=pW-SOdj4Kkk

This is a good talk by Jonathan Blow on/against software complexity: https://www.youtube.com/watch?v=pW-SOdj4Kkk

Also talks about the risk of losing knowledge, and a way to avoid that (make multiple copies).

neoneye2
Great talk indeed. Just watched it. Thanks.
Jun 08, 2022 · rglover on The End of Localhost
No. This is the absolute wrong direction.

In effect, you're giving a third party absolute control over your code and your ability to work on it. You're also disconnecting the developer's mental model of how all of this stuff works (arguably why there's so much bad code floating around). Eventually, that will devolve into people not knowing how any of this works, and the industry will collapse.

It's an abstract line of argumentation related to this, but Jonathan Blow nailed this idea a few years back: https://www.youtube.com/watch?v=pW-SOdj4Kkk

HideousKojima
Basically the same idea as this article about how current college students in CS programs don't really have any conception of what a filesystem is: https://www.theverge.com/22684730/students-file-folder-direc...

At least Android and iOS taking over the consumer computing world has given me job security until I retire!

IHLayman
Thank you for posting this. I know that is expressed most succinctly with an upvote, but I wanted to reiterate, because you were able to express something I wasn’t able to quite put my finger on but have felt for a long time. I was hoping that the feeling that many new coders didn’t know what they were doing could be attributed to me just telling them to get off my virtual lawn; I can see that my fear is possibly justified.

The video is great too; I hadn’t seen that before, and it gives some examples of what happens when that generational transfer of knowledge is not carried out, not just within a discipline but also across a civilization.

rglover
> I know that is expressed most succinctly with an upvote

"Brevity is for the weak." - Maciej Cegłowski

MikeDelta
One argument that I hear is that all these abstractions allow the developer to focus on delivering products rather than worrying about the nuts and bolts of the underlying system, but in my experience the best and most efficient developers have been the ones who know how the systems work.
rco8786
Those things are not mutually exclusive though, at all
MikeDelta
Indeed they are not; I'm just sharing my anecdata that the best ones I have come across happen to know much more about the details.

I am sure one can find good and efficient devs that know very little about the underlying system and are very capable of using the available abstractions and frameworks. I just happen to find the first group better.

WorldPeas
I'd do it if said platform was FOSS and self-hosted, but until then, I'm staying in fort localhost
c0balt
Same here. Maybe GitLab's "Web IDE" will one day support this by, e.g., pairing a VM or container with each session and adding extensibility. Though given the massive monster that GitLab already is, it might take some time (and $).
dx034
Code-Server can be self-hosted for free, so that would fit your requirements.
1shooner
Gitpod would meet those requirements:

https://www.gitpod.io/blog/opensource

numpad0
The article claims the practice is adopted by Google, Facebook, Tesla, Palantir, Shopify, and GitHub. That says a lot!
namaria
Yeah no wonder big tech wants developers to depend on them.
dx034
I just spin up a VM at Hetzner (or another low-cost cloud provider) and code on that. I don't give up any control but can still use a much more standardized environment and switch between machines more easily.
smoochy
I was reading your comment and thought of Jonathan Blow's talk. Without even clicking the link, let me guess, that's from Moscow's 2019 conference?

EDIT: yes, it is. He has a very important point there. I recently went on Twitch to see what young devs were coding. One (rather smart, I must say) young lady was creating a simple sign-in/sign-up page, but there's a twist: with React. When I asked why she wouldn't just code it in plain HTML, she responded with a question: "but how would it connect to the API?". So there you go, ladies and gentlemen. I don't really know what to do about it, but I don't believe we're a bunch of old men yelling at a cloud (quite literally - AT A CLOUD), especially since I don't think we're that old.

chrisweekly
Yeah; knowing about the HTML form tag apparently qualifies you as a senior web dev.
forty
Not knowing about it can also qualify you as a very senior dev (though not very up to date with recent improvements) as the form tag was apparently added to the (then informal) HTML format around 1994 ;)
anitil
All tags are divs these days.
onion2k
Considering how many web developers get that particular tag wrong, I'm not sure that's actually as unreasonable as it first appears. The number of forms out there on the web that are really just used as a way to group collections of input elements, with no consideration for an action, a method, browser-native validation rules, fieldsets, a legend, etc., makes me wonder if people actually know HTML at all. Every React, Vue, etc. form I look at the source for gathers up input into state and then submits it with a fetch, replicating, but also often breaking, the accessibility and functionality built into the browser.

So yes, maybe being able to implement a form tag in HTML is the mark of a senior dev.

corrral
I'm very much on the "this shit's too complicated, we should fix whatever's making us not use the standard features and tools instead of piling this garbage on top" train, but I think a valid defense of doing things in JavaScript that could be done in HTML is that, as soon as you need to do any form validation in JS (for example), it may be simpler to just do all of it there.

The accessibility issue is, of course, a solid counter-argument to running too far with that line of reasoning, and is part of why I'd much rather we put 1/10 of the effort we currently spend on re-implementing basic HTML features into unfucking HTML standardization, so we can finally have elements good enough that we don't need to pile JS on top to get what ought to be built-in functionality. But that's a whole different skillset from programming, and requires far more organization than thousands of devs all independently, or in small teams, working on yet another NIH version of an image upload input. :-/

NoGravitas
We're finally getting lots of things (HTML native validation, date and time pickers, etc.) that make JavaScript unnecessary for the things it was commonly used for in the 2000s. Unfortunately, in the meantime, client-side scripting has metastasized throughout web applications to the point that things that browsers and HTML actually do implement well (history, keeping your place in the page, accessibility) are generally re-implemented, poorly, by web applications.
anyfoo
I see web development is still trying to make something fundamentally ill-suited for its task work by stacking shoddy layer after shoddy layer on top of it, just like it did when I left web development 10 years ago and went back to system programming.

System programming also has its warts, but at least most of the time you don't feel like you're working against the machine.

com2kid
The funny bit being that React uses regular browser APIs for network requests! Nothing was even gained, just call fetch!
kevlened
True! Though the funny bit is you don't need an API at all (at least not a JSON-driven one). A form post is even simpler.
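
(For illustration, a minimal framework-free sketch of exactly that: a plain HTML form whose submit handler calls fetch. The #signin-form id, the field handling, and the /api/signin endpoint are made up for the example; everything else is standard browser API.)

    // Take over submission of an ordinary <form id="signin-form"> and POST it
    // as JSON, while keeping the browser's native validation and semantics.
    const form = document.querySelector<HTMLFormElement>("#signin-form");

    form?.addEventListener("submit", async (event) => {
      event.preventDefault(); // we handle the submit; everything else stays native
      const formEl = event.currentTarget as HTMLFormElement;

      // Browser-native constraint validation (required, type="email", ...) still applies.
      if (!formEl.reportValidity()) return;

      // FormData collects every named input in the form automatically.
      const data = Object.fromEntries(new FormData(formEl));

      const response = await fetch("/api/signin", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(data),
      });

      if (!response.ok) {
        console.error("sign-in failed:", response.status);
      }
    });
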
bryanrasmussen
What if the API she wants to call doesn't have a form based version?

Suddenly she needs to change stuff on both backend and frontend, and she might only be allowed to touch one of those ends.

mwcampbell
I think the best solution to this is to eliminate the division of labor between back-end and front-end, for the majority of applications that don't actually require deep specialization in either. As DHH put it, integrated systems for integrated programmers [1].

[1]: https://m.signalvnoise.com/integrated-systems-for-integrated...

aliswe
I've met such (male) frontend developers before, totally clueless when it comes to the DOM and how it works. Not putting any judgment in that really, tbh, because the React development model is superior for anything non-trivial (like a signin form).

In this case though, I believe she isn't confused as to how to call the APIs, maybe not even as to how to handle the response with JS and update the DOM. I think the question sounded to her more like she wasn't allowed to use JavaScript at all.

ericmcer
I started in 2008 and I would be hopeless at certain things that I am sure were essential 20 years prior.

Thinking React is essential to make an API call is a bit scary, but a future where we just click a few buttons and connect to a hot-reloading, syncing dev env with all the dependencies and transpilation steps obfuscated seems inevitable, and it will be so reliable that we forget or never even learn how it works under the hood.

shkkmo
I think this is a question about becoming dependent on abstraction rather than about where the abstraction is run.

I don't think that being "in the cloud" makes abstraction that much more pronounced. You can have manually configured dev VMs in the cloud or automatically configured dev VMs locally; the latter will probably use more abstraction.

rglover
> I don't really know what to do about it, but I don't believe we're a bunch of old men yelling at a cloud (quite literally - AT A CLOUD), especially that I don't think we're that old.

I'm working on it: https://github.com/cheatcode/joystick.

The tl;dr is that I'm offering a full-stack framework that takes advantage of vanilla JavaScript and HTML in a front-end framework combined with a plain ol' HTTP back-end using Node.js. The APIs abstract just enough to make you productive but not so much that you can't reason about how it'd work without the framework.

The long shot of the project is to keep the mental model of "how the web works" intact while reducing the burden of doing everything from scratch.

c0balt
Your project looks interesting. Do you plan TypeScript support, with stubs or something similar?
rglover
Thanks. If I do, it won't be supported beyond the compiler/build tool (no official recommendation of using it w/ limited support).

It's an unpopular opinion, but I view TypeScript in the same light as I do the OPs assertion about cloud-only development. It's adding yet-another-layer that has some merits but often leads to overcomplicated messes that reduce productivity/add confusion.

I'd rather petition for some sort of structs/arg typing to be included in ECMAScript proper (in a similar fashion to how a lot of John Resig's jQuery DOM selection APIs made their way into ES6).

heavyset_go
As to you last point, there are proposals to bring typing to ECMAScript right now.
heavyset_go
Your*
rglover
Yup
rolisz
I had a similar experience with my sister-in-law, who is getting her programming degree this month. When I asked her what her final project was and some details about it, she told me the entry point to her frontend was Angular. When asked how it was served and how it got to the browser, all I got was confused looks. When I showed her that the first thing that gets to the browser is an HTML file that then loads Angular and her application, her mind was blown.
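
(To make that concrete, a hypothetical bare-bones sketch - not Angular's or any real framework's build output - of what "the first thing that gets to the browser" looks like: a tiny Node.js server that hands back an HTML page, which in turn pulls in the script that actually becomes the app.)

    import { createServer } from "node:http";

    // The markup is the whole entry point: the SPA only exists after this
    // page loads and its script tag fetches the bundle.
    const page = `<!doctype html>
    <html>
      <body>
        <div id="app"></div>
        <script src="/main.js"></script>
      </body>
    </html>`;

    createServer((req, res) => {
      if (req.url === "/main.js") {
        // Stand-in for whatever bundle a real build would emit.
        res.writeHead(200, { "Content-Type": "text/javascript" });
        res.end('document.getElementById("app").textContent = "bootstrapped";');
      } else {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(page);
      }
    }).listen(3000);
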
fmakunbound
Man, that is really sad, but I suspect these code-in-the-cloud schemes will be targeted toward hordes of developers with such a superficial grasp of what’s happening.
xwolfi
That may say more about her than about CS in general. In my very first encounter with Angular, I made my seniors investigate how the hell two-way binding could even work that way, only to have them confess to me that yes, we had 40k repeating functions running every n seconds to check variable states... for a static website whose two-way binding was just a first-render-time convenience. No wonder our Mexican clients with shit phones couldn't load the site as well as our iPhone-heavy Hong Kong ones could. And it was my very first day on Angular.

How can she not be curious? Angular is so wrong and magical that it blows the mind to understand how it builds all that magic. Maybe your sister-in-law is the kind of person who laughs at magic tricks? I'd be raging mad myself; I prefer to avoid them or I'll just be obsessed until I understand them.
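
(For anyone curious about the magic being described: a deliberately naive sketch of the dirty-checking idea behind that style of two-way binding - watcher functions re-run until nothing changes. This is an invented toy, not Angular's actual code.)

    type Watcher = { get: () => unknown; last: unknown; onChange: (v: unknown) => void };

    class Scope {
      private watchers: Watcher[] = [];

      watch(get: () => unknown, onChange: (v: unknown) => void): void {
        this.watchers.push({ get, last: undefined, onChange });
      }

      // One "digest": keep looping over every watcher until a full pass is clean.
      digest(): void {
        let dirty = true;
        let ttl = 10; // guard against watchers that never settle
        while (dirty && ttl-- > 0) {
          dirty = false;
          for (const w of this.watchers) {
            const value = w.get();
            if (value !== w.last) {
              w.last = value;
              w.onChange(value);
              dirty = true; // a change may invalidate earlier watchers, loop again
            }
          }
        }
      }
    }

    // With thousands of bindings, every digest re-runs every getter,
    // which is the cost being described above.
    const model = { name: "" };
    const scope = new Scope();
    scope.watch(() => model.name, (v) => console.log("render name:", v));
    model.name = "Ada";
    scope.digest(); // logs "render name: Ada"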

flukus
I like an SPA pile on as much as the next person, but I've seen similar ignorance from asp.net webform developers many years ago. Some of those were even "experienced" developers.
NoGravitas
To be fair, ASP.NET Web Forms obfuscated the relationship between your code and the HTTP request-response cycle almost as badly as an SPA framework.
rco8786
I had a similar experience with a family member. When I asked them how their keyboard strokes appeared on the screen, all I got was confused looks. When I showed them that the first thing that happens is a hardware interrupt, their mind was blown.

Seriously, it's okay that people don't know the ins and outs of every little detail. Abstractions are useful, until they're not. And then when they're not, you go figure out the level below.

Even the most advanced, most skilled front-end devs spend 99.99% of their time not caring that the browser loads an HTML page.

nunez
Not knowing how Angular bootstraps is how you get super huge SPAs that take forever to load on anything that is not a gigabit connection. Technical details matter.
rco8786
Genuinely do not follow this logic
Mordisquitos
I think your analogy would be valid if the previous commenter had, for example, asked their sister-in-law how the browser interprets the JavaScript it receives, or how it builds the display from the DOM. Now that would be asking about irrelevant and unnecessary levels of detail which are best abstracted away. However, knowing that the process starts with the browser receiving HTML over HTTP is essential to understand what's going on, even if you know nothing about the details.

If I may, a truly similar experience regarding keyboards would be this:

I had a similar experience with a coworker. When I asked him how the code that he types appears on the IDE on his screen and not on that of any of our colleagues' screens, all I got was confused looks. When I showed him that his keyboard is wirelessly connected only to his computer, which runs the IDE, and his computer is then specifically connected to his own monitor via a cable, his mind was blown.

xwolfi
And one eventually figures it out anyway after one too many blank pages on an Angular app :D
rco8786
It was just a random example; you can literally pick any level in any number of abstraction stacks that exist and make the same analogy.

Sometimes it becomes necessary to know how the level below works. That doesn’t mean it’s a requirement to be effective at the level above.

cmroanirgo
Agree.

I would add that even good old, rock solid assembler isn't a straight entry point into controlling the CPU, but is now just an abstraction over µOPs. Abstractions surround us these days, and for each of us, we find differing levels of what's tolerable vs what should be known about the next layer below.

I'm personally (kinda) ok with not knowing too much about the sub-assembler processes that happen inside the literal black box called a CPU, but I know that for branch optimisation, it's important.

Guvante
I would categorize things as "does someone on the team need to know this":

* Why does a key appear on the screen when I push a button on my keyboard: no

* How is an HTTPS connection created: generally no

* How is the JavaScript library deployed: 100% yes

You might be able to throw something together that works without understanding that it is a JavaScript library with a bootstrap HTML page, but if no one on your team understands that, you will eventually need to find someone who does to solve a problem.

Kalium
> * How is an HTTPS connection created: generally no

I would think that people doing web development probably benefit from a working knowledge of DNS, TLS, and PKI. Without those, I would expect a lot of readily avoidable problems with HTTPS.

In general I advocate that software engineers should have a functional, if abstract, understanding of how computers work on various levels. They might not need a detailed understanding, but people often benefit in unexpected ways from understanding the systems they work with.

Spivak
In 2022 that person is ops not dev. None of my devs have any knowledge [1] about TLS, DNS, or even TCP. The only interaction devs have with the messy outside world is Rack/WSGI, their DB ORM, their Queue/Job abstraction, and the AWS client libs.

[1] Not like they literally don’t know but that their code has no interaction with it.

10000truths
That veil of abstraction gets pierced very quickly the moment you need to debug an issue or regression in your application. I'm working on an internal application that's using Django in my current job, and there are plenty of instances where we've had to run an EXPLAIN on the generated MySQL query to identify performance bottlenecks.
Calavar
> That doesn’t mean it’s a requirement to be effective at the level above.

Not necessarily, but it did have a material effect in the original example. A web developer dragged in a complex, unnecessary framework because she didn't realize that HTML forms have the ability to submit POST and GET data natively.

To me that's like a civil engineer building a suspension bridge over a tiny creek because they never heard of an arch design.

rco8786
The web developer was completing a course specifically about Angular, not about how browsers work.
joshmanders
> A web developer dragged in a complex, unnecessary framework because she didn't realize that HTML forms have the ability to submit POST and GET data natively.

You're making a massive assumption that she didn't already need to use React and may have been confused about how she would make the two systems work together correctly.

smoochy
Let me settle this debate: as I originally stated, the young developer was not stupid at all. She knew about HTML and forms and everything. She just didn't know how to work with it and, more importantly, she didn't want to get into that at the time, explaining that she didn't "want the site to look ugly" and that it was "a project for the portfolio" (implying that she needed to demonstrate her React skills).

So that's what's really troubled me. The companies responsible for producing this piece of garbage are actually getting into the heads of the younger generation, rendering them helpless without said garbage. I repeat, this is indeed garbage, especially React: in the early 2000s we used to laugh at people mixing JavaScript and HTML, but some of them, apparently, decided it was their time to strike back.

The more important issue is, of course, that we have a generation of young people working with these "tools" which isolate them from learning things that actually matter. These levels of abstraction DO NOT add any value. These are "cargo cults" of abstractions which, without their authors' knowledge (because those who invented them weren't very smart anyway), serve the purpose of keeping potentially talented and intelligent people ignorant and average. This is probably good news for somebody out there, but certainly not for us as a society.

joshmanders
> serve the purpose of keeping potentially talented and intelligent people ignorant and average

Man, I think you're thinking wayyyyy too deep into stuff.

Do western artists and corporations borrow stuff from other cultures and competitors? All 20th century rock & roll, most of Hollywood, and pretty much all of pre-90s Silicon Valley is based on that premise.

That you think it's theft is a debatable/controversial point of view on Internet forums, but if that is to be the case, many more people/corporations from the USA should feel threatened, not just a few Chinese scapegoats, which help avoid the elephant in the room: why would anyone own ideas in the first place? Ideas are born out of other ideas and everyone benefits from that. Restricting knowledge sharing can lead to disastrous outcomes, as Jonathan Blow brilliantly argued in a talk called Preventing the Collapse of Civilization, which appears to pop up on HN every so often: https://www.youtube.com/watch?v=pW-SOdj4Kkk

jacknews
Um, no, there's a very clear difference between 'borrowing' ideas, building on other people's achievements etc, and outright theft, when you copy someone's detailed designs wholesale, especially from secret proprietary plans obtained through illegal espionage.
southerntofu
This "very clear difference" is the center of many trials in Hollywood / Silicon Valley history so i wouldn't say it's that clearcut. I personally don't see copyright in any way as a mechanism to bring retribution to the creative minds (instead it serves to capture value into big corps and let the artists starve), but as long as we have to deal with it i'll keep publishing stuff as copyleft so that the capitalist vampires think twice before "borrowing" my code.
You might like Jonathan Blow's talk "Preventing the Collapse of Civilization": https://www.youtube.com/watch?v=pW-SOdj4Kkk

Also, is your name a reference to the Mars trilogy? Reading that now (:

areoform
Yes, it is! I even had the chance to talk to Kim Stanley Robinson about it :)
aspenmayer
Did this exchange happen on HN, by chance? Color me curious.
Jonathan Blow makes a bunch of great points about our simplified view of ancient people, versus the reality of their complex achievements which could only be brought about by lots of ingenuity and iterations.

https://youtu.be/pW-SOdj4Kkk

bcrosby95
The interconnectedness of the Bronze Age was amazing. To make bronze required combining metals sourced from thousands of miles away from each other.

In comparison, iron could be made basically anywhere.

codethief
Even for reasons other than Rome's history, this is one of the best talks I've ever seen. IMO every software engineer should watch it and I recommend it to colleagues all the time.
WalterBright
The lack of writing probably impaired things greatly. I know the Romans had writing. But did they ever write manuals? It was not like Gutenberg's printing press, which put progress on a steep upward trend.
pyuser583
For professions that require formal education, certainly. There were manuals/“handbooks” for doctors and philosophers.

Pliny's “Natural History” contained detailed geographic information, but was considered a work of history (surprise, surprise).

yesenadam
Not sure what you have in mind with "manuals", but I believe there were treatises on every subject, e.g. Vitruvius' On architecture (~20BC), "a guide for building projects. As the only treatise on architecture to survive from antiquity, it has been regarded since the Renaissance as the first book on architectural theory, as well as a major source on the canon of classical architecture. It contains a variety of information on Greek and Roman buildings, as well as prescriptions for the planning and design of military camps, cities, and structures both large (aqueducts, buildings, baths, harbours) and small (machines, measuring devices, instruments)"

https://en.wikipedia.org/wiki/De_architectura

Ptolemy (100-170AD) wrote treatises on many subjects, and his works on astronomy and geography "never ceased to be copied or commented upon, both in Late Antiquity and in the Middle Ages" https://en.wikipedia.org/wiki/Ptolemy

Vegetius wrote a military manual On military matters, extremely influential until about 1500.

https://en.wikipedia.org/wiki/Vegetius

Apicius wrote On the Subject of Cooking, a cook book.

https://en.wikipedia.org/wiki/Apicius

(So much ancient writing hasn't survived, e.g. none of Aristotle's many writings survived - what we know as his writings are just his students' lecture notes.)

WalterBright
The fact that any books would have to be copied by hand would disastrously limit their spread and influence. How many people could learn calculus if only 5 copies of the textbook were created?

So many being lost is an inevitable consequence of very very few copies ever existing.

thoughtsimple
He started off well with the history of civilization collapse, but then he got to the heart of his presentation with the misguided idea that modern software has become more broken recently. He showed some incomprehensible bugs in Visual Studio.

All I have to say is that on Windows, the Blue Screen of Death was a prominent problem for many years. On the original MacOS, which had non-preemptive multitasking and no hardware memory protection, any program could cause a denial of service or take down the whole computer. These are 30-year-old technologies that were clearly worse in many ways than modern equivalents.

Has software improved recently? No, I would say it has gotten somewhat worse recently but if you look at where we are today versus say 1998, things are clearly better. It isn't really possible to know if we are in a major decline or a minor local minimum.

ncmncm
Microsoft is not a legitimate standard of comparison.

Their business model was (and to a degree still is) predicated on getting people used to everything having bugs, and not complaining or, more importantly, not angrily returning it for a refund. They succeeded beyond their wildest dreams, and now everything else works almost as badly.

hattmall
The BSOD is kind of actually the opposite in reality. The software is worse and the hardware is better, so we are running more software to containerize those bugs. Software is more buggy and bloated than in the past; it's just that the failures are silent, and we don't have to reboot the entire system, we just reboot pieces.
ido
Win 9x was absolutely worse and a lot less stable than the NT lineage (that we're currently in). Classic Mac OS was absolutely worse and less stable than the NeXTSTEP lineage we're currently in. I used Linux for the first time in 1998 with Red Hat 5.2, and it was an absolute pain to get working/install, and a lot of hardware was unsupported. At least it didn't crash as much as Windows 9x!
xmprt
I think software will never feel like it's improved because, as technology improves and changes, new features will be expected of software, and so while some parts of software improve, other new parts will bring down the average quality.

It reminds me of Shepard tones, which sound like they are infinitely increasing in pitch, which is impossible because the human ear can only hear certain frequencies. (https://www.youtube.com/watch?v=BzNzgsAE4F0)

> cognitive overhead of getting a simple app up and running gradually took all the fun out of the job for me

This video was recently linked in another thread on HN, but I feel this comment really resonates with its main conclusion: https://www.youtube.com/watch?v=pW-SOdj4Kkk

I get what you are saying. I recently watched this presentation from DevGAMM 2019: https://www.youtube.com/watch?v=pW-SOdj4Kkk. The TLDR is that software is experiencing reverse evolution, such that we are regressing in terms of most metrics that we care about (user experience, performance, reliability, etc.), most of which I agree with. If what you are trying to build is a low-level portable system utility, then sure, Electron is the "wrong" choice. But this feels a lot like the argument around low- vs high-level languages. Sure, coding in assembly has the potential to be more efficient, but it does not make sense in every case (or even most cases?). There are valid use cases for high-level languages.

For open source projects in particular, it seems like having a tech stack that many people are familiar with, like to code in, is easy to "hack" on, and can be published to multiple platforms is a clear benefit.

Edit: And, yes, the performance for the end user suffers because of these concessions to the developer. But in my experience with open source projects, these trade-offs can make the difference between having a viable application and not. I would love efficient, optimized, native apps, but given the choice between an Electron app and nothing, I would rather have the Electron app.

Jonathan Blow's talk 'Preventing the Collapse of Civilization' goes into detail about how technology can regress, with the Mechanism being one example. This kind of thing is far more common than we think; most would be surprised to learn that Ancient Greece had writing for about 600 years before forgetting it. There was no writing in Greece for over 400 years, until they adopted the Phoenician alphabet around 730 BC.

He compares this situation to the state of software development today. It's a sobering watch.

https://youtu.be/pW-SOdj4Kkk

ceejayoz
One of the interesting possibilities for the Fermi Paradox is the fact that we've wiped out the readily accessible deposits of iron, coal, oil, etc. A second go at the industrial revolution would be much harder, if we regressed that far.
crispyambulance
I think it might be helpful to consider what happened to the dinosaurs. The same could happen to us if we fail to evolve in time...

Asteroid hits. Wipes out almost all life. A million years later, the biomass around us will have all been converted to a black ooze (oil), covered by millennia of rock, sediment, and tectonic plates. Eventually future civilized beings who plunder the Earth for our biomass that has been converted to oil and coal discover uncanny hard-to-explain remnants of a past civilization of, get this, bipedal animals.

jrochkind1
Are you suggesting if the dinosaurs had tried harder to "evolve in time", they could have better survived an asteroid hit? I don't think that's how evolution works. For humans either.
crispyambulance
> Are you suggesting if the dinosaurs had tried harder to "evolve in time", they could have better survived an asteroid hit?

Well, yes. It's not fair to the dinosaurs, I admit. They hardly had a chance to develop language and mathematics. They were still too busy ripping each other's faces off. And not having opposable thumbs, of course, really put a damper on technological development. Maybe in another million years things would have been different, but the asteroid had a different idea.

We, on the other hand, are at least on the precipice of the capability to divert asteroids. Hopefully we don't get an asteroid visit too soon.

jrochkind1
I think it's a misconception that we can somehow speed up or direct "evolution", no matter what language and mathematics we have.

Diverting an asteroid, however, is not evolution.

Tossrock
Of course you can speed up and direct evolution, it's called artificial selection and it's how we got dogs, cattle, and almost all crops.
ben_w
One thing I sometimes wonder is: if there was a dinosaur species which had reached roughly our level of intelligence and society, would they have left enough of a mark on the world that we could even tell?

If they got further than us, tried to capture an asteroid and mine it, could they have wiped themselves out without leaving behind technosignatures that would still be visible?

ninjanomnom
While our time has been short on geological scales, that's a point in favor of being discovered later. So much has been deposited by us in the geological record in a stunningly short (from a future geologist's viewpoint) timescale: plastics, temperature variations, and irradiated patches of land.

To make things worse for a hypothetical advanced dinosaur species, if they were at the point where they could capture an asteroid, they would be roughly equivalent to or better than us, and even we could survive an asteroid extinction event just fine. Society as we know it perhaps wouldn't survive, but an event that could genuinely end us as a species would need to be exceedingly destructive or long-term. Otherwise the remnants will rebuild, give or take a couple of tens of thousands of years, which is basically nothing on the timescales we're thinking about here.

Robotbeat
Humans left a ridiculous number of stone tools, which last basically indefinitely in the fossil record (and, unlike bone fossils, don't require super special conditions to preserve them). So I think we'd have enough evidence from modern artifacts made of similar types of materials. A car buried under sediment would rust away, but you'd be left with a big car-shaped bunch of rust as well as chunks of pure metal which better resist corrosion, like stainless bits or the platinum catalytic converter, various bits of glass and ceramic in very artificial-looking shapes, etc.
feurio
Maybe the archaeologists of the future would weave scholarly narratives as to how these ferrous-based lifeforms lived, what their diet was and how they came to perish.

Film-makers would use stop-motion techniques to depict battles between Fordusprefectops and Chevroletcamaro-Rex whilst their own early ancestors look on, clad in loincloths and bras made from footwell mats.

jefurii
Like in David Macaulay's "Motel Of The Mysteries".
bmn__
https://news.ycombinator.com/item?id=29302827 "cars are the dominant species, since so much of our world has been dedicated to them"
7thaccount
This is called the "Silurian hypothesis" and is named after the Doctor Who episode that showed an advanced dinosaur race called the "Silurians" that went into cryo millions of years ago.

Some real geologists explored the idea (someone could find the paper and subsequent news articles) and I think the conclusion was that on geological time periods, there might not be much left for us to find.

easygenes
Pretty sure all the radioactive ore refining we have done is going to leave a mark for billions of years. Never mind how we’ve displaced large percentages of the readily available rare earths already.

I doubt the dinosaurs shuttled away or buried all their geo-engineering marks.

NateEag
Schlock Mercenary is a sci-fi webcomic with a fun thread about sapient dinosaurs fleeing Earth pre-impact.

There's no good way to read just those strips, but it starts here:

https://www.schlockmercenary.com/2018-07-25

Vetch
Yep, the paper's conclusion (https://ntrs.nasa.gov/api/citations/20200000027/downloads/20...) was just as you said: there'd not be much of a record to find for civs older than 4 Ma. Also, there are past anomalous and abrupt events in the geological record that appear similar to byproducts of our anthropogenic activity. Evidence against is that the timing of the majority of such anomalous events can be matched to mundane geological activity in the record.

The rate at which we're accumulating change compared to the geological record is also a strong argument against, although they argue that limitations in current dating methods reduce how much can be said with certainty about prior epochs.

While there is precious little reason and evidence to believe a priori in a previous advanced dino civ, there are studies that could be done on sediment data that'd lend more certainty (such as looking for unusually rapid metal production).

> Anthropocene layer in ocean sediment will be abrupt and multi-variate, consisting of seemingly concurrent-specific peaks in multiple geochemical proxies, biomarkers, elemental composition and mineralogy. It will likely demarcate a clear transition of faunal taxa prior to the event compared with afterwards. Most of the individual markers will not be unique in the context of Earth history as we demonstrate below, but the combination of tracers may be. However, we speculate that some specific tracers that would be unique, specifically persistent synthetic molecules, plastics and (potentially) very long-lived radioactive fallout in the event of nuclear catastrophe. Absent those markers, the uniqueness of the event may well be seen in the multitude of relatively independent fingerprints as opposed to a coherent set of changes associated with a single geophysical cause.

My opinion is this ultimately boils down to how hard human-level intelligence is to evolve, which is why the hypothesis is interesting in the context of the Fermi Paradox. Intelligence might be extremely difficult to evolve, it might require an unusual set of background environmental conditions, or it just might not be that useful in general.

Robotbeat
Humans could most certainly survive an asteroid if we worked at building the technology and industrial capacity to do so, including the technology and industrial capacity involved in redirecting asteroids: https://www.youtube.com/watch?v=g7zdeQ-Uw8k

(To redirect Chicxulub would require a MUCH larger capability, probably on the order of 10 million tons in orbit, but that's possible with a fleet of large reusable rockets capable of getting the cost to orbit down to around $10/kg, or with equivalent development of in-space resource utilization capacity.)
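
(Rough arithmetic on those figures, just to make the implied scale explicit:)

    10^{7}\ \text{t} \times 10^{3}\ \tfrac{\text{kg}}{\text{t}} = 10^{10}\ \text{kg},
    \qquad
    10^{10}\ \text{kg} \times \$10/\text{kg} = \$10^{11}

That is on the order of $100 billion in launch cost alone, even at that optimistic $/kg figure, before any hardware or operations.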

scionthefly
You assume we would be able to organize the social, political, and financial impetus to take on a task like that in a unified way that has a chance to succeed. Recent events seem to say that we are at least as likely to just fight about it until the asteroid hits.

For greatest success, I think there would need to be two (but probably not many more than two) major efforts going on simultaneously, in much the same way that the CMS and ATLAS experiments at CERN were independently looking for the Higgs. If one fails for some unforeseen technical reason, the other might not, if they took a different approach.

kenfox
Coal and oil formation would not occur again, because there are now lifeforms able to digest trees. It would be extraordinarily difficult to reset to those primordial conditions. I don't think an asteroid hit could do it.
Vetch
Are you sure about this? I actually looked into this not too long ago and that theory no longer seems well supported.

Oil forms when sea plankton and algae are buried and exposed to high pressures and heat. Coal forms when dead plant material, somehow protected from biodegradation (say by mud), forms peat and is then buried and exposed to high pressures and heat.

I was also surprised to learn that the inability of fungi and bacteria to degrade lignin is unlikely to have been a key driver of coal formation during the Carboniferous period; instead it was "a unique combination of everwet tropical conditions and extensive depositional systems during the assembly of Pangea".

Source: https://www.semanticscholar.org/paper/Delayed-fungal-evoluti...

api
I'm not sure I buy this. The main driver of the industrial revolution was intellectual: the emergence of science, classical liberalism, mercantilism, and modern economics. All this was in place before anything really took off.

Industrialization without fossil fuels would scale much more slowly with wood being used at first and then probably crops being grown for energy (biofuel). Once we figured out electricity we'd have large scale hydropower and wind power. Then we'd figure out either photovoltaics or nuclear fission, at which point we'd be off to the races. My guess is we'd be almost 100% nuclear and hydro powered right now with use of photovoltaics growing.

Stable power grids would probably take longer to emerge, but we figured out simple rechargeable batteries (lead-acid) fairly early. People would probably have banks of these in their homes to power minimal lighting and things like radios, TVs, etc. at night and run their appliances at specified times when the grid was at high power. You'd probably see a food system less dependent on refrigeration until stable grids emerged.

On extremely long historical timescales I suppose depletion of other elements is possible, but things like iron are incredibly common in Earth's crust. I'd be concerned more about rare elements.

sammalloy
> One of the interesting possibilities for the Fermi Paradox is the fact that we've wiped out the readily accessible deposits of iron, coal, oil, etc. A second go at the industrial revolution would be much harder, if we regressed that far.

It’s odd to me that we don’t see this specific point discussed all that much. Sure, it comes up here and there now and then, but I think it deserves an extended discussion. My understanding, based on what I’ve read about this implication, is that we as a human species only get one chance to evolve beyond the gravity well of our planet. If we fuck this up, and by all accounts it looks like we are, we will be condemned to extinction on Earth.

akira2501
I doubt that. There's plenty of iron that doesn't require mining to find, and it naturally accrues in the environment over time: https://en.wikipedia.org/wiki/Bog_iron

We stop mining coal when it is no longer economical to do so, not when the mine is entirely depleted. There's a bunch of coal at or near the surface and will be for quite some time.

The same goes for oil. We extract that which is easiest to extract and fractionate into the products we desire, preferring to leave things like the energy-intensive and more polluting "tar sands" behind.

marcus_holmes
Japan has very low quality iron ores. Yet they turned out (arguably) the best steel swords in the world.

Constraints don't always lead to bad outcomes.

short_sells_poo
I thought Japanese steel wasn't all that impressive in absolute terms; rather, it was impressive because of the very bad quality ingredients they started with. It also made ownership of a sword (katana) and the accompanying paraphernalia accessible only to a small caste of elite warriors. Both the raw materials and the process were hard to come by.

My understanding is that historically the best steels were made in India/Southern India, where wootz steel comes from, and that for more than 2000 years the rest of the world was almost bargain tier in comparison. To the degree that samples of wootz steel were brought back to Europe even in the 18th century in an attempt to replicate the process.

I'm only an amateur metalworker though so I hope someone more knowledgeable can correct any errors.

m4rtink
The pre-modern/medieval Japanese iron industry was actually quite massive! We went to a museum in Izumo, and there was a nice map showing the area in ancient times and now, and the difference was pretty stark - there was just swamp where Izumo city is today, and the local Lake Shinji was like twice as big as it is today.

All the new land and the end of the swamps are apparently the result of hundreds of years of iron ore mining in the nearby mountains. So even with primitive means and shitty ore, if you need the material and go at it for centuries, you can achieve substantial results. Not to mention reclaim some land as a result. :)

Also, in a related (but much more recent) development, there were coal mines in Japan mining from under the seabed via tiny islands!

Hashima is the most extreme example, basically a piece of barely dry rock that has been converted to a concrete city housing many thousands of workers and their families: https://en.wikipedia.org/wiki/Hashima_Island

But there were other such mines, some of the local ones even connected to Hashima via the underground works!

ceejayoz
There's a difference between "we have to do more refining to get usable metal" and "there's literally none around unless you go deep underground into a new deposit".
PeterisP
The buried and overgrown remains of any modern junkyard, port, railway depot or rubbish dump would be a decent shallow ore for many metals. During the industrial revolution we consumed almost all of the shallow fuels, but we haven't consumed any metals; they're right here on the surface and more accessible than before - it's just that they're currently tied up in some products or structures we use.
WalterBright
A sword is a tiny amount of metal compared to, say, a steam engine. Or a battleship.
marcus_holmes
Japan fielded armies of hundreds of thousands of men, all equipped with swords. I guess if you totalled the weight of metal on them, you could cobble together a battleship. But it's not really about quantity, more the quality of the ore.

And the point isn't about that. The point is that the constraint (bad ore quality) forced the Japanese to get better at metalworking. Resource constraints aren't as bad as we think, because we're used to making things without those constraints. But our descendants, having always had those constraints, will find better ways of solving them than we can think of.

WalterBright
Hundreds of thousands? The battles I read about maybe reached tens of thousands, and who knows how many had swords.

Even in Europe, a large part of the armies were peasants with whatever came to hand. Their resource constraint on metals wasn't the ore, but the availability of the enormous quantities of wood required to process it.

marcus_holmes
Wikipedia says 400,000 samurai: https://en.wikipedia.org/wiki/Samurai#:~:text=In%20the%20187....

Every samurai had at least two swords, so on the order of a million swords. Let's say a sword weighs 1 kg; that's 1000 tonnes of steel. So probably not enough for a battleship, but maybe a small destroyer.

bmn__
> arguably

Experiment: <https://youtu.be/ev4lW0wbnX8?t=1245> (German audio track, machine translated subtitles in English available)

The question is whether this settles the argument or stokes its flames.

stevbov
They're mostly the best in pop culture. There's this weird view that European swords are heavy and brutish. In reality, a katana doesn't weigh any less than a longsword.
isk517
Japan definitely produced some of the most beautiful looking swords, and they are extremely impressive given the quality of iron they are forged out of, but I don't think any sane person would choose one to take into battle given any other option.
lowbloodsugar
I am confused. Are you saying that the quality of the iron makes them shit swords, or that guns are better than swords?
isk517
From what I've heard and seen, the quality of the metal makes them prone to breaking, and you need skill to take full advantage of the razor-sharp edge. I have seen a great video, which I wish I could find again, showing katana students struggling to cut bamboo and then a master going clean through. For the record, I don't think they are shit swords, just that they look really cool, and that has led various media to elevate their superiority over other swords beyond reality.
fancifalmanima
It would depend a lot on what time period you're talking about. I'm not an expert, but I do a bit of amateur forging and have learned a couple of things just reading about this craft. By the 1400s or 1500s, spring steel had been developed in Europe. This would enable a sword to flex rather than break. Japanese swords from the time weren't flexible and were generally more prone to breaking. This had a lot to do with the raw materials that were available. They also tend to have a softer mild steel core to help with this problem, with a very hard and sharp edge. If the entire blade were made of the same material as the edge, it would be extremely inflexible and brittle. They're kind of designed in a way where a part of the very hard and brittle edge can crack, but maybe it won't extend all the way up through the blade, leaving it somewhat usable. A European longsword from the time might just flex in the same circumstance.

That's not to say that European swords were better than Japanese swords in every way. This is one of many, many factors. And I'm sure there were plenty of crappy longswords at the time (and crappy katanas), so you kind of also have to decide if you're comparing the best examples, average examples, or low quality items as well. The skill of the wielder is also important. If you're throwing out a bunch of random soldiers without a ton of training and giving them a sword, you might want to give them something they're less likely to break. My understanding is that there was a period in Japan where only samurai, who were generally very skilled, were allowed to carry swords (if my reading is to be believed). They would probably know how to avoid putting their blade in situations where it would be prone to breaking.

And traditional Japanese sword-making techniques are extremely impressive and interesting to read about, given the materials that were available at the time.

bostik
A friend of mine is a hobbyist blacksmith, and he put it really well.

Japan developed their hugely overdone turned-steel technique because the ore they had to work with was so bad: hammering the garbage out was the only way to get the quality of the steel itself to acceptable levels. As a result, Japanese smiths developed something very close to what we'd now call layered steel.

European swordsmiths (think: Toledo) had access to higher-grade ore, and as a result never needed to develop techniques to work around fundamental problems with their source material.

marcus_holmes
This is my point. Constraints sometimes take us to places where we otherwise wouldn't have gone.

The world is (imho) a better place because Japan has bad iron ore. If that wasn't our reality, we would never have guessed it.

gpm
Coal, yes. Iron, wouldn't the iron we moved to the surface be even more accessible? We didn't destroy it, just rearranged and concentrated it?
m4rtink
Stone coal could be, at least partially, substituted by charcoal - and often was in the past, where reachable underground coal reserves were not available.
mcguire
"So by weight charcoal and anthracite coal have an energy density of about 30 MJ/kg, while poorer kinds of coal range down to half of that." (https://www.reddit.com/r/askscience/comments/udjl5/charcoal_...) However, the density of charcoal is much less than that of coal: 200 kg/m^3 vs 1500 kg/m^3 (for solid anthracite).

Producing 1kg of charcoal requires 3-4kg of wood. (Producing the 900°C for the process is an exercise for the reader.) (https://www.fao.org/3/y4450e/y4450e11.htm)
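
To put the per-volume difference in perspective, here's a quick back-of-the-envelope sketch using only the figures quoted above (treating them as rough approximations; real values vary by grade and moisture):

    // Rough volumetric energy density from the figures cited above.
    // These numbers are approximate and vary with grade and moisture content.
    const charcoal   = { specificEnergyMJperKg: 30, densityKgPerM3: 200 };
    const anthracite = { specificEnergyMJperKg: 30, densityKgPerM3: 1500 };

    const volumetricMJperM3 = (fuel: { specificEnergyMJperKg: number; densityKgPerM3: number }) =>
        fuel.specificEnergyMJperKg * fuel.densityKgPerM3;

    console.log(volumetricMJperM3(charcoal));   // ~6,000 MJ/m^3
    console.log(volumetricMJperM3(anthracite)); // ~45,000 MJ/m^3, roughly 7-8x charcoal per unit volume

So by volume you'd need to haul and store several times more charcoal for the same energy, which is part of why the 3-4 kg of wood per kg of charcoal matters so much.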

ceejayoz
Guess what we do most of our iron smelting with?
mas147
What?
x1f604
Primitive iron smelting was done with charcoal, I believe. Correct me if I'm wrong.
f00zz
Yeah, I think we switched to coke during the Industrial Revolution not because it's a better fuel than charcoal, but because we were running out of trees.
stan_rogers
Coke was Abraham Darby's doing (around 1709-1710), and that was mostly to corner the market for cheap pots and kettles. There was no way for the charcoal crowd to compete on iron, and the bronze bunch - the norm for that sort of thing up to that point - was left forever in the dust.
ceejayoz
Yes. Being stuck at that point would be the problem.
Robotbeat
Would it? Most new smelting plants today in the US (any built in the last few decades) use a mixture of hydrogen and carbon monoxide as the reducing gases (natural gas primarily as feedstock, but no reason it couldn’t be about 90% hydrogen produced via, say, hydroelectricity or wind).

Coal didn’t overtake charcoal for smelting iron in the US until the latter half of the 19th century, well after the first industrial revolution.

Melting down scrap iron is one of the main sources of steel in the US, and that is done straight with electricity in arc furnaces.

Coal accelerated the Second Industrial Revolution, but it was not essential. Far more important for enabling the first Industrial Revolution was some of the early scientific knowledge about steam and pressure, such as the work of Robert Boyle, a lot of that based on a sort of reaction to the classics that had been revived in the Renaissance. The biggest argument for coal is, indirectly, that it helped the viability of British society (after the island had had most of its trees cut down over the previous 500 years), which played an important role in the Scientific Revolution (Robert Boyle was Anglo-Irish)… although by the time Britain was playing an important role, the Scientific Revolution was already underway on the mainland of Europe. As long as our books are not all destroyed, I think we'd have no problem bootstrapping from charcoal the second time around.

(I think a lot about long-term data storage… writing in stone or fired clay still seems like one of the best methods for writing that needs to last 10,000 years… it was, after all, the preserved Greco-Roman classics that enabled the Renaissance and therefore the Scientific Revolution.)

randmeerkat
Or instead of stone tablets you could simply build a 10,000 year clock.

https://www.businessinsider.com/everything-you-need-to-know-...

ceejayoz
I think that vastly underestimates the dependency tree in modern society. Storing hydrogen in useful quantities is tough, requiring fairly sophisticated metallurgy and cryogenics.

Finding out we've got a hard to replace left-pad module somewhere far up the tech tree wouldn't be fun.

Robotbeat
The dependency tree of 19th Century or early 20th century society is a lot more straightforward, however.

And no, you don't need such sophistication for storing useful amounts of hydrogen. Storing large amounts of hydrogen (in this case, also mixed with poisonous CO) was solved in the beginning of the 19th Century (well, late 18th century) in Britain and Germany by using very large near-atmospheric storage vessels called Gas Holders: https://en.wikipedia.org/wiki/Gas_holder

Salt caverns can also be used for greater volumes, i.e. for seasonal storage, as are already used for hydrogen storage in a few places in the US and elsewhere. https://en.wikipedia.org/wiki/Underground_hydrogen_storage

mcguire
That gas holder article says they contained methane or coal gas. Methane's density is 0.657 kg/m³; hydrogen's is 0.08375 kg/m³.
Robotbeat
Coal gas is a mix that (by energy) is about half hydrogen, as I said. And hydrogen has a specific energy of 142 MJ/kg vs 55.5 MJ/kg for methane.
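
Putting the two sets of quoted figures together gives a rough per-volume comparison at atmospheric pressure (a back-of-the-envelope sketch that takes the numbers above at face value; both values shift with temperature and pressure):

    // Per-volume energy at roughly atmospheric pressure, from the figures in this thread.
    const hydrogen = { densityKgPerM3: 0.08375, specificEnergyMJperKg: 142 };
    const methane  = { densityKgPerM3: 0.657,   specificEnergyMJperKg: 55.5 };

    const mjPerM3 = (gas: { densityKgPerM3: number; specificEnergyMJperKg: number }) =>
        gas.densityKgPerM3 * gas.specificEnergyMJperKg;

    console.log(mjPerM3(hydrogen)); // ~11.9 MJ/m^3
    console.log(mjPerM3(methane));  // ~36.5 MJ/m^3, about 3x hydrogen per unit volume

So an unpressurised gas holder sized for methane would hold roughly a third of the energy if filled with pure hydrogen, which is the trade-off both comments are circling.
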
Retric
Except the Iron Age started around 2000 BC, so the world would largely be without iron for a very long time.
masklinn
> Coal, yes. Iron, wouldn't the iron we moved to the surface be even more accessible? We didn't destroy it, just rearranged and concentrated it?

Might depend how long it takes: because rust is porous and friable, rusting iron should eventually degrade to nothing, and the rust would be difficult to re-concentrate then reduce back to iron.

gpm
Would whatever the rust turns into be any less accessible (ignoring availability of coal) than what we started out with though? I don't pretend to know the entire "iron cycle", but it seems like it ought to just be turning back into the same sort of minerals that we originally extracted it from?
masklinn
Iron rusts over historical timescales, metal deposits form over geological ones.
gpm
Am I wrong to think of metal deposits as just "rust mixed with rock"? It doesn't seem like the rock part (i.e. the details of what it is mixed with) should be critical for the extraction process?
masklinn
> Am I wrong to think of metal deposits as just "rust mixed with rock"?

They're concentrated rust mixed with rocks, otherwise it's not economically viable to extract.

Like, iron is ridiculously common, relatively speaking: on earth as a whole it's more common than oxygen, for the crust it ranks 4th at 5% by mass, meaning if you went at it randomly you'd need to sift through 20kg of materials to get 1kg of iron.

Currently, we exploit formations with as little as 15% iron (banded iron formations / taconite); that's the lower limit of what's economically feasible, and those result in absolutely enormous amounts of tailings (waste material).

Pre-industrialisation, unless you had no other choice (e.g. you only had ironsands to work with), you really wanted to exploit natural (or "direct-shipping") ores, in the 60-70% range; the extraction is way too much work otherwise.

mikewave
Sure, if you have a billion years to wait for it to all run through the rock cycle again.
ncmncm
The process that concentrated the iron ore we rely on does not operate anymore.

Fortunately, despite millions of tons of production every year, we are nowhere near using up the ore.

evilduck
The problem will be a lack of concentrated deposits, making post-collapse (and new) industrial efforts much harder. A pile of rust from one tractor in someone's back yard is not going to be worth the effort to mine and refine.

Possibly an analogous situation is the history of steel in Japan and their efforts to extract iron from sand, since they don't have significant iron ore to mine on their island.

gpm
The pile of rust that used to be a tractor, sure, that's not worth much. The pile of rust that used to be a city though... surely that's more concentrated than the rust mixed with rock (or clay) that we originally extracted it from?
evilduck
Most likely not: https://en.wikipedia.org/wiki/Iron_ore, it's almost 50% iron in the worst case and can be up past 70% in the best case. The geological processes that form this ore in the mantle do a pretty good job of concentrating iron by density but it takes geological scales of time to do so and have it lifted back up through the crust. We might have to excavate a lot of earth to get to the concentrated vein of ore but when we find it it's relatively easy to chase and turn into usable iron once you know a little bit about mining and smelting as a society.

A city decayed to rust is going to be a thin layer of iron spread out over miles with some hot spots like where a building once stood (but presumably without a map of the city in this distant future scenario), but there won't be a vein of concentrated ore. Distributed rust can definitely be turned back into pure iron but the energy requirements are going to be substantially higher to do so since you're going to have to sift through much more material to collect it, more material to separate and concentrate it, more material to smelt off, and your operations will have to be more mobile to retrieve it over a larger area. That's why I think retrieving iron from our society will be more like extracting iron from ironsands (2-20% iron), and it will have similar effects on that subsequent society that sits between us now and some future point where geology has re-supplied it to the surface millions of years from now.

masklinn
> Most likely not: https://en.wikipedia.org/wiki/Iron_ore, it's almost 50% iron in the worst case and can be up past 70% in the best case.

FWIW those are the ratios for the oxides themselves but the formations are not necessarily huge piles of pure oxides, if you go a bit lower to the "sources" section the lowest-concentrated formations viable for exploitation are

> Banded iron formations (BIFs) are sedimentary rocks containing more than 15% iron composed predominantly of thinly bedded iron minerals and silica (as quartz).

However that's only for post-industrial societies, at least if you have alternatives, as it requires churning through ridiculous amounts of materials.

When you don't have alternatives the ironsand article (which would be used in places with no good or accessible ore deposits e.g. japan, famously) quotes

> Sand used for mining typically had anywhere from 19% magnetite to as low as 2%.

though much like gold panning the ironsand would be sluice-separated to a concentration of 30-50% before it was further processed.

Most ironsands deposits are not considered financially exploitable to this day though, with the exception of NZ's where the iconic "black sand" beaches of north island are extremely rich in magnetite (up to 40%).

throw0101a
> how technology can regress

Anyone interested in 'rebooting' society after a major collapse can check out:

> The thirteen chapter book starts off explaining how humanity and civilization works and has come to be and how this could possibly be altered in the event of worldwide disaster — such as avian flu. Leaving us with the essential question of what knowledge would we need to rebuild civilization as we know it, which Dartnell answers by looking at the history of science and technology.

> Dartnell explains and realistically details a 'grace period' in which survivors can salvage food, materials and tools from the ruins of today's society. However, after a certain point this grace period would end, and humanity would have to produce their own food, make their own tools, practice hygiene and fight infection to maintain health, and develop energy stores for a new society to survive the aftermath.

> The book covers topics like agriculture, food and clothing, substances, medicine, and transport. Dartnell points out that applying the scientific method to basic knowledge will enable an advanced technological society to reappear within several generations. Along with giving the history of scientific invention and how that applies to humans were they to recreate that, the book also offers anecdotal bits of information in the form of endnotes, giving facts such as how carrots were originally white but grown orange in honour of the Dutch royal family, and how onions are the leaves of the onion plant.[3]

* https://en.wikipedia.org/wiki/The_Knowledge:_How_to_Rebuild_...

Full bibliography available if anyone wants to dig into a particular topic:

* http://the-knowledge.org/en-gb/the-book/

hnmullany
Same way - the extended 3rd century crisis in the Roman Empire led to a loss of sculpting expertise. No art was commissioned for so long that skills weren't passed on.
bobthechef
An example closer to home, though perhaps not as stark, is the loss of expertise in various industries that have been outsourced. The US is a good example. This is one (of many) arguments against outsourcing your industry just because it's cheaper and increases the profits of the outsourcing company. The tradition and culture that allows a certain industry to flourish is interrupted and destroyed and rebuilding that is no small task.
toss1
Yup, this is a massive strategic blunder of historical scale.

While the US and western countries are asleep at the switch and think that they are exploiting cheap Chinese labor, the Chinese have a 100- and 500- year plan and are exploiting our myopia for short-term profits to gain manufacturing expertise and military advantage. It may not be too late to reverse, but it is close.

It is really the result of not thinking ahead and letting the business lobbies have what they want today, instead of putting long-term strategic considerations first. Different incentives, different results, tragedy of the commons all over again.

ch4s3
A 100- and 500-year plan is ludicrous on its face. They don't even credibly hit their 5-year plans a lot of the time. The idea of planning something so large and complex so far into the future seems like wishful thinking at best.
scrumper
I'm not sure I agree with that. Misses on short-term plans are a different concern from progress on a multi-century plan. Pig iron production might've missed its forecast for 2020, but it's undeniable that China is building tremendous expertise in advanced manufacturing - and at a similar rate to that at which the West is shedding it.
ch4s3
The narrative of "the west doesn't manufacture anything" is greatly oversold. The US makes more steel than it did 40 years ago, for example. Sure, we make fewer hairs and t-shirts, but it's natural for that stuff to chase lower labor costs. We're also an agricultural powerhouse.

Now, for sure we have fallen a bit behind in making chips, but that may change.

lowbloodsugar
So we are the Romans. We can make swords and bread! We'll do great in WWIII!
ch4s3
That's ridiculous. We're churning out new ideas in biotech, medicine, media, finance, automobiles, airplanes, batteries, some solar stuff, and on and on. We have a huge, dynamic economy that does a lot of things really well. We've uncovered some major issues in the last two years but we've done better I think than one might have expected under the circumstances. I think in particular the rapid development and production of multiple vaccines in record time displays our capacity to innovate and manufacture complex goods.

It's impossible to know what's coming in the distant future, but it doesn't feel like any of our problems are insurmountable.

toss1
We are actually getting quite behind on some of that stuff you mentioned. Boeing is a shell of its former self, having been hollowed out by the financial types, and can't launch a new aircraft without killing hundreds, or a new spacecraft without an order of magnitude more time and money vs SpaceX. China absolutely leads the world in solar cell production, and of course most critical microchip production is offshore.

WWII was won largely on American manufacturing prowess. As the war started in Europe, we had a grand total of 39 tanks. But American manufacturing might was focused on the war effort. Liberty ships launched at a rate of two every three days. 25-35K tanks rolled off the US assembly lines every year, while the Germans could produce only 3K, 5K, 11K, and 18K in 1941-44, respectively. Etc., etc, etc.

However, right now, the US produces barely any microchips, which are pretty much critical to every technology.

Worse yet, having a shortage of conventional technology, such as smart weapons, shortens the time until the choice becomes escalating to nukes or losing.

But thank you for providing a fine example of the "it'll be fine" sort of myopia that brought us to this mess -- it always feels nice to look at our advantages and think everything else is a tail risk. But when the tail risk happens, it's over.

ch4s3
It isn't myopia at all to think that near-term conventional large-scale nation-state warfare is unlikely. I'm well aware that the industrial giants of the last half century aren't as dynamic as they once were, but it hardly matters. We have tons of talent and capacity; it isn't unlikely that newer and better companies will be built.

The only important thing where we seem to be at a serious disadvantage is in chip fab, but there are several companies looking to break ground on new facilities in the US in the next few years.

Trying to hand-wring over an unlikely war seems unproductive. There are two major powers today, and we have an extant model in living memory for how to keep tensions below a critical level. The Chinese plan to compete with the West via the Belt and Road initiative seems unlikely to lead to anything other than economic conflict.

toss1
Belt & road, ya, that isn't a big threat yet, but the ongoing relentless expansionism is a real threat. They have never stopped, and barely slowed down.

Sure, continued appeasement can 'keep the peace'. But it will be at the cost of sacrificing Taiwan and everything in the 9-dash line to the fate of Tibet, Hong Kong, and the Uyghurs. And then whatever else nearby that they will decide to fabricate a claim for. And so forth, and so forth, and so forth.

That is the bargain that authoritarians always strike - constantly cheating around the edges - what's mine is mine and what's yours is up for grabs.

So, people not thinking strongly or long term can argue that "it's unlikely", "it's not worth a conflict", etc. Meanwhile more territory and people fall under authoritarian control.

If it is unlikely, it is because people want to keep their heads in the sand and appease instead of facing conflict now. It's a fool's game.

lowbloodsugar
When WWIII happens it won't matter how many weapons you have at the start of the war. What will count is how fast you can make more weapons. That's all that matters. The war ends when one side runs out of weapons (or soldiers, but China has a bit of a lead there). We can't even make new cars when supply from China is reduced. If you think having aircraft carriers is going to matter then you've not been paying attention. [1] The future is technology. Technology requires chip manufacture. That happens about 100 miles off China's coast, or rather, in China's opinion, on a Chinese-owned island 100 miles off the mainland.

  [1] https://en.wikipedia.org/wiki/Millennium_Challenge_2002
ch4s3
Because we hit max depth, I'll respond here to your bizarre fever dream about the Millennium Challenge and some hypothetical war with China. The only thing that matters in a war between nuclear states is that both sides view it as too terrible to entertain. Nuclear subs mean that the US always has the option to deal a killing blow even if everything else fails. And if it comes to that, nothing else matters anyway, so why worry about it?

They can go on fancifully pretending to plan for 500 years from now, and we'll continue to innovate, live well by comparison, and at the end of the day there's relatively little reason for us to have a major conflict.

toss1
>>...at the end of the day there's relatively little reason for us to have a major conflict.

...at the end of today. <<FTFY

today, there is not, but autocratic regimes create their own reasons to have major conflicts. They are expansionist, and will keep at it until stopped. Moreover, the longer they are allowed to engage in bad behavior, the larger the conflict that will be required to fix the problem. The CCP already has a long history of expansionist actions in Tibet, HK, Taiwan, and the 9-dash line, is running concentration camps for Uyghurs, and is working its Belt and Road plan to capture poorer nations in debt traps. Yet their rhetoric is all about how the US is being the aggressor when it practices Freedom of Navigation exercises in international waters, or is "interfering in internal affairs" by working with Taiwan.

They will not let these topics go until they win, and they will always find new excuses to expand. Their excuses for current expansion are 100% bullsh*t, why would you expect them to not make new ones when convenient? It's their standard mode of operating.

If you think this will somehow end by itself, you make many wrong assumptions about how autocracies work, or you know something that no one else does about how to stop this, and should share it in the interest of world peace.

lowbloodsugar
I mean, I think you hit the nail on the head with nuclear, and I hope the war won't go nuclear. But the war will result in the complete impotence of the USA. China will become like the British Empire: its influence will be everywhere. The USA will have no influence. In 50 years, at the outside, the USA will have a worse standard of living than China. China will out-innovate us. Out-sell us. Out-market us. Out-maneuver us. We've lost the Philippines, but that's ok because it's far away. We'll keep saying that until Mexico is part of the Chinese empire.

The point is that the war won't go nuclear. It'll stay conventional, and that's why we'll lose. We won't have the bottle to pull the trigger. Think you can bluff the Chinese? Lol. "Oh, we are really sorry that your aircraft carrier got sunk in an amazingly tragic accident involving our hypersonic missile test. That is terrible." "Ok, but this is the fifth time, if it happens again we'll nuke you, we swear."

Really the only conclusion I can form from the USA's willful inaction is that the politicians are already bought and paid for.

>and at the end of the day there's relatively little reason for us to have a major conflict

That's exactly what will be said in the USA. "Well, they've put missile bases on these man-made islands, but that's little reason to go to war." "Well, they've bought and paid for contracts that would previously go to US companies through bribery and corruption (or rather spending more on bribery and corruption than we did), but that's little reason to go to war." "Oh, they sunk a carrier and they're really sorry about it, but that's little reason to go to war".

One day the world will be China's, and the USA will have spent fifty years repeatedly, impotently, drawing lines in the sand.

Put another way: The Soviets had nukes, but by the 1980s we'd beaten them. Products all over the world had Made in USA written on them. US products, US money all over the world, protected by US military technology. Now swap Russia with USA, and USA with China, 1980 with 2040. 2030?

toss1
“In preparing for battle I have always found that plans are useless, but planning is indispensable.” — Dwight D. Eisenhower

"No plan survives the first shot of the battle".

So, sure, the plans will likely not survive in any detail even a decade from now.

But the fact that they are making plans, attempting to understand the considerations of those future generations, understand what strategic goals need to be worked on now to help that, and more — this is critical.

In contrast, Western politics and decision-making tend to focus on considerations that are at best hot for the next election cycle.

And notice that the US is not now trying to return manufacturing home because of long-term plans, but because the people noticed that the bargain of cheap goods from China doesn't mean much when you exported the job. The fact that China now makes, and has access to, key components in some key military systems, and that this needs to be reversed, barely enters the mind of the electorate.

ch4s3
> And notice that the US is not now trying to return manufacturing home because of long-term plans, but because the people noticed that the bargain of cheap goods from China doesn't mean much when you exported the job

Most of those jobs, outside of a few narrow categories, were lost to automation. We make more steel, aircraft, and cars than ever but with far fewer people.

China's history of planning has led to tons of misallocation of resources, and I'm not sure I'd want to emulate that.

toss1
>>Most of those jobs outside of a few narrow categories were lost to automation.

Of course automation plays a part, but it is nothing like the whole story. China now has far greater capabilities to make critical components, from chips to solar panels, than are located here. Look what happens to our supply chain when supply is just reduced a bit. What do you think would happen to our economy in a conflict?

>>China's history of planning has led to tons of misallocation of resources, and I'm not sure I'd want to emulate that.

Of course we wouldn't want to emulate that; that's a strawman argument.

But to use that argument to avoid planning altogether is foolish. And yes, resources will APPEAR to be wasted. Keeping more than minimal capabilities immediately at hand is "inefficient".

This is the same as exercising and eating right to stay in good physical condition - no one needs to be able to lift weights and run fast for 99.99% of modern jobs. Yet it is a good idea for many reasons. By your argument, why bother to save for emergencies or tough times? That's inefficient - you should enjoy your full income right now - heck, go into debt too! Also by your argument, we shouldn't plan to avoid technical debt - that's an inefficient waste of programmer time - just write the first thing that comes to mind to deploy the feature.

yyyk
>It is really the result of not thinking ahead and letting the business lobbies have what they want today

There's a tendency in the West to airbrush history and pretend that the opening to China was just a result of business lobbies. There was an intentional policy decision based on 'liberalization through trade' theory, which managed to be one of the most disastrous theories out there (far more disastrous than 2000-era US WoT policies). This was pushed by politicians and academics, and up until the aughts businesses were more reacting to the policy than creating it.

toss1
YES, this is definitely a huge factor that I failed to mention - the concept that economic free(ish) markets are incompatible with authoritarianism and would lead to political freedom.

From a BBC article yesterday [0]:

"Indeed the US trade representative responsible for negotiating China's WTO deal, Charlene Barshefsky, told a Washington International Trade Association panel this week that China's economic model "somewhat disproved" the Western view that "you can't have an innovative society, and political control""

I can sort of see how the idea took hold, as the previous large authoritarian states in USSR and CCP had so dramatically and consistently underperformed (to be polite) the democracies that it would seem that the converse would be true, that economic freedom would force democracy.

So, ya, it would seem like a good hypothesis, but I sure wouldn't bet a lot on it without solid testing.

Yet, the political leaders all turned out to be fools and bet the entire future of democracy itself on that notion.

Then combine that with the MBA idea that the only things that matter are the ideas and management, that trifling things like manufacturing can be outsourced to the cheapest locale, and that manufacturing know-how, innovation, and supply chain control don't matter, and we've come to the brink of destroying the free world.

Now, we have a historical fight on our hands to recover from these blunders.

[0] https://www.bbc.com/news/business-59610019

legutierr
Time will tell as to whether it was a blunder or not, but the policies that incentivized outsourcing were implemented under duress, in a sense. The thought was, if we make the world economically interdependent, then it is much less likely to go to war.

It is not unimaginable that the next Great Power war would itself be civilization-ending, due to the use of nuclear weapons. What's the benefit of having a strategic manufacturing capability, then, if by maintaining that capability you increase the likelihood of war (and by implication, nuclear conflict), compared to the alternative?

wpietri
An important factor here is not so much the business lobbies, but how we reward the decision-makers. CEOs make most of their money from short-term stock price numbers, something that also determines how long they last in their jobs. Combine that with declining CEO tenure [1] and the incentives are really clear: Do anything that will make the numbers look good in the 1-5 year time frame, get maximum money, and GTFO.

A lot of these problems would go away if CEOs were paid a modest amount of cash to live (say, $1m/year) and then the rest of their compensation was in stock that was locked up for at least 20 years.

[1] https://www.pwc.com/gx/en/news-room/press-releases/2019/ceo-...

jimhefferon
I had a summer job one year working on space stuff. I was in the clean room and the first day they took me back there, white suit and all, and took some ball bearings off a wire rack. They were maybe a foot in diameter. My job that summer was to test to see which were the least noisy. (Basically, they rotated very slowly, and there was a phono needle resting on the outside with a strip chart measuring the vibration caused when balls hit each other, etc.) The best ones were going up.

They told me to be careful. These were the rejects from another project, but this project was legally required to use only US tech and the US no longer had the ability to manufacture bearings this large, since we had outsourced for some time and everyone who could do it got out of the business. So we needed these exact ones. (This was the late 70's.)

jlkuester7
I was just reading an article on a nuclear power plant built (in Norway?) recently that mentioned how it was a considerably more difficult/costly project due to the fact that there was not sufficient expertise left in the West since so few reactors had been built over the past decades....
duxup
We had a solution to scurvy in the late 1400s.

And yet it was “lost” (for a variety of reasons) and was still killing people as late as 1911.

https://idlewords.com/2010/03/scott_and_scurvy.htm

Conlectus
I found this to be a compelling counterargument to Blow's alarmism about forgotten knowledge in tech https://www.datagubbe.se/endofciv/

A related point: numerically there are far more low-level developers now than there were in the past he idealizes. No such knowledge is being forgotten; it is used and innovated upon regularly. It may be in less frequent use, but it is still there if needed.

bcrosby95
I've seen that talk but it feels like putting the cart before the horse. The risk isn't in programming, it's in the CPUs themselves.

C and ASM are still some of the most popular languages in the world. But for a modern CPU, there are machines in the production process that only a single company in the world can make.

We're infinitely more likely to lose the capability to make a modern CPU than lose the capability to know how to code in C.

bmn__
> We're infinitely more likely to lose the capability to make a modern CPU than lose the capability to know how to code in C.

I agree with this. I want to add that I think if the knowledge of modern CPUs is somehow lost, it won't be catastrophic, merely crippling, since there are literally tens of thousands of CS students every year learning how to build a CPU from electronic circuits.

We will revert to slow and bulky CPUs, be able to run C on them, and in due time rediscover and reëngineer miniaturisation, superscalar multithreading, etc.

LeifCarrotson
The machines at the very top are the only ones at the very top, yes. But there are dozens of manufacturers and fabs building useful ICs slightly shy of the bleeding edge. Lots of microcontrollers, general-purpose ICs, and special purpose ICs are still very frequently made on 22nm and 45nm scales.

And most of the hard trial-and-error discovery and experimentation has been done already, so it should not take 50 years to recover 50 years of historical progress. A process from 1970 can be done in the garage with 'just' a microscope and projector (and a lot of skill and hard work!): http://sam.zeloof.xyz/second-ic/

Balarny
I was hoping that if I asked my other half, who is an academic in the Classics, he'd say this is untrue and I could then reply "well ackshully...". Alas, it is true.
qwertyuiop_
Technology can regress by idiotic initiatives like banning advanced math

https://reason.com/2021/05/04/california-math-framework-woke...

mc32
Do we have any contemporaneous record of this mechanism, or have references to the device not survived for some reason?

If such a device was considered advanced or cutting-edge or of note back then, wouldn't we expect some reference to the device?

mudita
According to Wikipedia, similar devices were mentioned, for example, by Cicero, and Archimedes supposedly wrote a now-lost manuscript on the construction of devices like the Antikythera mechanism...: https://en.wikipedia.org/wiki/Antikythera_mechanism#Similar_...
naikrovek
very little from the past survives through to today.

I don't think there is any record of this device existing other than the device itself, and mention of it after its discovery in the early 1900s.

to some that will be proof that it is not truly an ancient device, and I think that is hogwash. most things just don't last that long, especially paper, which is where mention of the device would be found, if it ever is found.

mc32
I don't doubt it's ancient; I'm not a skeptic from that POV. I do find it curious that such an object would exist but not have some fanfare around it. I'm sure there's an explanation that eludes me. Could it have been developed in some secrecy, for example because it gave the people who used it some advantage?
leephillips
I think naikrovek explained it. We have but a few scraps of information from the ancient world. There might very well have been a fanfare; it might have been a huge deal—and we might still have no record of any mention of it.
naikrovek
or maybe it was just a fairly common thing, then?

the math used in the device is not complex, nor is its construction. it is only impressive to us because of what we assume about cultures of that time: that they are dumber than we are, less intelligent.

they were just as smart as us, but they were far fewer in number, and had many more limitations than we have when it comes to the library of technologies and skills they can call on to accomplish their goals.

jazzyjackson
Thinking of the most elaborate clockwork built today, those quarter million dollar wristwatches, there isn’t much written about them, maybe a youtube sizzle reel at most. Their customer base is small and they have little reason to document the inner-workings.

Could be this mechanism was just an exquisite commission for a wealthy dude to keep on his boat.

adolph
Darwin College Lecture Series: Decoding the Heavens: The Antikythera Mechanism by Jo Marchant

"There are quite a few mentions of devices that sound a bit like the Antikythera mechanism"

https://youtu.be/Iv-zWbxm2lY?t=2695

Jo Marchant is an award-winning science journalist and author of several popular science books including Decoding the Heavens: Solving the mystery of the world’s first computer and the New York Times bestseller Cure: A journey into the science of mind over body (both shortlisted for the Royal Society science books prize). She has a PhD in genetics, and has worked as a senior editor at New Scientist and at Nature.

KingOfCoders
Not sure if it is a myth, but hadn't we forgotten how to build the Saturn V engines?
ihattendorf
IIRC we have (at least most of) the drawings, but they don't specify tolerances like modern drawings do and were more for reference as parts were developed at specific factories with existing molds. It's more the fabrication knowledge that would need to be rebuilt.
0x138d5
In addition to that, a lot of stuff was crafted by hand with little or no documentation as to what was changed (no 'as-builts').

e: How NASA brought the monstrous F-1 “moon rocket” engine back to life (https://arstechnica.com/science/2013/04/how-nasa-brought-the...)

m4rtink
Not to mention there have been substantial advancements in engineering that make the F-1 engines used by the Saturn V basically obsolete.

The SpaceX Merlins have a much better thrust-to-weight ratio in the same gas-generator cycle class, and the Russian RD-180 powering the Atlas V uses oxygen-rich staged combustion and is much more efficient.

And the current trend seems to be clearly liquid methane and liquid oxygen, covered by the phenomenal Raptor engine from SpaceX, the BE-4 from Blue Origin, and many smaller ones.

So hardly any regression on the chemical rocket engine front - pretty much the opposite, thankfully!

On the nuclear thermal rocket front on the other hand - yeah, we really did regress there. :P From almost flight ready NERVA examples in the 60s/70s to basically nothing even remotely flight ready today...

scj
I'd add to his argument that the problem with software and uptime is that every time we add a layer of abstraction or a library, the five-9s factor may apply.

    Math.pow(.99999, 1) - One dependency.
    Math.pow(.99999, 2) - Two dependencies.
    ...
    Math.pow(.99999, N) - N dependencies.
And that assumes everyone is aiming for 99.999% uptime. Which isn't true.

There's other factors, but that's the one I'd point out.
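
To make the compounding concrete, here's a small illustrative sketch (hypothetical numbers, assuming independent failures and strictly serial dependencies, which real systems only approximate):

    // Effective availability when N serial dependencies each offer the same
    // availability, assuming failures are independent (a simplification).
    const effectiveUptime = (perDependency: number, n: number): number =>
        Math.pow(perDependency, n);

    console.log(effectiveUptime(0.99999, 1));  // 0.99999 -> roughly 5 minutes of downtime a year
    console.log(effectiveUptime(0.99999, 50)); // ~0.9995 -> roughly 4.4 hours a year
    console.log(effectiveUptime(0.999, 50));   // ~0.951  -> roughly 18 days a year

One mediocre link in the chain dominates the whole stack, which is the point about not everyone aiming for five 9s.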

chilling
Really nice presentation. I think the bottleneck here is that developers create tools for other developers to simplify their jobs, and a few decades later we end up with lots of handy, easy-to-use tools that just mask the real toughness of the problem. You can see it easily in web development with tools like React, or even CSS, which is (IMHO) full of nasty hacks.
zuj
Also, I, Pencil comes to mind. https://en.wikipedia.org/wiki/I,_Pencil
justinzollars
Thank you for this talk, this is fantastic
stopglobalism
Holy cow, this is absolutely fascinating. Figured it deserves more than just an upvote-- This speaker you linked mentions another fascinating talk within his lecture:

"1177 B.C.: When Civilization Collapsed | Eric Cline"

https://www.youtube.com/watch?v=M4LRHJlijVU

paxcoder
>most would be surprised to learn that Ancient Greece had writing for about 600 years before forgetting it. There was no writing in Greece for over 400 years, until they adopted the Phoenician alphabet around 730 BC.

[citation needed]

api
I've been programming since... well... technically since I was a kid in the 1980s and professionally since 1998. I think several areas have regressed quite a bit. The biggest one by far is desktop GUI programming.

From the late 1980s until the mid-2000s GUIs had all kinds of standardized visual cues, context sensitive help, UIs usable by both mouse and keyboard, standard interface designs across apps, data binding, and most of all WYSIWYG GUI design software that worked exceptionally well. We had this on 80286 CPUs with 1MiB of RAM and similarly tiny machines.

Today's desktop UIs are a fucking disaster on both the developer side and the user side.

For developers you have a choice between a hypertext language hacked endlessly into a UI and native UI tooling that's far less intuitive and much uglier than what we had back then. Compare the UI designer (not the language) in Visual Basic in the 1990s to today. You could not only design but data bind a complex app that looked decent in 30 minutes.

For users you have no consistency, no keyboard shortcuts (or different ones for every app), no help or help that only works online, etc.

sharemywin
I miss VB. It was really easy to use. There are tools out there but you have to pay every month for them.
pjmlp
What is preventing you from using VB.NET with WinForms? It's still out there.
TheRealDunkirk
Yeah. I know VB gets a lot of derision in these parts, but Visual Studio community edition is free, and .NET is free. So it's totally free to write applications with.
kbr2000
Check Lazarus [0] for Free Pascal [1].

Another viable option would be Visual Tcl [2] for Tcl/Tk [3]. Given the event-based nature of Tcl and Tk, I find it matches well with the kind of methodology VB provided.

And it's far from the only one I remember (although you'll need to do some research here [4]). For example, Komodo IDE used to have a Tk GUI builder that provided for the same kind of methodology, which has been split off in [5].

Enjoy!

[0] https://www.lazarus-ide.org/

[1] https://www.freepascal.org/

[2] http://vtcl.sourceforge.net/

[3] https://www.tcl-lang.org/

[4] https://wiki.tcl-lang.org/page/GUI+Building+Tools

[5] http://spectcl.sourceforge.net/

WalterBright
It is indeed annoying that every app isn't "cool" unless it attempts to reinvent the user interface.
darkwater
How come circa 2000 VB6 was the most hated and belittled programming language out there? I was there, I remember it. Now it's suddenly part of the golden age of desktop programming? I think you should take your nostalgia glasses off.
jjkaczor
The language was crap - the IDE/designer was excellent.

For me - the sweet-spot that I used for all of my own personal projects after outgrowing VB1-6 was... Borland Delphi.

The IDE/designer was at least as good as VB's - but the Object Pascal language was so powerful. It was truly object-oriented and, if one wanted, one could work at a high level of abstraction. Yet it could also natively drop down to low-level Windows APIs, and handle pointer-based work if necessary.

Unfortunately - for my professional career, Delphi never captured the large-scale Enterprise market - that went to .NET or Java and the rest is history...

(Occasionally I noodle about with FreePascal for the nostalgia factor)

mring33621
Visual J++ has entered the chat
pjmlp
When the option was between VB 6 or doing COM in raw C++....

Granted we still had MFC, but then the COM lovers at Microsoft started pushing for ATL, and everything that followed from there.

On the other side we had (and still have) Delphi and C++ Builder, but Borland's management killed the indie culture around them.

cabalamat
VB was good at some things (e.g. UI design) and bad at others (e.g. doing complex computation).
II2II
It's for much the same reason that BASIC (in general) was maligned and people wax nostalgic for it today. A lot of people cut their teeth on it, be that learning how to program or embarking upon a career in programming. In other words, they have much to be thankful for.
ale42
I think that the most hated part of VB6 was more the BASIC language rather than the GUI design part... but maybe I'm wrong.
bzzzt
I think it had more to do with VB being so easy it attracted lots of inexperienced programmers who didn't care about performance or correctness as long as the job got done. Sort of like a pre-internet PHP ;)
darkwater
I also remember DLL hell and all the issues with getting a VB6 program and its runtime to run safely on every Windows installation. But I agree with the other commenters that the IDE and the visual part were really nice (too nice for the average skilled developer of the time; VB6 was the NodeJS of its day).
jhbadger
Because the language itself was terrible. The RAD tooling/UI designer was excellent. There was a product that had those but included a better language -- Borland's Delphi, but Borland and its successors squandered its initial success and didn't invest in improving it.
usrbinbash
That's not a problem of the craft however, it's a problem of the culture.

We are perfectly capable of writing native GUIs, and we have powerful tools for it as well. QTDesigner comes to mind as a well known example.

The problem is, about 12 years ago, application design went through a gamification and "toy-i-fication" phase, from which it has yet to recover, because suddenly everything had to look like it was designed for tablets or gaming consoles. Then "javascript for everythiiiing!" happened, and suddenly the tools and workflows behind all the bloated, inefficient, low-information-density apps were swept into the desktop world.

But since the modern definition of an "App" is basically anything ever displayed on a phone, and devices got so powerful that no matter how badly devs f* it up it still kinda-ish works (if we ignore the battery screaming for dear life), this situation has generally been accepted.

pjmlp
The problem is when this culture extends a couple of generations the craft gets lost.
usrbinbash
Not really, because there is always a high demand for good software. Just because cookware nowadays is usually made from cheap, industrially pressed sheet metal, doesn't mean high-quality copper and cast-iron cookware is no longer made.
jfengel
It is a culture problem, but it goes back before phones and Javascript.

Even before that, application design was usually terrible. Apps were almost universally ugly. Developers just aren't very good at it; it's not in their skill sets. They're good at making apps fast and small, but not at making them usable.

Rare companies would hire separate designers, and the apps could be functional and attractive, but they were the exception. It was a lot of money for something generally considered ancillary.

Browser-based apps get to leverage the work done by browser makers, who put in the effort to make toolkits that looked nice by default. They're not small or fast -- though Moore's Law has made them usable anyway. They also favor the things that designers like -- including not overwhelming the user with dense information. You can still use them badly, but by default any programmer can make an app that isn't awful.

There never was a golden age when developers made good, small, fast apps. It was usually a "pick two" situation, except by spending a lot of money. I'm just as happy to let my battery scream and not cringe at every single app that comes up, and so cheaply that they can give it away or cost dollars rather than tens or hundreds. Others disagree, of course, but I think the market tends to show a heavy thumb on one side of that scale.

jazzyjackson
I wonder where the notion that productivity software should be “attractive” came from. Were whole businesses not built on VisiCalc?

The registers at B&H photo in NYC appear to be some DOS terminal system, but the employees know the keyboard shortcuts by muscle memory and the interface’s reaction time is instantaneous. If that’s not good software I don’t know what is.

jfengel
> I wonder where the notion that productivity software should be “attractive” came from.

From the people who make choices about where to put their money. You can get away with ugly software -- especially if you had something that worked 20 years ago and is still sufficient, and there is no alternative. But if users have the choice of something attractive, they'll pick it.

usrbinbash
That depends entirely on the use case and the user.

I have the choice of many many many text editors and IDEs to manage my source code.

What do I use? vim. In a terminal(-emulator).

Why? Because I like my editor to be ready the moment my finger leaves the ENTER key, I like direct interoperability with the terminal, I like that I can hack together even the most absurd things in .vimrc, and I like that I have the same editor with the same settings on all servers I take care of, even when I connect to them via ssh.

I also use vlc for playing audio and video. Are there players with a more cutting-edge UX? Sure. Do they come with full built-in support for almost all formats, have a tiny memory footprint, can be used to convert stuff, don't spy on me, and can be controlled via a terminal (hello ncurses mode!)? Nope.

LargoLasskhyfv
VLC is bloated. Try mpv from https://mpv.io/
usrbinbash
Thanks, looks like an interesting project, will definitely keep it on my radar.

However, after compiling it and measuring it on an older machine (a Lenovo T430 running Arch) against the out-of-the-box vlc install, I can see no advantage in memory footprint or CPU usage; they are about the same. (I compared mpv running in a terminal against vlc running with the ncurses interface module.)

Since mpv doesn't offer an ncurses interface, and is still in development, I will, for now, stick with vlc.

However, the scripting options definitely seem great, and as I said, I will keep it on my radar.

LargoLasskhyfv
Hm. Don't know how to respond to that. Let me explain how my usage began. Sometime in the days of early Arch, long before systemd... I don't know how and why exactly I tried it. I mostly used mplayer before. Even with mplayer I'd been wondering why I should use VLC over it, because anything I'd thrown at mplayer just worked. While VLC had so many options for nothing I'd need. And it was bloated in direct comparison. Maybe that has changed meanwhile with the change of toolkit, or whatever. IDK, not touching it.

MPV feels like mplayer to me, with the difference of even less hassle for daily usage, and it's under active development, with the option of embedding it into several popular scripts to download/stream content from popular sites, to watch it outside the browser/app or to archive it in the desired format and quality, either from the CLI or via several frontends. Which I have almost no use for. However, for my simple needs of locally playing downloaded stuff, it just works for everything I throw at it, be it from the CLI or a click in some graphical file manager. Instantly. Sometimes on even older systems than yours. With probably less RAM. Say 8GB, with something booted from USB, running 'live', so only 7GB, no SWAP. I wouldn't want to use VLC on that. If it's on the image, I remove it and exchange it for mpv during remastering.

Maybe you compiled yours with all the bells&whistles which aren't needed for sufficient local playback?

Furthermore I didn't cover (re-)streaming to chromecasts, TVs, or such. Don't have it, don't want it.

usrbinbash
> While VLC had so many options for nothing I'd need.

This has next to zero impact on its performance, as the ncurses player talks exactly to the parts of libVLC it has to.

>Instantly. Sometimes on even older systems than yours. With probably less RAM. Say 8GB, with something booted from USB, running 'live', so only 7GB, no SWAP.

The system I tested this on has 4GiB of RAM. And I have used `VLC -I ncurses` from live systems-on-a-stick as well. It runs before my finger leaves the ENTER key, same as mpv.

> Maybe you compiled yours with all the bells&whistles which aren't needed for sufficient local playback?

I'd be happy to repeat the measurement with different settings.

rowanG077
I disagree. Honestly, since UIs became the primary domain of designers and "UX" experts they have regressed massively. 15 years ago you opened an application and the feel was consistent. I had a bar at the top which showed me the options in a straightforward manner and allowed me to quickly explore and click through to relatively specialized things. Now every app has its own UI. And even worse, everything is as hidden as possible in the name of being "clean".
usrbinbash
>They're good at making apps fast and small, but not at making them usable.

I don't know which apps you are talking about, but I use apps built and designed by developers every day, and they all work great.

My problems start when apps are NOT designed by developers, but rather by people who have seen 100 videos about color theory and know all the latest fonts their social media du jour is excited about, but very little about hardware, programming, and the difference between localhost and accessing a server over cheap WiFi from somewhere else on the planet.

Because these are the "apps" which do something ridiculously simple, but somehow manage to eat up 2-3GiB of RAM and send the laptop's fans spinning out of control.

>They're not small or fast -- though Moore's Law has made them usable anyway.

Moore's Law is over, however, and there is no justification for an app that, say, plays locally stored mp3s to require 2GiB of RAM and 10% CPU. If an application thinks this is justified, it will get to know my good friend `rm -rf`, because I have vlc running in ncurses mode right now, playing my entire playlist, and it's eating less memory than the terminal emulator it's running in ;-)

The answer to bad software and overloaded/overused/oversold frameworks is not "build more powerful computers" but "make better software".

>There never was a golden age when developers made good, small, fast apps.

Good != Beautifully designed.

Good means small, fast, portable, reliable, easy to install, easy to learn, easy to remove, does its job.

There's also the pessimistic view that we will actually lose progress and regress over time. Jonathan Blow has an interesting talk on the subject: https://www.youtube.com/watch?v=pW-SOdj4Kkk
effingwewt
In the micro sense we see this all day every day. Restaurants start good but aim for max profit and become garbage. Apps/websites start great then keep adding cruft while removing features. Capitalism. Housing. All we do is advance to regress and repeat ourselves over and over.
kwere
It's incentives and checks and balances, nothing more.
While not incompatible with ECS, the DOM and this renderer go all-in on the javascript event-loop. You would have to write your own run loop, which executes the systems on every frame (ideally creating a DAG and executing in parallel while possible), and leave the event loop behind, with all the niceties like `onClick`, to go full ECS. Otherwise you'll create some Frankenstein monster of part ECS, part event-loop, part declarative React.
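
For what it's worth, here's a rough sketch of what "writing your own run loop" could look like in the browser: ECS-style systems driven every frame from requestAnimationFrame instead of work being triggered by DOM events. All names here are illustrative, not any particular library's API, and the parallel/DAG scheduling mentioned above is left out:

    // Illustrative ECS-ish run loop: systems are plain functions run once per frame.
    type Entity = number;
    type World = {
        positions: Map<Entity, { x: number; y: number }>;
        velocities: Map<Entity, { dx: number; dy: number }>;
    };
    type System = (world: World, dt: number) => void;

    const movementSystem: System = (world, dt) => {
        for (const [entity, vel] of world.velocities) {
            const pos = world.positions.get(entity);
            if (pos) {
                pos.x += vel.dx * dt;
                pos.y += vel.dy * dt;
            }
        }
    };

    function runLoop(world: World, systems: System[]) {
        let last = performance.now();
        const frame = (now: number) => {
            const dt = (now - last) / 1000; // seconds since the previous frame
            last = now;
            for (const system of systems) system(world, dt); // sequential; a real engine might schedule a DAG
            requestAnimationFrame(frame);
        };
        requestAnimationFrame(frame);
    }

    const world: World = { positions: new Map(), velocities: new Map() };
    world.positions.set(1, { x: 0, y: 0 });
    world.velocities.set(1, { dx: 1, dy: 0 });
    runLoop(world, [movementSystem]);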

Additionally, you can throw OOP into that mix as well, because Three.js has its own whole OOP-style framework that you're strapping declarative React on top of with this renderer. Reminds me of Jonathan Blow's talk on the end of civilization via endless layers of abstraction[1].

I really think, when it's ready, a Bevy[2]-style system either native or compiled to WASM with WebGPU will be ideal.

And while I'm airing opinions (forgive me), I think writing shaders now is like SQL 30 years ago. Developers left optimizing difficult--according to them--SQL to database administrators by abstracting it away into ORMs. If history is any indicator, I think we'll be having the same arguments on Hacker News 30 years from now about 3D frameworks vs writing shaders directly as we're having now about ORMs vs writing SQL directly.

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

[2] https://bevyengine.org/

Jonathan Blow - Preventing the Collapse of Civilization https://www.youtube.com/watch?v=pW-SOdj4Kkk
indeed, history repeating. There's a nice talk by Jonathan Blow how technology declines and has to be re-invented over and over again: https://youtu.be/pW-SOdj4Kkk
Whenever I read about LSP I'm reminded of Jonathan Blow's talk Preventing the Collapse of Civilization where he talks about LSP and how it turns your single app into a fragile distributed system[1].

The sheer complexity of going from vi, to vim (and a plethora of plugins), to vim + LSP is simply insane to me. It does feel like we've all lost our minds.

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk&t=2546s

kzrdude
He's definitely got a point. I had a memory problem with a business calculation I was running; I needed all the memory I could get.

So I looked at the various memory hogs on the Linux installation and could see that every Vim invocation had a node process hanging off of it as a child; that's the Coc code-completion/language-server client running. (Action => only trigger coc for certain filetypes.)

And I had vscodium running too, and it had its own long lived subprocesses hanging off of it, for language support, etc.

A lot of apps are like this - humongous clusters of processes (firefox, teams, vscode, and also Vim with the right plugins...)

washtubs
I didn't watch the whole talk, just the part you linked. I'm sure he makes some good points. But yeah, this guy does not understand why we have LSPs at all.

A language server is a program that parses and internalizes a project written in a particular language and serves information, diagnostics, and edits to a generic editor. The protocol is broad enough to cover all the "smart" language-aware functions provided by full-blown IDEs.

This allows one LSP to be used in ... many different editors. The advantage to the users is obvious: if go has one standard LSP, people who use neovim, vscode, vim, emacs etc all have an interest in maintaining that one LSP and will contribute to it in various ways.

Let me give you a few reasons why not only is it fine that it's a separate process, but you want it to be in a separate process.

1. LSPs will be better written if they themselves are written in the runtime and language that they serve.

2. LSPs can potentially hold a lot of memory. Sometimes you need to manage them, and potentially even cut them off, for example if you have a few very large Java projects that you're switching between. Generally, if they are separate processes, you can just kill them without affecting the editor. This also means the editor itself doesn't risk a memory leak caused by a rogue LSP server.

3. Subprocess management is not that hard. The editors can do it; Neovim does it pretty well in my experience. The presenter acts as if the server is some totally separate thing that you have to manage yourself. In reality the language server process is launched, managed, and owned by the editor, and often just communicates over stdin and stdout (a rough sketch below), not that there's anything wrong with ports.

Using multiple processes to distribute work among programs that do one thing well has always been the UNIX way.
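
For what it's worth, the editor side of that ownership is small. Here's a rough TypeScript-for-Node sketch, not taken from any real editor; "gopls" is just an example server binary, and a real client would buffer and parse the framed responses instead of logging raw chunks:

    import { spawn } from "node:child_process";

    // Spawn the language server as a child process owned by the "editor".
    const server = spawn("gopls", ["serve"]);

    // LSP messages are JSON-RPC, framed with a Content-Length header,
    // a blank line, and then the JSON body, sent over stdin/stdout.
    function send(msg: object): void {
      const body = JSON.stringify(msg);
      server.stdin.write(`Content-Length: ${Buffer.byteLength(body, "utf8")}\r\n\r\n${body}`);
    }

    // The first request any client sends is "initialize".
    send({
      jsonrpc: "2.0",
      id: 1,
      method: "initialize",
      params: { processId: process.pid, rootUri: null, capabilities: {} },
    });

    // Just log whatever comes back; a real client parses and dispatches it.
    server.stdout.on("data", (chunk: Buffer) => console.log(chunk.toString()));

    // Because it's a child process, a misbehaving or memory-hungry server
    // can be cut off without taking the editor down: server.kill();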

>Nostalgia for the simplicity of the past ends up having ugly cultural implications. It's easy to say "let's go back to the time when things were simple"; it's a lot harder to say "folks in (e.g.) Israel shouldn't be able to type in their native language".

This is a ridiculous mischaracterization of what Jonathan Blow was talking about.

Here is the full presentation. Very much worth watching in its entirety and thinking about:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

peoplefromibiza
upvote and a giant thanks!

this video should be mandatory in every school on the Planet.

slmjkdbtl
> This is a ridiculous mischaracterization of what Jonathan Blow was talking about.

I watched the full presentation and agree with almost everything he said, but I can't find a part matching the conflict between "Simple text editor Vs. Complicated character encoding scheme / rendering / formatting". How would JB make a text editor that works with any kind of character?

alistairw
I agree with the sentiment you're getting at, that nostalgia for the past sometimes ignores the homogeneity of the people using those systems and how that reduced complexity. I do think there's an argument there that needs to be explored further, to identify what is actually simple and what is hard/complex in computing.

However, I think this point about how JB specifically would make an editor that supports any type of character is not the best direction for exploring that. He's made games and his own presentation software (used in the presentation linked above) which is able to do that. I doubt he would classify that as one of the actually hard aspects of computing.

slmjkdbtl
Yeah I agree making an editor that supports any type of character is not the best way to explore this subject, but I doubt the result of this exploration can solve that editor problem.

I follow a similar mindset to JB and have been making and using my own tools, even my own character encoding (Chinese is my first language and I want to use Chinese in my works, but Unicode is too messy and it's impossible to make a font that supports thousands of characters, so I've been developing a subset of Chinese that fits in a byte), plus my own game engine / renderers / programming language (similar to JB's stack). My experience is that this software is too personal and not applicable to the general public. JB also makes personal software and doesn't worry about generality. Personal software is inherently better, because most of the bloat and badness of the current state comes from derailed generalization efforts. If we take the beautiful DOOM editor and make it general, it'll be really hard not to make it another Unity, and that's the real under-explored problem.

asiachick
I don't agree with almost anything he said. He doesn't know what he's talking about. He comes from games. I too came from games. I had all the same opinions as him. Then I worked on a browser on a team with hundreds of programmers (browsers are huge). The article is a perfect example. He hasn't solved the actual problems being solved, so he has no clue why the solutions used are the ones that were chosen.

Could some parts be re-written to be more efficient? Maybe. Could they be made efficient and still be easy for any of those hundreds of programmers to follow and modify? Probably not, even if all of them were of his caliber.

Games just don't have to deal with the same issues. Take Unicode fonts: just the emoji font is 221 MB! I'm pretty sure if you ask JBlo about it he'll give some flippant, Asian-bashing answer like "emoji shouldn't exist in the first place". He won't actually "solve the problem that's actually being solved"; he'll solve some simpler problem in a simpler way in his head and then declare that's all you need.

He's made all kinds of ridiculous and conflicting claims. Example, he believes there should be only one graphics API. To put it another way, he believes in design by committee since that's the only way there would ever be one API. Yet in other areas he rejects design by committee (game engines would be one).

Another issue is security. AFAICT he's never given it a second thought. As one example, regarding his Jai language he pointed out that he never runs into memory ownership issues, so he doesn't want a language that helps with that. Memory ownership issues are one of the top sources of security bugs. Again, this shows he doesn't know what problems are actually being solved and is thinking only from his own limited experience.

cztomsik
agreed, I respect what he did but some claims are ridiculous and his "clan" of followers only makes it worse.

BTW: would you be willing to chat from time to time? I'm doing something similar to a browser (a hobby, mostly a one-man show so far) and I could really use some help. I don't need any programming, but pointing me in the right direction would be awesome.

May 06, 2021 · d_burfoot on Crazy New Ideas
I follow Blow's work and opinions quite closely. I am quite confident he was NOT criticizing Mighty specifically. Instead he was deploring the state of software engineering in general and web programming in particular. He is saying something like "I can't believe web engineering sucks so bad that a tool like Mighty actually makes sense". See his talk about preventing the end of civilization (!!):

https://www.youtube.com/watch?v=pW-SOdj4Kkk

lapnitnelav
Fascinating video as always from Jonathan.

Definitely highlights something I've always felt, which is that we're really making life complicated with all those tools and processes that are less than ideal.

I get the feeling business and the need to "ship" stuff is the thing really bringing technology down, even though it looks like it's moving it forward.

May 03, 2021 · mrspeaker on How Tech Loses Out
Interesting talk by Jonathan Blow based on a similar (though software-focused) premise: "Preventing the Collapse of Civilization" https://www.youtube.com/watch?v=pW-SOdj4Kkk
Jonathan Blow has an excellent talk on this topic in relation to software.

https://youtu.be/pW-SOdj4Kkk

This talk deals with the issue you are describing as well as the underlying reason for it: "Preventing the Collapse of Civilization" by Jonathan Blow https://youtu.be/pW-SOdj4Kkk

It's very informative and engaging. Highly recommended.

Thank you for your comment and input from this direction, super interesting!

Reminded me a bit of Jon Blow's excellent talk here: https://www.youtube.com/watch?v=pW-SOdj4Kkk

Do you have any resources you can recommend on this topic?

Interesting. It's probably better for humanity as a whole: having a backup just in case one is f*cked up. Highly recommend the talk by Jonathan Blow: https://www.youtube.com/watch?v=pW-SOdj4Kkk
Jan 15, 2021 · rozab on Lycurgus Cup
This is from the Jonathan Blow talk currently on the front page:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

Mageek
The cup's context in the talk is that J Blow is discussing how technology is not guaranteed to increase over time, but that we can regress as well. The Lycurgus Cup was created in Roman times and showed extremely high levels of technology and skill, to a degree that could only have been achieved by repeated refinement. This technology was then lost.
Jan 15, 2021 · 139 points, 94 comments · submitted by keyle
arexxbifs
He's absolutely got some good points, no doubt about that. And yet, something irks me...

* An industry that can't produce robust software has produced several server operating systems capable of uptimes that far exceed the expected life time of the hardware they run on. [0] Writing this, I checked in on a random server I've got access to. The uptime is 350 days.

* I reboot my personal Linux laptop only when I myself want to install the latest kernel update. The current uptime is 46 days and I use that machine for programming, watching movies, surfing and whatever else I feel like doing.

* I'm willing to bet a few bucks that there are more C and assembly language programmers around now than there has been at any previous point in time.

* I'm pretty sure Blow couldn't have made his presentation slides, recorded his talk, uploaded it to a web site and then watched it in a web browser using Ken Thompson's three week Unix, even if it was running on modern hardware.

* It's true that the OS in both abstract and concrete ways removes capabilities from the CPU, but it also adds a lot of capabilities such as multitasking.

* Pretty much all of the things listed on his "You Can't Just..." list are perfectly doable in e.g. Linux. It's not the OS that removes this capability, it's the work of companies trying to protect their revenue (through DRM, IP laws, patent enforcement, etc.). I can (and still do) write software in a simple editor without a language server in sight, etc.

Meanwhile, my old Amigas - which I love to bits - crash fatally all the time. Just a few weeks ago, one of them pulled a hard drive partition along with the crash and I actually had to reformat it because re-validation failed (luckily I had nothing of importance on it).

[0] https://arstechnica.com/information-technology/2013/03/epic-...

cblconfederate
> An industry that can't produce robust software

Was it linux? Not exactly a product of industry.

ziaddotcom
In the example of your Amiga, your hard drive failed, but it probably would still boot right into any game from any floppy that didn't require Workbench to have been booted from floppy or HD. The game itself could very well have been written on Workbench, with an assembler, booted from floppy.

The same game, replicated in Unity, requires a several gigabyte download just for the developer to light up a white pixel on a black background in Unity.

Is the juice of Unity worth the squeeze? It depends on the game I think. Using Unity to get a Super Meat Boy clone on a Nintendo switch starts to ride the line of absurdity (i.e. using a massively complex and capable game engine to make a knock off of a beefed up version of a flash game that was paying homage to games written in 6502 assembly and booted instantly and never crashed).

meheleventyone
That's not really a like for like comparison though. You could for example light up a white pixel on a black background in the browser, which almost everyone has installed by default, or using the basic OS APIs or by downloading PICO-8 which is pretty small IIRC.

There are tradeoffs being made in all three choices and each will suit a different context.

ziaddotcom
Does anyone seriously write games for the browser anymore? Can Jonathan target the browser to make Braid, and seriously expect any commercial success?

Maybe you'll think "What about Celeste?" at which point you'll have only proven both of our points equally at best.

meheleventyone
Yes they do. There are also plenty of games written on a web stack and shipped as executables.
ziaddotcom
And how is that meaningfully different or better than using unity to ship an executable?

I thought your whole point was that everyone already has a stable browser, and that shipping games in some bare-metal or half bare-metal form is really the developer's imposition if the game could just as well have run in a browser.

meheleventyone
No, my whole point was that your comparison was silly because there are lots of ways to accomplish what you said without downloading gigabytes or using Unity. I listed two other ways that weren't the browser and said that there were contextual trade-offs in what you'd choose to use.

You've decided to take one of those options and attribute a bunch of arguments to me I've never ever uttered.

ziaddotcom
Fair enough. I think you've called me out reasonably there.

I still don't think you've addressed my core point, and it's quite probable I wasn't clear enough to make that possible.

My white pixel example was referencing the talk, where I interpreted Jonathan as saying, almost verbatim, that there are barriers to just lighting up a single pixel on a modern computer. I made the assumption that everything he opines on publicly is in the context of him making games.

Is this whole Stack Overflow thread just a bunch of masochistic programmers, or is it needlessly complex to achieve consensus on how to plot a pixel in a browser canvas? I'll take responsibility for begging the question in jest. https://stackoverflow.com/questions/4899799/whats-the-best-w...

If one wants to start the game they are going to make in the same engine they are going to finish it in, then for most people Unity is going to be worth the gigabyte download. If you just want to teach kids HTML/CSS, a server stack in Python, and how to change the color of one pixel in the browser as lesson one, day one, on a Raspberry Pi, fantastic. If one wants to write a Mario clone and ship it in Electron, bravo.

meheleventyone
Plotting a single pixel is amusingly difficult in general, particularly in the age of 3D acceleration. It wasn't necessarily a picnic in the past either, as there often wasn't space for a fully addressable frame buffer.

I actually did this whilst playing about with Zig a while ago. I made an offscreen buffer, rendered pixels to a frame buffer array in WASM and then copied them into the offscreen buffer. Not going to win speed awards but it worked, was fast enough and was super simple. Also it meant I could scale the canvas to scale my image. This was for a very small screen representing windows in a building. My proof of concept was approximately 30 lines of JS and the same again in Zig. Most of the latter is type definitions.
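
For anyone curious, the JS side of that kind of setup is roughly the following. This is a minimal sketch, not my exact code; the element id, the dimensions, and the plain array standing in for the WASM-backed buffer are all arbitrary:

    const canvas = document.querySelector<HTMLCanvasElement>("#screen")!;
    const ctx = canvas.getContext("2d")!;

    const width = 64;
    const height = 32;
    // 4 bytes per pixel (RGBA); a buffer exported from WASM memory would stand in here.
    const pixels = new Uint8ClampedArray(width * height * 4);

    // Start with an opaque black background.
    for (let i = 3; i < pixels.length; i += 4) pixels[i] = 255;

    function putPixel(x: number, y: number, r: number, g: number, b: number): void {
      const i = (y * width + x) * 4;
      pixels[i] = r;
      pixels[i + 1] = g;
      pixels[i + 2] = b;
      pixels[i + 3] = 255;
    }

    putPixel(10, 10, 255, 255, 255); // one white pixel on black
    ctx.putImageData(new ImageData(pixels, width, height), 0, 0);
    // Scaling is then just CSS on the canvas (e.g. image-rendering: pixelated).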

Although given that Canvas is supposed to be a friendly abstraction it definitely feels like a weird oversight. Personally for pixel sized things I love PICO-8.

pharke
There are still people developing efficient and capable game engines. Godot[0] is a good example of one that has achieved widespread success. It's a few dozen megabytes in size, does 2d and 3d, supports VR, and can target Windows, MacOS, Linux, Android, iOS and HTML5/WebAssembly. It's open source and free as in beer too.

[0] https://godotengine.org/

ziaddotcom
I think Godot has only recently become mature enough to be remotely comparable with unity etc.

From the Godot download page:

"Godot is currently not code-signed for macOS. See the last section of this page for instructions on allowing Godot to run anyway. Alternatively, you can install Godot from Steam to work around this." So what does that leave, Windows and Linux on x86-64?

Aren't these the kind of unjustified hoops that Jonathan is criticizing, and that people criticize Apple for, making it needlessly hard for a clearly legitimate OSS project? It definitely isn't something that would make me confident recommending it for a middle school game programming class that needs it to keep working from one semester to the next.

arexxbifs
> In the example of your Amiga, your hard drive failed

The crucial part here is that software caused the hard drive to fail. That was my point - I think it's easy to forget how fragile most home computer OSes were 25-30 years ago compared to today.

ziaddotcom
Bad software and bad UI corrupts usb drives all day long in 2021.
arexxbifs
Yes, bad things can still happen. What I'm trying to say is that bad things used to happen, too, quite frequently and on very simple systems, and that his points about gradually diminishing robustness and productivity are easy to refute. I'm of the opinion that computers today - despite layers of abstraction and growing complexity - are more robust and let us be more productive than ever.

Abstraction fosters ignorance and complexity is fragile, I agree with him on those points. But I also think he makes very sweeping, erroneous statements about software and development.

segfaultbuserr
> In the example of your Amiga, your hard drive failed, but it probably would still boot right into any game from any floppy [...] The same game, replicated in Unity, requires a several gigabyte

You have a good point to make, but the argument you've employed is bad. When you said,

> In the days of the Amiga or the Archimedes, it was quite possible to boot entire app or os/app combo and just leave the computer on indefinitely, without it crashing itself even if untouched.

You've mixed up reliability with understandability & maintainability. In an uptime competition, a retrocomputer will lose to many modern computers; that's almost guaranteed. But that's not the point. The point is what happens after it goes down. With a retrocomputer, it's possible for a single person to understand every aspect of the computer and to perform troubleshooting down to the component level. It's also easy for a single hobbyist to build one's own computer using the 6502 chip. But that's usually impractical or impossible with modern hardware. My personal understanding of the thesis of the talk is that modern computers have much less understandability & maintainability than retrocomputers, and this, in the long term, makes the civilization as a whole less reliable. It's not a question of how durable we can make a single computer.

You can make a better argument if you stop talking about how reliable retrocomputers are and focus on understandability and complexity instead.

ziaddotcom
I don't make the argument that they were more reliable than today's PCs; they almost certainly weren't and aren't. The point is that we got to today's more reliable PCs in a context where people had access to both. Some future generation won't have access to living human beings who were around when the first electronic computers came into existence, the way we do now.

Notice I said "even if left untouched." Perhaps that was too charitable, as these retro PCs, without memory protection, would often crash if metaphorically "touched." It would happen often with productivity apps, much less so with games. As time has gone on with PCs, this seems to have flipped, a fair trade for the time being.

segfaultbuserr
Hmm, so your argument was not about the reliability of retrocomputers in particular, but more generally about "how the hardware worked" (and some just happened to be reliable enough for many people), and how all of that shaped the understanding of the next generation of innovators who invented our PCs. Thus it's important to preserve the hardware as a key to understanding how we got from there to here, because there will be no living witnesses for future generations - and this is your actual point.

I think I understand your argument now.

ziaddotcom
Yes you grokked my point quite well. I wrote a long rant about how I've never had access to Amiga hardware, but I think I owe your pithiness some pithiness in return.
emsy
When asked about his opinion of the Apple M1 (which is heavily discussed recently on HN), Blow said it doesn’t matter in the long term, since software will eventually eat up the performance improvements. It’s easy to get defensive about this argument since us programmers are the ones causing the problem. But as a user, I don’t want to need a 3 GHz machine to run a text editor somewhat responsively.
betareduce
Fabien Sanglard made a similar point [0]

[0] https://fabiensanglard.net/silicone/index.html
fnord123
Hardware performance is a victim of Jevons paradox. The M1 will just result in more profligate use of cycles. Only battery life seems to have a chance of pushing performance the other way.
alpaca128
> I don’t want to need a 3 GHz machine to run a text editor

This annoys me as well. Visual Studio Code is about changing individual bytes in tiny plain text files. I don't think it has a single feature that didn't exist before. Yet its startup time is somehow closer to that of a full-fledged IDE, almost as long as the whole operating system needs to boot. Same with websites; a freshly configured email client can download a thousand messages from the server in a shorter period than it takes to load most news articles in the browser.

And the most baffling thing to me is that such things are so widely accepted. I've seen many users shrugging it off with a "that's just how computers are" mindset; many things are implemented so badly people don't even expect them to work properly.

This may sound a lot like "old man yells at cloud" material, but in the end it can't be denied performance is often at least an order of magnitude away from where it could be - my current laptop has 32 times as much RAM as my first PC, and it still swaps just as frequently.

UncleMeat
> Visual Studio Code is about changing individual bytes in tiny plain text files.

Only if you are being amazingly reductive. VSCode is about changing desirable sequences of bytes into more desirable sequences of bytes, where the definition of "desirable" is stupendously complex and depends on state that exists on many other computers around the world. If you just want to change bytes you can open notepad or nano and watch as they boot instantly. VSCode is much closer to Eclipse than nano in its capabilities.

> And the most baffling thing to me is that such things are so widely accepted.

To me I think about things like screen refresh rate or keyboard latency. Some people demand 120hz or incredibly low keyboard latency. But I'd wager that the percentage of people who care about such things are a rounding error. Even among hardcore gaming enthusiasts, optimal fps has only doubled over the past three decades. So there is way more real time to do stuff, and developers fill that time! They might fill it with features or they might fill it with lower development costs.

emsy
It’s true that VSCode is more than a simple byte changing tool. But the responsiveness is still unacceptable. A game reacts to my input virtually immediately while doing much more. Also, I have yet to see a proof of the increase in development efficiency. Personally, I think we have lowered the bar of entry more than we have increased developer efficiency, it’s follows that we see a decline in software quality.
alpaca128
> VSCode is much closer to Eclipse than nano in its capabilities.

What exactly can it do that Vim or Emacs can't? Vim starts up literally 100 times faster on my machine, with 17 active plugins. It also runs everything in a single thread and is still more responsive than VSC in almost all scenarios.

What are those stupendously complex features that somehow make a slow start unavoidable? Or to go with your comparison, why do you think Eclipse has a good reason to start so slowly, on a modern 8+ thread CPU and SSD?

fuyu
I very much prefer using Vim, but the debugging experience is much better in VSCode. The language server experience in VSCode is also better (though vim is getting there!), and there are many other small trade-offs that one may prefer.

Emacs I haven't tried nearly as much, but it has a poor Windows OS experience, and is also historically a "bloated" software ("Eight Megabytes and Constantly Swapping").

Of course there is no reason Vim couldn't be modified to do everything VSCode does but better, but until that happens VSCode will still have a place in my toolbox.

AnIdiotOnTheNet
> And the most baffling thing to me is that such things are so widely accepted. I've seen many users shrugging it off with a "that's just how computers are" mindset; many things are implemented so badly people don't even expect them to work properly.

To me, this is the most damning evidence that our industry sucks at its job. We're so bad at making our products that users expect them to be slow and broken all the time.

triska
This is one of the most insightful talks I have ever seen on Youtube. I highly recommend it!

What makes it stand out particularly is the careful collection and articulation of concrete examples and historical references.

UncleMeat
> What makes it stand out particularly is the careful collection and articulation of concrete examples and historical references.

Unfortunately, it is a really shallow analysis of this topic. He'd do well to speak to experts in history of technology/science when drawing such huge conclusions about humanity. This is just enough history to get people into trouble.

ziaddotcom
I agree, and I think this talk and HN post make a good complement to the Retrocomputing article and discussion on HN a few days ago.

https://news.ycombinator.com/item?id=25714719

Jonathan does a better job than I did explaining the premise that retrocomputing isn't just a means to nostalgia, but a means of verifying that we've improved personal computer hardware/software since the initial boom. Emulators don't give us insight into the whole stack or an understanding of the true reliability of these old systems.

As Jonathan says, if our laptops today aren't capable of 99.999 percent uptime, then none of the software running on them can be either.

In the days of the Amiga or the Archimedes, it was quite possible to boot an entire app or OS/app combo and just leave the computer on indefinitely, without it crashing, even if untouched.

rimher
He makes some good points in the talk, but omg what a troll he is. Also on Twitter he's been a jerk to so many people, I really don't understand. If he's looking for what's wrong with software companies, he's part of the problem with his attitude honestly
CodeGlitch
He says it like it is - and has a critical mind which I suppose makes him come across as confrontational. A troll on the other hand goes out of their way to insult & shock, which I don't believe he does.
dm319
Just to expand on your definition of a troll, as someone who grew up in the 90s and met a lot of them on usenet. The common theme I found with them was being able to carefully phrase a comment or question to maximally elicit a response.

They often weren't consistent in their arguments over time - instead crafting their argument to most elicit an emotional response in the reader so that they were compelled to reply.

jb1991
I’ve noticed this as well. He is a very arrogant fellow and has no problem insulting people who do not deserve it.
AnIdiotOnTheNet
I've watched a lot of Jon's streams and stuff and I think "troll" is a mischaracterization. Trolls get a kick from getting people riled up, which definitely isn't the case with Jon.

He is often quite blunt though. He doesn't usually go out of his way to say things nicely and, being somewhat in the same boat, I think this is mostly because he's just sick of seeing the same bullshit over and over. It is frustrating.

ivanbakel
Previous discussion: https://news.ycombinator.com/item?id=19945452
mikewarot
At 22:00-ish, he claims software is riding on fast hardware, let's see if he's right.

Here is a challenge... can you join the 1 MIPS club?

Can YOU do useful work with 1,000,000 Instructions per second? A VAX 11/780 was enough compute power for a small academic department in the 1980s, and would have dozens of Terminals attached, with people doing real work. It was almost exactly 1 MIPS. Can you live within that constraint?

Let's make it easy, and assume I/O is as fast as modern hardware, and terminal/keyboard management isn't part of your 1 MIPS. (But SHOULD be counted, somehow)

Could you constrain yourself to fit in that box, yet still get work done?

Pick your favorite CPU, give yourself as much memory as you think is necessary, and choose how much disk space you think is enough. Then try to work within that box.

How would a web server fare in that environment?

meheleventyone
Didn't someone run a web server on a ZX81?
aYsY4dDQ2NrcNzA
Well, kinda.

https://hackaday.com/2016/11/23/zx81-connects-to-the-network...

arexxbifs
The thing is, our expectations of software have changed drastically since then [0].

With that said, I frequently do perfectly usable stuff on a 0.6 MIPS Amiga, in high level languages no less. I've even written a web server in ARexx, which is interpreted. Granted, it won't serve a whole lot of simultaneous users on an Amiga 500, but it works perfectly fine and even runs basic CGI stuff.

[0] https://datagubbe.se/small_efficient/

nukst
What a great coincidence, I watched it yesterday because of that Plan9 post.

I couldn't recommend it more.

dm319
He has a really good point about civilisation. The collapse of civilisation is the norm - and the Egyptians and Ancient Greece/Rome were highly advanced - it is arrogance to assume that this cannot happen to ours.

He talks about simplification as a solution. I buy that - but he doesn't talk about education.

dmortin
I had time only to skim the video, but isn't it a responsibility of companies to hire people to learn and work on low level stuff?

If they pay those people and pay them well then there will be people working on those areas. If they only pay for software plumbers then obviously they will get more plumbers.

crispyambulance
> If they only pay for software plumbers then obviously they will get more plumbers.

They pay for plumbers(+). Seriously. Even intensely creative and curious folks must pretend to be plumbers in their workplaces to appease project management derps. The stuff that makes one have job satisfaction often gets done without asking for permission and without allocated time/staff/budget.

(+) Nothing against plumbers. Many have more lucrative and intellectually stimulating jobs than software engineers.

grenoire
This talk also ties in very strongly, albeit much more seriously, with the movie Idiocracy. I know, that gets brought up quite often, but the prospect of us 'forgetting' how to build and maintain the foundations of modern artifacts seems not so far-fetched anymore.
pygy_
Crucial knowledge is gradually embedded in/embodied by industrial capital.

In parallel, we’ve been defunding schools and the healthcare system (in Europe, at least), because people just aren’t worth the expense.

Education professionals in their late 30s tell me that the attention span and abstract thinking abilities of children of all ages have drastically declined over the last 10 years. Screen time (including the parents’) is the most likely culprit.

We’re not scaremongering, this is really happening...

alexashka
I'd need to be convinced anyone ever really knew how to keep a technologically advanced society from an inevitable collapse.

People are by and large idiots who don't even bother to ask the question 'wait, how does any of this work?' It doesn't blow their mind that they don't know how anything they enjoy works. What blows their mind is whatever someone entertains or outrages them with and what entertains or outrages has been outsourced to some idiot algorithm that's optimized to do, uh, nobody knows, because we're a bunch of idiots who never asked about the consequences of letting idiot algorithms dictate culture, we just let companies do it.

If it's profitable, it must be good, because what's good for me is more money and what's good for me is all I care about. That's the mindset our species has accomplished with all of its 'education'.

Oh well, it's not like it was ever any better - people used to believe in a God or witches or Zeus - bunch of idiots then, bunch of idiots now :)

azeirah
It sounds similar in concept to how we have forgotten how to live in native environments. How many people here know how to survive in the tropical jungle without any modern technology for a month?

My guess would be less than 1/100000 of hn readers

emsy
It also happened several times in history, so it’s not without precedent.
b0rsuk
Also check out the book (not the movie) "Non-stop" by Brian Aldiss. The first 2/3 of the book is meh - uninteresting characters, motivations, a bit above average setting. Then it really takes off and you see the writer hasn't been wasting your time.

Also The Book of the New Sun by Gene Wolfe. It's much slower paced and more about the mood and immersion.

keyle
Yes and it reminds me of a recurring plot in sci-fi...

Visiting a civilization of extremely advanced beings who... 'maintain' a big machine that does everything for them; it solves safety, hunger etc. and let them lead a life mostly hedonistic.

Only very few know how to maintain the said machine... They themselves don't even understand how their ancestors built it; and if they were to disappear or the machine to be destroyed, the civilization falls as it has forgotten everything about keeping itself afloat.

In the modern world, most 'senior developers' are glorified plumbers.

It works, they don't know why.

So when it doesn't work, they don't know why.

We're becoming more maintainers and plumbers than creators at the cost of not understanding the sand under the castle.

2pEXgD0fZ5cF
One aspect I notice in tech and programming circles is that there is often a certain amount of people actively discouraging low level projects, even if they are created out of the desire to learn. I'm sure some will be able to relate to the "Why are you working on that? X already exists, just use X!" comments and sentiments that are easily encountered.
fogihujy
I'm such a plumber.

I occasionally work on maintaining an ancient server environment running some custom code that can't run on modern OS's because <reason>. Over the years, more and more duct tape has been added, and there's little to no documentation of how everything is set up. The only remaining staff is me, who simply worked for a few years alongside the people who took over from the people who originally built everything.

Nobody wants to pay to have someone really dig into the system, document everything, and bring it up to date, because there's a replacement system on the way (any year now). Instead, they call me every now and then to get everything rebooted.

I don't mind the extra business, but if something really bad happens then that client is toast and there's not a soul who can fix it.

Worst thing is I hear this kind of thing is frighteningly common.

yetihehe
It's also because our systems are more and more complicated. This is neatly pointed out in "Who knows how to make a computer mouse?" [0] TED Talk.

[0] https://www.youtube.com/watch?v=DTLizne1uNw

keyle
Agreed, and because a lot of the existing tech underneath has simply been agreed upon as "good enough".
segfaultbuserr
One of Ken Shirriff's comments on reverse engineering CPUs makes a good point about system complexity.

> For future reverse-engineering, I think the biggest problem will be imaging. I can study processors up to about 1980 with an inexpensive microscope. But chips with two layers (or more) of metal are much harder to examine, because the metal layers block each other. And once transistors get too small for optical microscopes (1990s), you need an electron microscope, which is a much bigger investment. (Although there are hobbyists who have one.)

> The other factor is understanding. A chip like the Z-80 is simple enough that one person can understand it completely. I doubt any single person at Intel understands everything about the Xeon, since it is so complex. And it's much harder to understand the chip from reverse engineering.

segfaultbuserr
Read this true story, "Institutional Memory and Reverse Smuggling" [0] - once upon a time, a petrochemical factory was built and later essentially forgotten, becoming nothing but a money-making asset on the company's balance sheet. Decades later, the plant is still operating and being maintained. One day, the company suddenly remembered it and wanted to update it to make more money. But at this point, nobody knows how the whole factory worked, why it was built that way, or how it was constructed...

> Institutional memory grows hazy at this point. The alien machinery hums along, producing polymers. The company knows how to service it, but isn't quite sure what arcane magic was employed in its construction. In fact nobody is even sure how to start investigating.

> It falls to some of the then-younger engineers, now the senior cohort, to dig up documentation. This is less like institutional memory and more like institutional archaeology. Nobody has any idea what documentation exists on this plant, if any, and if it exists, where it is, or what form it might take. It was designed by a group that no longer exists, in a company that has since merged, in an office that has been closed, using non-digital methods that are no longer employed.

[0] https://lemming-articlestash.blogspot.com/2011/12/institutio...

erikbye
It is false that we as an industry cannot make robust software. We have made lots of software I would classify robust. E.g., in aerospace (yes, there are always exceptions), defense, and automotive.

My washer's, dryer's, and oven's software has proven quite reliable, too.

Jellyspice
I don't understand the last part of the talk: how is simplification going to save our knowledge? Do you really need to simplify things? What about a training-wheel environment where everyone builds their system from the ground up?
mikewarot
There are too many layers of abstraction. Each layer has real costs, even if you can't quantify them right now. The whole idea of signing code, for example... is just a response to the weakness of operating systems, not an actual need to run any given program.
fallingfrog
This argument, although I couldn’t articulate it as well, is why I think we should keep building particle accelerators even though the next one might not find anything new.
lcall
Going out on a limb here I know, but I think among the very most important things we can do are to be good to our families, and continuing to practice at honesty and treating others the way they would want to be treated. Those things seem essential for maintaining anything, long-term. (cf. the "Anna Karenina principle" ( https://en.wikipedia.org/wiki/Anna_karenina_principle ), for families or cultures that last across many generations.)

(Edit: to clarify that: Tolstoy said something like "All happy families are alike, and all miserable families are miserable in their own way". I think there is much to that: multigenerational unselfish service to others, in a widening circle, brings greater sustainability and peace. Seeing what families' and/or cultures' traits allow them to persist over time is interesting.)

From the political side of this (also relevant), things might seem distressing now, but it can be OK. Some more thoughts:

1) Honesty, the US Constitution, and the rule of law seem much more important than other policies, even ones we care about deeply (per some of my Church's scriptures, such as D&C98). Let's pray for our country/ies!

2) Jesus Christ said "love your enemies", & more ( https://churchofjesuschrist.org/study/scriptures/nt/matt/5.3... [churchofjesuschrist.org] ). (And: "By their fruits ye shall know them": Matt 7:11-21; and see D&C52: https://churchofjesuschrist.org/study/scriptures/dc-testamen... [churchofjesuschrist.org] .) This means treating others with respect, for starters, and the way you would want (or they would want) to be treated.

3) Please, along with seeking wisdom & kindness, let's try to get info and news from reliable, trustworthy sources. (Many; refs avail.)

4) Trust is earned by (trustworthy, good, uplifting) behavior over time, not just promises & piles of words.

5) We all can learn, help & encourage each other, grow & be better if we keep trying.

I deeply believe, with some reasons, that the US Constitution and some other good things will continue to be in place in the long run, in spite of difficulties.

jessriedel
There's a lot that's good and not so good about this talk. One error: he attributes lots of new abilities mostly to increased hardware speed, but, depending on how you measure, we seem to have seen roughly equal gains in software speed for basic problems. I think this would have been more obvious/avoided if he had tried to be more quantitative with his claims.

https://www.overcomingbias.com/2013/06/why-does-hardware-gro...

rado
npm i prevent-collapse
indy
That will only work if you run it in a docker container managed by kubernetes. Luckily you'll be able to configure it through an Electron app. As I always say: "Who cares if it's simple as long as it's easy, besides hardware is powerful and we have lots of memory and disk space!"
jdjfjtkfkf
When I first saw this talk I thought he was exaggerating, but seeing the utter incompetence of the US pandemic response makes me think he is on to something.

And the political polarization in the country makes me wonder if US forgot how to do democracy.

jansan
Stopped after half of the talk, when he was talking about some Epic games. I thought this was broader, but his horizon seems to be the nerd and gamer world.
CodeGlitch
Well he does work in that industry so obviously he's going to use references from the subjects he knows most about.

Nothing wrong with gaming as long as you play in moderation :)

altschuler
There is only a little that's game-specific in this talk. The Epic store was one example (out of many) of software failing on a daily basis.
Zhyl
To state the thesis of the talk:

* Knowledge is ephemeral. Even recording knowledge doesn't imply or guarantee that it will be passed down the generations in its entirety. In fact, throughout history there are times when humankind 'forgot' how to do certain things.

* This applies to software. Software is getting so much more complex (as is hardware) that the knowledge of how to do the intricate or demanding things isn't held by many people, which makes it brittle. Adding layers of complexity on top of each other has created a tower of abstraction where people usually only know one part, which makes the entire stack brittle: if not enough people know every part of it and something breaks (maybe in a hundred years, but if that truly happens), there will be no way to fix it.

Jonathan Blow makes some recommendations about how to avert what he calls 'the collapse of civilisation'. To summarise:

* Simplify tools and processes. Reduce complexity, reduce dependencies.

* Get a better, more intuitive understanding of your technologies and tools at a lower level.

He doesn't explicitly propose solutions for this, but I would argue that his game 'The Witness' explores it at varying levels of detail. The gameplay and puzzles demonstrate breaking up knowledge into small composable chunks which can be taught intuitively and wordlessly. The additional materials more explicitly discuss knowledge, learning, understanding and (more generally) truth-seeking. If one wanted to avoid the collapse of civilisation, as outlined by the talk, one should play this game and contemplate its message deeply.

UncleMeat
> In fact, throughout history there are times when humankind 'forgot' how to do certain things.

I wish, just once, that technology people would actually consult historians of science when making these sorts of claims. There is an entire community of people who study this topic for their entire lives and write books about it.

Yet myths persist. So many of the examples of "civilization forgot" are actually just false or at the very least the conclusions drawn about society from these stories are highly misleading.

Zhyl
Would you be able to comment on the examples from the talk?
UncleMeat
Blow just says "then the roman empire fell and that knowledge was lost" when discussing the Lycurgus Cup. He just states this. No citation or investigation. This is the classic oversimplification of both technological record keeping and the nature of the collapse of rome. There is no clear evidence that the "fall of rome" (which means about a dozen different things that happened over 1000 years) is responsible for the mechanism for creating such glass being unrecorded.

He makes the same claim about alchemists fire. Again, ill supported.

There are a tremendous number of reasons why ancient technology might not have surviving records to today. Amateur historians love to just assume this is due to "civilizational collapse."

He also seemingly inverts things. Is civilizational collapse the cause of loss of technology? Or is it the opposite? Does technological de-sophistication cause civilizational collapse? He seems to argue both in his talk.

mechEpleb
Knowledge of how to make things is ephemeral; it is constantly lost and has to be rediscovered even in the modern world. People whose day job is playing meaningless language games don't have any appreciation of how much idiosyncrasy goes into doing even simple technical tasks.
Zhyl
I think that's a fair statement in isolation, and I do find that Blow's grandiose and loose use of language gets in the way of his message, but I believe the point stands. The knowledge was lost, and that is the part that is most relevant here. The factors that caused it, be it the 'collapse of an empire' or just one illiterate glassblower's creation not being documented before his death (or any of the tremendous number of other reasons), are less material than the fact that we have a chaotic system where we're trying to stop entropy.

To that end, I don't think he needed to consult a historian in order to make his point any more valid, but he could probably have done with a historian proofreader to tidy up any flowery language that might detract from the overall message.

rozab
Which of Blow's many examples do you take issue with?
UncleMeat
I'd hardly call three examples "many examples" (especially if they all originate from greece and rome).

mechEpleb
Historians are typically people with an utter lack of technical skills or any understanding of what actually goes into making anything work. A dry surface level analysis of literary accounts and archaeological finds doesn't mean much if you lack the mental context for it.
myWindoonn
I haven't watched the talk. During the Late Bronze Age Collapse, civilizations around the Mediterranean lost the ability to write! Before the Collapse, we see lots of Egyptian hieroglyphs and Sumerian cuneiform. Afterwards, we have a period of quiet for about a century, and then Phoenician emerges and starts splitting into all of the abjads and alphabets that we see today. This is how we know, for example, that many Iron Age writings are talking about Bronze Age myths; they're written in the wrong language for their time period.

We still don't know how to read Linear A; it is on the wrong side of the Collapse and we don't have a Rosetta Stone or other transitional artifact yet which includes it. It is not unreasonable that it could take us millennia to recall how to do things which we've forgotten.

ziaddotcom
These historians of science that technologists avoid consulting in your estimation, are they going to refute the dark ages?

https://en.wikipedia.org/wiki/Dark_Ages_(historiography)

"The term employs traditional light-versus-darkness imagery to contrast the era's "darkness" (lack of records) with earlier and later periods of "light" (abundance of records).[3]"

Categorizing the lack of records metaphorically as darkness, and an abundance of records as "Enlightenment," is pretty high regard for a word that in contemporary usage means the same thing an accountant or dentist keeps in a filing cabinet.

I'd say these "historians of science" you speak of would corroborate that the lost works of Euclid https://en.wikipedia.org/wiki/Euclid

and the near-permanent loss of Plato's works to the whole of Europe for hundreds if not over a thousand years were, uh, kind of a big deal.

https://en.wikipedia.org/wiki/Plato#Textual_sources_and_hist...

meheleventyone
Loss of records is not loss of skills or knowledge though. And the Dark Ages wiki you link talks about how the term fell out of use amongst historians for being inaccurate.

To quote:

"Science historian David C. Lindberg criticised the public use of 'dark ages' to describe the entire Middle Ages as "a time of ignorance, barbarism and superstition" for which "blame is most often laid at the feet of the Christian church, which is alleged to have placed religious authority over personal experience and rational activity".[52] Historian of science Edward Grant writes that "If revolutionary rational thoughts were expressed in the Age of Reason, they were made possible because of the long medieval tradition that established the use of reason as one of the most important of human activities".[53] Furthermore, Lindberg says that, contrary to common belief, "the late medieval scholar rarely experienced the coercive power of the church and would have regarded himself as free (particularly in the natural sciences) to follow reason and observation wherever they led".[54] Because of the collapse of the Western Roman Empire due to the Migration Period a lot of classical Greek texts were lost there, but part of these texts survived and they were studied widely in the Byzantine Empire and the Abbasid Caliphate. Around the eleventh and twelfth centuries in the High Middle Ages stronger monarchies emerged; borders were restored after the invasions of Vikings and Magyars; technological developments and agricultural innovations were made which increased the food supply and population. And the rejuvenation of science and scholarship in the West was due in large part to the new availability of Latin translations of Aristotle.[55]"

ETA: Also didn't realise you're the same person I replied to below!

ziaddotcom
No worries if you didn't catch that I was the same person.

Your catch proves that the issue is not remotely settled, and to my mind it indicates that more rigorous counsel with historians would only add depth to the debate rather than resolve it.

meheleventyone
I don't think UncleMeat's frustration is that these things are easily resolved but that people aren't getting even the basics right. Like the continuing lay beliefs about "the dark ages". And by doing so people are constructing false narratives of how knowledge has been lost from their own ignorance.
ziaddotcom
I think UncleMeat's wife probably knows a hell of a lot more than I do about history, and about contextualizing whatever horseshittery has been propagated about the notion of the dark ages. The problem is that rather than providing a link to a book or an article in refutation of Jonathan Blow or myself, UncleMeat has sort of just rattled off some history and said their wife is a history professor. That's hardly the scholarly rigor UncleMeat would have been satisfied with had Jonathan Blow framed his talk remotely the same way. The retort along the lines of "but this is just a lil ole internet forum" goes out the window when the call for rigor was made on said forum by said person.

I would suggest UncleMeat just advocate that people like Jonathan Blow take the history out entirely, and argue why this contemporary technology isn't good enough relative to our resources and skills.

meheleventyone
Personally I think UncleMeat's criticism is dead on and exactly why as you say Jon should take the historical argument out. Although you'd get a much less pithy title without collapsing civilizations.

Arguments about sourcing feel like face saving rather than legitimate criticism of the view brought forward by UncleMeat. After all, the original video is completely unsourced but apparently beyond reproach in that regard. Also, where we do have sourcing (for example the Dark Ages wiki), it only strengthens the point made by UncleMeat, as they were indeed correct about the views of historians.

More broadly if we want discussion to be well sourced we shouldn't apply that selectively and in particular selectively against the people who disagree with us.

ziaddotcom
Fair enough, but I enjoyed the talk as is. I tried to give sources in defense of UncleMeat but I can only overcome my own bias so much.

I'm not interested in getting suckered into believing some randall carlson gobbledygook, the type I might leap to assume UncleMeat would find quite problematic.

I just think UncleMeat initially came in and said something along the lines of "this is bad, don't do this, ask people who know better," and it has since been made clear they had the capacity to contribute more than what read to me as nothing more than an appeal to authority. They have clarified their position enough that I can no longer be left with that impression.

UncleMeat
> These historians of science that technologists avoid consulting in your estimation, are they going to refute the dark ages?

Largely yes. The term "the dark ages" was developed by Petrarch, a man who was so obsessed with the Romans that he had himself crowned with laurels to signify his inheritance of Rome. And this is also largely unrelated to history of science, since "the dark ages" (as expressed by both Petrarch and pre-20th century historians) does not refer to loss of scientific knowledge.

> Not having records being categorized metaphorically as darkness, and more records as "Enlightenment" is pretty high regard for a word used the same way in the contemporary context as something an accountant or dentist keeps in a filing cabinet.

But Petrarch was hundreds of years before the Enlightenment and would have hated it. He believed that the Romans were the peak of civilization ever. It was not possible to top them. The idea that modern humans could improve upon what they did would have been anathema to Petrarch and most Renaissance thinkers.

But most importantly, these historians of science would take issue with Blow's conclusions from specific examples. He lists a few "lost" technologies and then proposes methods to prevent technological loss that are entirely unrelated to the examples he lists. At most, all they demonstrate is that it is possible for some things to go unrecorded.

ziaddotcom
The Jstor article that wikipedia cites as evidence of Petrarch "being the first to develop the concept of 'the dark ages'"

https://www.jstor.org/stable/2707236?seq=1

The same Wikipedia article says that Petrarch is noted for initiating the Italian Renaissance by rediscovering the letters of Cicero, who happens to have died in 43 BC. He is then credited with founding Renaissance Humanism in that context.

The above claims seem somewhat plausible given the summary texts that mention Petrarch on the Library of Congress's page on Humanism:

https://www.loc.gov/exhibits/vatican/humanism.html

One recontextualization of the concept of the dark ages, of the type implied by UncleMeat (my interpretation of what was implied, not UncleMeat's), is given by this publication from the University of Michigan.

https://www.press.umich.edu/15299

This is all very interesting, but I'd like to point out that none of it seems to prevent a dilettante historian like myself from coming away with the impression that knowledge, scientific or otherwise, has been lost in meaningful ways throughout history, and has at times required some obsessed person to resuscitate it.

ziaddotcom
I think you've just committed the sort of armchair, off-the-top-of-the-dome dilettante history, without reference to a scholarly source, that you accused Jonathan of.

meheleventyone has demonstrated how easily even just referencing the article I myself posted could undermine my argument.

UncleMeat
> I think you've just committed the sort of armchair, off-the-top-of-the-dome dilettante history, without reference to a scholarly source, that you accused Jonathan of.

I'm not a historian. But my wife is. And she ranted for a while after the first ten minutes of this talk for precisely this reason. I could have provided more scholarly sources. It is an internet forum - I didn't think people would call me names if I didn't. That historians of science don't like the term "The Dark Ages" isn't exactly cutting edge scholarship.

ziaddotcom
I honestly don't see how you can't see that the text you provided in response to Jonathan's talk, which he obviously put a lot of time and research into (no matter how misguided and poorly executed in your or your wife's estimation), is massively hypocritical.
UncleMeat
> which he obviously put a lot of time and research in

I don't believe that he did put a lot of research into the historical background. It is possible I am wrong. The reason I believe this is because at least one historian I trust who works in an adjacent subfield went on a big rant about these being classic misconceptions.

I believe that Blow is very smart and quite knowledgeable about software. I also believe that a lot of engineers believe that they can make huge claims about the nature of history without consulting professionals. These errors are often harmless, but they are also often harmful, either causing us to draw false conclusions about the past or about today.

Like you say below, Blow would have been better off skipping the first 10m of his talk.

ziaddotcom
The use of the phrase "a lot" by me in this context isn't really helping. I should have at least said "for a powerpoint."

Unfortunately, as far as I can tell, we don't have a circuit of public intellectuals who happen to be historians giving well-thought-through talks to lay people. We have TED talks on a good day, and those are often filled with dumb things said by smart people who should know better.

You and I haven't even touched with a ten-foot pole the other side of the coin of what "Western History" implies. I won't attempt to in this thread either.

pharke
> (maybe in a hundred years, but if that truly happens, there will be no way to fix it)

He also makes the point that collapse often happens over the course of many generations: ~100 years for the Bronze Age collapse and ~300 years for the collapse of the Roman Empire. I think this is a key point because it is something we don't intuit easily. He then goes on to say that the loss of knowledge that results in such collapses is often the result of a lack of generational transfer of knowledge (hence the slow collapse). This is a well-known problem, the Bus Factor. If we want to prevent the collapse of our civilization we need to vastly increase that number on a societal level.

Dec 12, 2020 · 3 points, 0 comments · submitted by syl_sau
I buy the thesis as stated; there's a common attitude among researchers of, "why don't we have this already?" that seems counter-productive.

However, there's an adjacent thesis that's also worth stating: fundamentally improving programming is hard, but still worth attempting. There's just too much risk with the current status quo. We've built up our world atop a brittle, insecure, unstable tower of software, and it doesn't feel unreasonable to ask if it might lose capability over time[1].

The good news: you don't have to give it all up all at once and return to the stone age to try something new, as OP says. There's nothing stopping us from using the present as we like to build the future. The key, it seems to me, is to prevent the prospective future from inter-operating with the present.

You won't get lots of users this way, but I think jettisoning the accumulated compatibility baggage of 50+ years of mainstream software might free us up to try new things. Even if the world doesn't switch to it en masse, it seems useful to diversify our eggs into multiple baskets.

Here's what I work on: https://github.com/akkartik/mu. It's a computer built up from machine code, designed to be unportable and run on a single processor family. It uses C early on, but it tries to escape it as quickly as possible. Mu programs can bootstrap up from C, but they can also build entirely without C, generating identical binaries either way[2]. (They do still need Linux at the moment: just a kernel, nothing more, not even libc.)

I call this ethos "barbarian programming", inspired by Veblen's use of the term[3]. I rely on artifacts (my computer, my build tools, my browser, my community support system, the list is long) from the surrounding "settled" mainstream, but I try to not limit myself to its norms (compatibility, social discouragement of forking, etc.). I'm researching a new, better, more anti-fragile way to collaborate with others.

Here's a 2-minute video I just recorded, experimenting with a new kind of live-updating, purely-text-mode shell built atop the Mu computer: https://archive.org/details/akkartik-2min-2020-12-06. The shell is implemented in a memory-safe programming language that is translated almost 1:1 to 32-bit x86 machine code. Without any intervening libraries or dependencies except (a handful of syscalls from) a Linux kernel.

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

[2] http://akkartik.name/akkartik-convivial-20200607.pdf

[3] https://www.ribbonfarm.com/2011/03/10/the-return-of-the-barb...

The author of this article, Samo Burja, also gave a great lecture in 2018 called “Civilization: Institutions, Knowledge and the Future” (https://youtu.be/OiNmTVThNEY) which was mentioned in Jon Blow’s popular 2019 talk on software complexity, “Preventing the Collapse of Civilization” (https://youtu.be/pW-SOdj4Kkk). Burja’s talk is worth watching if you’re interested in his analysis of technology and history over an even longer period, and it provides some context for his remarks at the end of the article about intellectual institutions and the benefits of centralisation.
Jonathan Blow recently talked about this in https://www.youtube.com/watch?v=pW-SOdj4Kkk. Maybe this is where the author got the idea of noting them down?

These discussions usually invite lots of hand-wavy complaints and objections without much concrete progress. Out of boredom, and to encourage better dialogue, I've tried to address every bug in that list:

> iOS 14 discharged phone battery 80% to 20% during the night (no activity, much worse than iOS 13).

Since before iOS 14 there's been an AI system that monitors your battery usage and, e.g., refrains from charging during certain times, among other features. It's likely that this system got tweaked (as opposed to a sudden battery failure and recovery).

> YouTube.app randomly scrolled to the top.

Dunno about this one. Did you touch the status bar at the top of the screen?

> Instagram reset scroll position after locking/unlocking the phone.

Probably forgot to add that bookkeeping to the before-locking hook, and/or the hook before getting evicted from memory.

> Race condition in keyboard in DuoLingo during typing.

Don't use Duolingo anymore. Can't comment.

> AirPods just randomly reconnected during use.

Bluetooth?

> Shortcuts.app stopped reacting to touch for ~30 sec.

Hard to say. Undefined state/exception bugs maybe.

> Wondered why my apps were not up to date, found nine apps waiting for manual button click.

Push/pull model problem, battery conservation heuristics, server's notification scaling being best-effort, etc.

> Workflowy cursor obscured by Workflowy toolbar, typing happened behind keyboard

Workflowy's iOS app uses web technology. The keyboard + floating bar layout is a recurring problem with said tech.

> AirPods showed connected notification, but sound played from the speaker.

Bluetooth...?

> Passcode unlock worked for the third time only.

Never happened personally. Can't comment.

> Overcast widget disappeared while switching to another app.

That one's almost certainly down to the animation-system bugs introduced in iOS 7. TL;DR: uninterruptibility + a special thread/process causing extra undefined state + the GPU.

> YouTube forgot video I was just watching after locking/unlocking the phone.

Same as the Instagram diagnosis.

> YouTube forgot the resolution I chose for the video, keep resetting me to 360p on a 750p screen.

Dunno. No longer use the YouTube app. Network? Sometimes the UI can be misleading. The quality option might just be a best-effort setting that it doesn't guarantee to respect. Someone else check this.

> 1 hour lost trying to connect 4k @ 120 Hz monitor to MacBook Pro.

Definitely not enough stress/integration testing, so it's unsurprising that anything might happen. Sometimes it's provably impossible to get right, because neither party controls the whole stack.

> Workflowy date autocomplete keeps offering me dates in 2021 instead of 2020.

See earlier. It's not using the native NSDate (Workflowy uses Moment.js). Plenty of room for error. NSDate's API usually won't nudge you toward things like off-by-ones (I think).

> In macOS context menu, “Tags” is in smaller font

Intentional.

> Transmission quit unexpectedly.

And slowly =P. Likely due to an exception/mishandling of memory.

> Magic Trackpad didn’t connect right away after boot, showed the “No Trackpad” window.

Lots of preemptive races possible here.

> Hammerspoon did not load profile on boot.

Dunno. I don't use it.

> Telegram stuck with one unread message counter.

A few other chat apps do that too. Often it comes from denormalizing the unread-message count in the DB. That, or something about the notification system.

> Plugging iPhone for charging asks for a software update.

It's a feature, not a bug ™ =).

> Dragging an image from Firefox doesn’t work until I open it in a separate tab.

That dragging is reimplemented using cross-platform tech I believe.

> YouTube fullscreen is disabled in an embed.

Intentional. This is an option the embedder needs to opt into.

> Slack loaded, I started typing a response, then it reloaded.

Depends what you mean by "reloaded". Without further description, it might be either a crash plus a browser-driven page reload, or some long in-app rerender due to React, state, and network.

> Twitter was cropping important parts of my image so I had to manually letterbox it.

Quite a few pieces of drama surrounding this recently. Won't comment.

> TVTime failed to mark an episode as watched.

Never used it.

> Infuse took 10 minutes to fetch ~100 file names from smb share.

No batched api + other shenanigans. Happens to Reminders too.

Sounds like this might have been prompted by Jonathan Blow's talk, Preventing the Collapse of Civilization: https://youtu.be/pW-SOdj4Kkk

It's a shame that there isn't usually enough incentive in mainstream software practice (or business reality) to polish and make things work flawlessly - day to day life could be a lot nicer if we dived deeper and thought longer-term.

Sep 25, 2020 · 1 points, 0 comments · submitted by turing_complete
Sep 21, 2020 · Reedx on I no longer build software
Plus there's a massive amount of pollution, growing by the day. Everything is built on an increasingly shaky foundation and starts rusting right away. More time is spent on trying to figure out why X isn't working, less time on actually building.

A couple recommended talks about this subject.

Preventing the Collapse of Civilization: https://www.youtube.com/watch?v=pW-SOdj4Kkk

The Thirty Million Line Problem: https://www.youtube.com/watch?v=kZRE7HIO3vk

You've probably seen Jonathan Blow's talk on the subject: https://youtube.com/watch?v=pW-SOdj4Kkk

I'm super interested in the subject: https://github.com/akkartik/mu. In the framework of this thread I focus on 2 of the 3 layers: problem and software. Hardware is out of scope. For now. Recently I've started thinking about BIOS (inspired by https://caseymuratori.com/blog_0031). So perhaps it's only a matter of time.

dfischer
I haven’t seen the talk actually. I’m increasingly more interested in this problem space and it’s good to find like-minded people and ideas. Thanks for sharing. I’ll look into it!
> Use battle tested frameworks. For crypto, for sites, for everything.

In other words, don't roll your own… anything?

Just, no. Down that path is lost knowledge and the decline of our civilization. Enough of us must know the fundamentals. https://www.youtube.com/watch?v=pW-SOdj4Kkk

Quality Software: Yes, this is becoming more and more important and is overlooked at many levels. With more of our infrastructure, our lives, and the future of humanity in the hands of a bunch of software and hardware, we need to take things a tad more seriously. I believe this is not yet a big problem, but it is becoming one.

https://youtu.be/pW-SOdj4Kkk

https://xkcd.com

dredmorbius
What xkcd comic did you mean to link? That's the homepage.
dredmorbius
I'm presuming you meant https://xkcd.com/2347/
I agree with the sentiment that they only augment our capabilities, but then we end up with another problem, which is that the people who write the software are not fully aware of how it all works. Relevant to my argument is Jonathan Blow's `Preventing The collapse of Civilization` talk in which he discusses the disappearance of knowledge as generations pass: https://www.youtube.com/watch?v=pW-SOdj4Kkk&t=3407s
fredliu
Cargo-Cult programming is already happening at a not-insignificant scale due to the vast amount of sample code/tutorial/Q&A online. There could be lots of nuances in that snippet you just copied from a SO comment, which "just works". GPT-3 based tools could help you generate a working (or barely working) sample fairly easily. But the decision to just ship that code as is, or to understand it and tune it for your specific needs still mostly depends on the developer.
MattGaiser
> the people who write the software are not fully aware of how it all works.

Is this much different from now? I have no idea how most of the libraries I use are implemented.

mrkeen
You don't know how other peoples' libraries work, and that's fine.

I think in this case the argument is that the author of the library also doesn't know how it works. Which means they can't fix it.

chosenbreed37
In addition to this, if the source code is available you could potentially take a peek and generally understand what's going on. But I can imagine that we could have a generation of developers who know little about the nuts and bolts that underpin how their software works. Perhaps this is now already the case in certain domains. E.g. you could be working on a Jupyter notebook and be effective without being aware of what's happening behind the scenes. I think this is qualitatively different, as in this example you could be working at such a high level of abstraction that the nuts and bolts are not something you'd even be aware of. Whereas if you're writing a Java program and you bring in some third-party libraries, you could potentially look up that library. But more importantly you're still relatively close to the metal.
jefe_
I've found that I typically don't need to know how it works if my use case is common and the library is well documented, but when either of those doesn't hold it can be very helpful to read through the implementation in the source to understand how best to implement. But I think the black box would be less of an issue when interfacing with documented third party libraries, than with internally developed services and libraries, particularly in smaller orgs. Team A 'generated' Service A and Team B needs to integrate it in their 'generated' Service B, it seems that could get messy and would be tough to test or troubleshoot. Possibly an additional AI tool specifically for compatibility and integration could solve that problem.
wilburTheDog
I think that's unavoidable. There is always a level at which your understanding of your tools gets fuzzy and incomplete. Do you understand everything about what your compiler is doing? Your OS? Your cloud provider? The hardware any of them are running on? The majority of us are standing on the shoulders of the few that do understand those things and provide them to us to use.

And if you mean that we would not understand what the software we write is doing to produce the result it does, aren't we already there with machine learning?

commandlinefan
On the other hand, the better you understand the compiler/OS/hardware, the better software you'll be able to write. Just like medical doctors could theoretically do their job without a deep knowledge of, say, organic chemistry, I can imagine a near future where software development means tweaking the inputs to GPT-3 (or some other AI) based on a deep knowledge of the layers beneath it: sort of a "computer doctor".
In his talk "Preventing the Collapse of Civilization" (https://www.youtube.com/watch?v=pW-SOdj4Kkk) Jonathan Blow is defending similar ideas.
It costs so much mostly because programmers don't know how to write software cost-efficiently. As everybody follows the same path, VCs start to believe the cost is increasing when in fact it's the programmers' knowledge of cost-efficient software design that is decreasing. Take a look at this: https://www.youtube.com/watch?v=pW-SOdj4Kkk

The costs to develop software are ridiculous and unrealistic. It turns out that if enough people act as ridiculous and unrealistic that becomes the new normal.

I took a couple of compiler courses in university, and I think for the most part it made me a better programmer, mainly because it enforced understanding of how specific programming concepts and constructs are generally implemented. That, and a more intense look at the "heap of abstractions" you're generally working with when you write a higher-level language.

I've always advocated that, if you want to really understand the tools you are using, you need to understand at least one level below the "surface abstraction" that you are working with. Even better if you can understand two or three levels down.

There's a great talk from game developer Jonathan Blow [1] that describes how knowledge is lost "generationally" due to our lack of understanding of the black boxes we build things on top of. Not sure I 100% agree with his thoughts, but it's an interesting take.

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

The problem is that both hardware and software are garbage.

Spectre/Meltdown & friends are just the tip of an iceberg. We have layers & layers of indirection/abstraction everywhere. We have hardware that lies and tells you that it has certain properties when in reality it doesn't (examples: sector sizes in hard drives/NVMs, processors still pretending that they behave like a PDP-11), and we have hardware that is flat out broken. We try to fix those issues in software.

But in the software, we have another dump of workarounds, dependencies, and abstractions with a sprinkle of backward compatibility. We are now creating "minimalist" applications with a fraction of the functionality of the software from 30 years ago, but using so many layers that the total amount of code needed to make them work is many orders of magnitude larger than what we had back then.

I know that most programmers never worked with systems where it's very, very easy to debug the whole stack and where you can learn it all in a short period, but it's amazing when you have knowledge of EVERY part of the system in your head.

There are some good things going on (like the push to replace C with something that has similar performance characteristics but without its flaws), but it's not enough.

Here are two things worth watching:

https://www.youtube.com/watch?v=pW-SOdj4Kkk - Jonathan Blow - Preventing the Collapse of Civilization

https://www.youtube.com/watch?v=t9MjGziRw-c - Computers Barely Work - Interview with Linux Legend Greg Kroah-Hartman

matheusmoreira
> we have hardware that is flat out broken

Reading Linux driver code is very informative. Sometimes hardware just doesn't do what is expected and the driver must try and fix it up so that user space can have a reasonable interface.

A simple example:

  /* The ITE8595 always reports 0 as value for the rfkill button. Luckily
   * it is the only button in its report, and it sends a report on
   * release only, so receiving a report means the button was pressed.
   */
  if (usage->hid == HID_GD_RFKILL_BTN) {
      input_event(input, EV_KEY, KEY_RFKILL, 1);
      input_sync(input);
      input_event(input, EV_KEY, KEY_RFKILL, 0);
      input_sync(input);
      return 1;
  }
Dunedan
It's even more interesting when you can infer why it's broken. Let's take the NVMe controller in the MacBook Pro 2016 and later for example: That controller is not properly detected by Linux and needs a quirk [1] to be identified by its PCI device id instead.

Why is that? Well, Linux usually detects NVMe devices based on their PCI class. The class for NVMe devices is 0x010802. So guess what the Apple controller reports as its class id: 0x018002. If you have to compare the ids twice to notice what's different, you're not alone. My guess is that this subtle difference is just a human error made by an Apple engineer that wasn't caught during QA, and macOS either works around it as well or doesn't use the PCI class anyway.

So for the same reason software isn't perfect, hardware (or the firmware powering that hardware) is neither.

[1]: https://github.com/torvalds/linux/blob/b791d1bdf9212d944d749...

[2]: https://lists.infradead.org/pipermail/linux-nvme/2017-Februa...
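
To make the quirk mechanism concrete, here is a minimal user-space sketch of the idea (not the actual Linux nvme driver code; the device id in the table is made up for illustration): match on the PCI class code first, and fall back to an explicit vendor/device table for controllers that report a bogus class.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PCI_CLASS_NVME 0x010802u  /* mass storage / NVM / NVMe */

  /* Hypothetical quirk table: devices that are NVMe but report a wrong class. */
  struct pci_quirk { uint16_t vendor; uint16_t device; };
  static const struct pci_quirk nvme_quirks[] = {
      { 0x106b, 0x2001 },  /* 0x106b is Apple's vendor id; device id is illustrative */
  };

  static bool is_nvme(uint32_t class_code, uint16_t vendor, uint16_t device)
  {
      if (class_code == PCI_CLASS_NVME)
          return true;  /* the normal, spec-compliant path */
      for (size_t i = 0; i < sizeof nvme_quirks / sizeof nvme_quirks[0]; i++)
          if (nvme_quirks[i].vendor == vendor && nvme_quirks[i].device == device)
              return true;  /* quirk: trust the id even though the class is wrong */
      return false;
  }

  int main(void)
  {
      /* A controller advertising the transposed class 0x018002 is still claimed
       * because its vendor/device id is in the quirk table. */
      printf("%d\n", is_nvme(0x018002, 0x106b, 0x2001)); /* prints 1 */
      printf("%d\n", is_nvme(0x018002, 0x8086, 0x1234)); /* prints 0 */
  }

This is roughly what the quirk referenced in [1] accomplishes, just expressed through the kernel's own id-matching machinery rather than a hand-rolled loop.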

scroot
Simplicity and good design take lots of time and money. Our culture is not truly ready to make these kinds of investments in the manner required. Why would they? There is a whole universe of FOSS out there upon which anyone can cheaply create "working" software. If your goals are short term (quarterly earnings, looking only a year or two down the road) this is "good enough." Worse, that FOSS foundation is typically filled to the brim with complexity. We have created a computing culture that is premised on pushing the extremes of the teletype model of computing, and tacking what customers think they want on top of it.

We have good alternate examples from the past (Oberon, Lisp machines, Hypercard, Smalltalk systems, etc). How often does the new generation of computing people get exposed to these ideas?

de_watcher
No, FOSS/proprietary is orthogonal to that. You can find simple and performant software. The problem is in something else.
zozbot234
> Worse, that FOSS foundation is typically filled to the brim with complexity.

Really? In my experience, FOSS tends to be a lot simpler and more streamlined than non-free software with comparable functionality.

na85
I agree with 'de_watcher that FOSS/encumbered is an orthogonal axis to complexity/simplicity.

Lots of FOSS software is excessively complex (the systemd ecosystem of shovelware comes immediately to mind) and lots of FOSS is simple. Similarly there are untold thousands of overcomplicated/overengineered proprietary suites and of course it's hard for a graphical application to get simpler than notepad.exe.

asdfman123
And what software developers consider "simplicity and good design" often comes across to other people as "I have no idea what to do and looking at this literally gives me anxiety to the point that I want to avoid it."
ardy42
> And what software developers consider "simplicity and good design" often comes across to other people as "I have no idea what to do and looking at this literally gives me anxiety to the point that I want to avoid it."

Software UX is a garbage fire [1], but good software UX doesn't necessarily mean building things an untrained user can easily figure out how to use. That's just the orthodoxy we've happened to take with most software, which may also limit its potential. See https://99percentinvisible.org/episode/of-mice-and-men/.

cjfd
I don't think FOSS is the problem either. I think much complexity is required because everything is expected to live on the web and also/thereby is expected to be client/server. As soon as one wants to do more than just display a document on the web one discovers that its architecture is not very suitable for doing anything else besides displaying a document. A badly formatted document at that..... Also the expectation of client-server communication is a big driver of complexity. As soon as one has that we have network communication, serialization and so on. I.e., stuff that is on the large side of things to write oneself. Of course, with the web one more or less has a client-server architecture by default.
bob1029
It would seem there may be a lack of appreciation for how powerful a modern x86 CPU actually is. Even when you apply every side-channel mitigation in the book, these processors are incredibly powerful when used appropriately. Somehow x86 is being branded as this inferior compute facility, always at odds with the GPU/ARM/et al., and I feel it mostly boils down to shitty software engineering more than anything else.

I believe a lot of this can be attributed to rampant, unmitigated abuse of the NUMA architecture exposed by the x86 stack, as well as a neglect for the parallelism offered by CPUs of the last 3-4 generations. Most developers are not following principles which align with what the hardware can actually do, rather, they are following principles which align with what academia/cult-wisdom says you should do. These two worlds are vastly at odds when it comes to keeping your L1/2/3 caches as hot as possible and keeping all of those cores busy with useful work.
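
As one concrete, well-worn illustration of the cache point: the same arithmetic performed in two traversal orders can differ enormously in runtime purely because one order streams through memory while the other strides across it (a minimal sketch, array size arbitrary).

  #include <stddef.h>

  #define N 4096

  static float grid[N][N];

  /* Walks memory in the order it is laid out; the prefetcher and L1/L2 love it. */
  static double sum_row_major(void)
  {
      double s = 0.0;
      for (size_t i = 0; i < N; i++)
          for (size_t j = 0; j < N; j++)
              s += grid[i][j];
      return s;
  }

  /* Same arithmetic, but each access lands N * sizeof(float) bytes away from
   * the previous one, so nearly every read misses the cache. */
  static double sum_column_major(void)
  {
      double s = 0.0;
      for (size_t j = 0; j < N; j++)
          for (size_t i = 0; i < N; i++)
              s += grid[i][j];
      return s;
  }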

For example, most software engineers would tell you that busy waiting is a horrible practice, but when you step back and pay attention to the hardware reality, you have 32+ threads to burn, so why not put your ultra-high-performance timer as a high-priority process on one of those threads and let it check trigger conditions in a tight loop? You can get timing errors measured in 10s of nanoseconds with this approach, and it's dead simple. Now you only have 31 threads remaining. 30 if you really want to make sure that timer runs smooth. Running at 3+ GHz, that one thread could service an incredible number of events per second. There are new ways to think about how we build things given newer hardware capabilities.
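
A rough sketch of that dedicated spin-wait timer, assuming Linux and pthreads (the pinned core number and the 500 ms deadline are arbitrary choices for illustration):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  struct spin_job {
      uint64_t deadline_ns;   /* when to fire           */
      uint64_t fired_at_ns;   /* filled in by the timer */
  };

  static uint64_t now_ns(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
  }

  /* Dedicate one core to spinning on the clock instead of sleeping, so the
   * scheduler never gets a chance to add wakeup latency. */
  static void *spin_timer(void *arg)
  {
      struct spin_job *job = arg;

      cpu_set_t set;                      /* pin to core 3 (arbitrary choice) */
      CPU_ZERO(&set);
      CPU_SET(3, &set);
      pthread_setaffinity_np(pthread_self(), sizeof set, &set);

      while (now_ns() < job->deadline_ns)
          ;                               /* burn the core; that's the point */

      job->fired_at_ns = now_ns();        /* "trigger condition" met */
      return NULL;
  }

  int main(void)
  {
      struct spin_job job = { .deadline_ns = now_ns() + 500000000ull }; /* 500 ms */
      pthread_t t;
      pthread_create(&t, NULL, spin_timer, &job);
      pthread_join(t, NULL);
      printf("overshoot: %llu ns\n",
             (unsigned long long)(job.fired_at_ns - job.deadline_ns));
  }

In practice you would also reserve that core from the general scheduler (e.g. with isolcpus), but even this naive version avoids the millisecond-scale wakeup latency a sleep-based timer can incur.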

I feel a lot of the high-performance software revolution is going to come out of some ideas that have been floating around in fintech. Frameworks like the LMAX Disruptor (and the ideology behind it) can serve as the foundation for a UI framework capable of transacting tens of millions of aggregate user events per second and with peak latencies measured in microseconds. I have personally started to dabble in this area, and the results after just a few weekends have been very encouraging. You would be surprised what even high level languages (Java/C#) are capable of when their use is well-aligned with the hardware.
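
For readers unfamiliar with the Disruptor, here is a toy single-producer/single-consumer ring buffer in C11 that sketches the core idea (my own simplification, not the LMAX code): two monotonically increasing sequence counters and acquire/release ordering instead of locks.

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define RING_SIZE 1024u                 /* must be a power of two */
  #define RING_MASK (RING_SIZE - 1u)

  struct ring {
      _Atomic uint64_t head;              /* next slot the producer will write */
      _Atomic uint64_t tail;              /* next slot the consumer will read  */
      uint64_t slots[RING_SIZE];
  };

  /* Producer side: returns false when the ring is full (consumer is behind). */
  static bool ring_push(struct ring *r, uint64_t value)
  {
      uint64_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
      uint64_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
      if (head - tail >= RING_SIZE)
          return false;
      r->slots[head & RING_MASK] = value;
      /* Release store publishes the slot write before the new head is seen. */
      atomic_store_explicit(&r->head, head + 1, memory_order_release);
      return true;
  }

  /* Consumer side: returns false when there is nothing to read. */
  static bool ring_pop(struct ring *r, uint64_t *out)
  {
      uint64_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
      uint64_t head = atomic_load_explicit(&r->head, memory_order_acquire);
      if (tail == head)
          return false;
      *out = r->slots[tail & RING_MASK];
      atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
      return true;
  }

The real Disruptor adds batching, cache-line padding between the counters, and multi-consumer coordination, but the hot path has the same shape: no locks, no allocation, just sequence arithmetic.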

pwdisswordfish2
Jonathan Blow: "Software has been freeriding on hardware."

True or false? If I make the code that does a particular task smaller, faster, less resource-intensive, then I am not freeriding.

He says people do not reference the "five nines" anymore. True? I do not work in the industry anymore. I had no idea this has disappeared. That is really sad.

"Developer time" versus "user time". What is more important? Are they equally important?

Recently someone posted a video of a recent talk from Margo Seltzer. She said users, e.g., in the Physics Department, do not care about computer languages and these things that computer scientists and programmers think are so important. They care about how fast the program runs. That's all. "Make it go faster".

The incentives seem backwards. We pay programmers more today to do less than they did in the past. There is a lot of "busy work" going on.

MaxBarraclough
You might enjoy this 2016 article by Chuck Moore (the Forth guy). His position is pretty extreme, as he dismisses static typing as needless complexity, thinks even C is too elaborate, and he doesn't touch on web technologies, but still worth a read. [0]

Also, mandatory link to the Software Disenchantment article. [1]

[0] https://web.archive.org/web/20160311002141/http://colorforth...

[1] https://tonsky.me/blog/disenchantment/

ponker
In spite of all this "garbage" I carry around a $500 machine which fits in my pants pocket and gives me directions to anywhere I can think of, a live video call (!) with my friends or family, a vast trove of knowledge about millions of different topics, and a camera that basically matches my $3000 DSLR from six years ago. And these are available whenever and wherever I want, so I can do this video call at 2am on a mountaintop if I want. So... I love this garbage.
ecf
I too sometimes have the same feeling that the software stack we have today is just a tower of cards waiting to crumble.

How long do you think it would take to get back to where we are if everything was scrapped and we restarted with the initial binary -> assembly jump?

taneq
This is kind of like someone living during the Crusades yelling metallurgy is shiiiiit.
BruceEel
> https://www.youtube.com/watch?v=pW-SOdj4Kkk - Jonathan Blow - Preventing the Collapse of Civilization

Priceless, thanks for sharing.

sharkjacobs
This doesn't seem to have anything to do with the article.

Am I wrong to wish that the top voted comment was by someone who read the article before they posted?

darepublic
Well, is the solution more tech (like improving on the C language), or is it making sure we have robust non-technical fallbacks as a civilization?
bcrosby95
I find it interesting that your two links seem to contradict each other.
trimbo
> The problem is that both hardware and software are garbage.

I think it's incredible that, in my lifetime, computers went from giant mainframes with dedicated hard-line terminals to always-connected supercomputers in everyone's pocket, worldwide. Furthermore, anyone can use the internet to learn how to program.

Maybe that's garbage compared to some mythical ideal but in terms of impact on the world it's incredible.

> I know that most of the programmers did not work with systems where it's very, very easy to debug the whole stack and you can learn it in a short period but it's amazing when you have knowledge about EVERY part of the system in your head.

Well, you can tell from the above that I was around then. I started programming with a single manual and the Beagle Brothers "Peeks, Pokes and Pointers" cheatsheet[1].

People forget that the software itself had to do much less than it does today. Here's just one angle to judge: security. We did not have a worldwide interconnected network with so many people trying to steal data from it. We all used rsh and packets were flying around in cleartext, no problem. But today, all software will have to incorporate TLS.

And far fewer people built that simpler software. EA's first titles were written by one or two people. Now a typical EA title has hundreds of people working on it.

Things will get better than where they are today. In the future, the industry will have to invest more money in "10X" productivity and reliability improvements. Eventually, I think that will happen as productivity continues to slow on large codebases.

[1] - https://downloads.reactivemicro.com/Apple%20II%20Items/Docum...

nightski
Nothing wrong with being excited by the progress made but I think engineers (well, especially myself) tend to be critical of things because it is in our nature. We see what they could be, not what they are.

I think vocalizing how bad the current situation feels is the only path to improvement.

You seem convinced that things will get better. I don't think all of us are that optimistic (and I am generally an optimist!).

Nextgrid
And yet, even doing something as basic as a smart doorbell requires a backend server somewhere (essentially a mainframe) and all kinds of NAT hole-punching, UPnP and proprietary push notifications despite the supercomputer in your pocket technically being able to listen for incoming TCP or UDP packets from the doorbell directly.

The "supercomputer" processing power is also being wasted on all kinds of malicious and defective-by-design endeavors such as ads, analytics, etc (install any mainstream app and look at the network traffic, 90% of it will be for analytics and can be blocked with no ill effects).

Despite the supercomputers being 10x faster than the early ones (back in the iPhone 3G days) we somehow lost the ability to render a 60fps non-stuttering UI despite modern UIs being less rich and consisting mostly of whitespace.

> anyone can use the internet to learn how to program

I think there used to be a "golden age" of this where the resources were all available for free and at the same time the stacks were manageable (think basic PHP deployed on shared hosting, or Ruby, or Django), whereas nowadays it is considered "wrong" if you don't use Kubernetes, 10 microservices, and 100+ NPM packages just to serve a "Hello world" page to a browser.

trimbo
I agree that software has gotten overly complex for the benefit, and Kubernetes is a good example. But it will improve again.

You mentioned Ruby and Django... the popularity of those in the Aughts were a swing back towards simplicity from overly complex Enterprise Java. Remember?

asdfman123
I wish smaller shops had some smaller role model to follow instead of your Facebooks or Googles. No, we don't need the same tools they use, because they use those tools to deal with deep organizational and technical complexity. If you don't have deep organizational complexity, and especially if you don't have separate data stores, you don't need microservices.
tarsinge
It’s exactly my philosophy in the small shop I founded, it’s a real struggle to educate both developers and clients.
Nextgrid
Are you an agency? I'd love to hear your experience with this problem. As a contractor I find it very hard to find clients with "sane" stacks to work on. Even early stage companies nowadays (who would typically get contractors to build their MVP) seem to already settle on an overcomplicated mess despite their scale not warranting it. Maybe I'm just looking in the wrong place.
tarsinge
Yes, we are an agency, so we are fortunately able to choose the stack, but we still have to convince clients (it hopefully gives us an edge on pricing). It's a reality that clients are looking for buzzwords, because that's what they hear and they want the industry standard; I cannot blame them. From what I see in our niche, what we are choosing to do is the exception, so indeed I imagine it must be hard as a contractor to find simple stacks to work on.
asdfman123
It's easy to do, too. Just make boring software. C# is fine. SQL server is fine. A monolith is fine. That's it. (Replace with your favorite language.)
Nextgrid
Making boring software is easy. Getting other people to pay for it is harder in a world where everything needs to be Kubernetes, React, AI or Blockchain.
mechEpleb
You don't need a backend server to make a smart doorbell that pings your phone over WLAN. You only need one if you want to add loads of non-trivial functionality for non-technical users who might not be on the same network. You're welcome to build your own device with a wifi-capable microcontroller and a simple phone app directly accessing the socket interface.
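
As a sketch of what "directly accessing the socket interface" could look like on the device side (POSIX C; the message format and port are invented for illustration), the doorbell only needs to fire a small datagram at the phone's LAN address when the button is pressed:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Fire-and-forget "ding" notification over the local network.
   * A real device might discover the phone via mDNS or use the subnet
   * broadcast address instead of a fixed IP. */
  static int send_ding(const char *phone_ip, uint16_t port)
  {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
          return -1;

      struct sockaddr_in dst;
      memset(&dst, 0, sizeof dst);
      dst.sin_family = AF_INET;
      dst.sin_port = htons(port);
      inet_pton(AF_INET, phone_ip, &dst.sin_addr);

      const char msg[] = "DOORBELL:ding";
      ssize_t n = sendto(fd, msg, sizeof msg, 0,
                         (struct sockaddr *)&dst, sizeof dst);
      close(fd);
      return n < 0 ? -1 : 0;
  }

The harder half is the phone listening for that packet in the background, which is exactly where the platform restrictions mentioned elsewhere in this thread start to bite.
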
asdfman123
It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network. And the startup making the doorbell wanted to make a quick proof of concept to get funding and then build organically using a flock of less expensive younger developers.

Alternatively, a company could invest money into writing something that looks beautiful to software developers that you could SSH into. The architecture would be sound because several gray-bearded devs would talk about the relative merits of different approaches. It could offer so much more functionality if the user is willing to acquire minimal knowledge of Linux. The only problem is that the only people interested in it would be other software developers.

We're building stuff for people, not other geeks. Businesses invest in software as a means to an end, and if the backend is ugly but accomplishes that end, then it's successful.

codeisawesome
...what? The doorbell can't talk to the phone because the supercomputer in our pockets is not really ours - its functionality is gimped by the various software and operating-system providers that control it from the respective motherships.
bcrosby95
The doorbell can't talk to the phone because bluetooth can't reach across my house much less across the city when I'm at the store.
aspenmayer
This is accurate, but is not the whole story. I’m running a terminal on iOS right now to ssh into my Linux server.[1] There are terminal emulators on Android too.

I wish there were more GUI apps centered around hybrid cloud/shell use cases. I would like to be able to make GUI widgets to do things in a ssh session on my server. I’m not sure how important it would be to run on the device; it could be a webapp I run on the server itself to trigger scripts. It’s a UI/UX that centers around touchscreen input, is reconfigurable, and can perform arbitrary commands or events server-side, which I find lacking. Anyone know of tools that scratch this itch?

[1] https://github.com/ish-app/ish

yellowapple
I ain't sure how "I can run a terminal on a phone" has much to do with "I have full control over the physical machine I paid for and ostensibly own". Unless you're sideloading via jailbreaking (which has a score of problems, not the least of which being that Apple is hell-bent on "fixing" jailbreaks, resulting in a cat-and-mouse game with no end in sight), your ability to run that terminal is exclusively the result of your device's manufacturer explicitly deeming that particular terminal (and the programs it executes) worthy.

Android is slightly better in this regard in the sense that it's (usually) easier to sideload applications, and sometimes even possible to modify or outright replace (most of) the operating system, but this, too, is subject to the whims of manufacturers rather than being something the user (and the user alone) controls.

----

On another note, I, too, would be interested in scratching that itch. It seems like it'd be straightforward to throw together.

efreak
My understanding is that Android is going to start disallowing execution of downloaded binaries, and they'll need to be in an app's lib folder. See the Termux issues on GitHub for a discussion of this.
spease
> It requires all of those things because users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network.

You’re missing the point. The issue is that the underlying layer is more complex than it needs to be, not that the company needs to use that underlying layer to solve business requirements.

This is analogous to a freeway that’s too small.

Karrot_Kream
> users want a doorbell that will easily connect to WiFi and a nice phone app, that includes not only movement alerts but also a miniature social network

I think "users" is a stretch here. Having worked at a couple $MEGACORPs and Unicorns, the bar for a feature being successful is very low. Most of the time, features are driven by Product Managers on product teams that come up with ideas. Validation of the success of these ideas (unless they are highly visible and core to the business), in my experience has been minimal (was there a regression? no, then great!), and don't even get me started on the numerous methodological issues.

> We're building stuff for people, not other geeks

I think computing is unique in how much we (the royal "we" here, and I have nothing more than anecdata to back these observations up, so take them with a grain of salt) focus on trying to hide the internals, almost as if it stems from an embarrassment with what computing is, as something only for "geeks". How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume? Obviously, business needs necessitate a tradeoff between beauty and functionality in any field, but despite the ubiquity of tech, it feels to be one that is uniquely embarrassed by the art and technique behind it. Maybe this is just an outgrowth of the "nerds are weird" culture of the '80s and '90s?

I think the reason that users put up with such bad software is twofold:

1. Computing is new, and the general population doesn't really understand what it means to be good at it yet. The general population has enough education with things like woodwork to understand what a shoddy table is, but not yet what shoddy software is. That said, I know several non-technical users that prefer using the iOS ecosystem because it has lower perceived input latency and higher perceived stability (much like the OP of the article), so users are certainly not blind to the problems of software.

2. Software as a field is deeply embarrassed about its technical internals. The fact of the matter is, we don't need to be worried about "our grandparents" being able to use our software anymore; the vast majority of young folk in industrialized countries have spent their whole lives growing up using technology. Yet, we are still obsessed with the creation of a dichotomy between "graybeards" and "normal people", or "geeks" and "average users". We need to stop treating users as these opposed forces, hidden from the innards of the technical beast, and instead embrace them as part of what it means to create a successful product.

asdfman123
> How often do you hear of musicians receiving censure for not making music that "other people" listen to, or artists receiving criticism for art that "regular people" consume?

All the time, when those artists have outside investors who are investing to make a profit off of their work. It's a recurring theme in pretty much every music documentary I've watched. Accessibility is frequently at odds with artistic development. At some point, people just want you to crank out the same boring (widgets/songs/art/software) that you've done a million times, and do it cheaply.

Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.

yellowapple
> All the time, when those artists have outside investors who are investing to make a profit off of their work.

Prioritizing commercial gain over intellectual and creative stimulation and enrichment is, on that note, one of the biggest issues I have with the modern music industry. My views don't change much if you replace "music" with "film" or "television" or "journalism" or, to the point, "hardware" or "software".

Karrot_Kream
> Indie rock artists who are just playing in bars to their friends don't have that economic pressure, and you're always welcome to build your own software for your friends along with maybe a few hardcore enthusiasts.

I don't think it's as simple as "indie artists" and "artists with investors". There's room for innovation while making profit, and there's room to make catchy songs when you're an indie artist. In software, there seems to be a large divide between hobbyist code and industrial software. How many folks really write software for themselves and friends? I'd love to see a world where people grow up tinkering with software, so they can have an informed stake as a user instead of being a blind consumer.

d_tr
> We're building stuff for people, not other geeks.

Of course, but everyone, including the end user, would greatly benefit from a less messy and more elegant, consistent "computing stack". The programmers would be (much) happier, the code would be smaller and better, the product would be better and cheaper and the freed resources could be allocated elsewhere.

These improvements (and IMHO there is huge potential for such improvements on all the layers) would bring the same kind of benefits that better automation has brought.

ska
These arguments have been made, mutatis mutandis, roughly since the beginning of computing. At least, certainly since the 70s/80s.

Gabriel articulated some of these in "Worse is Better" (89?) but it wasn't new then.

groby_b
The end user does not care about the "computing stack" at all. "Does it work" and "is it cheap enough" are the main considerations.

And "the product would be better and cheaper" is really wishful thinking. I've worked in this industry for ~4 decades, so I really remember vividly working on those "less messy and more elegant" systems, and no, building a remote-controlled doorbell with life video on one of them would neither be better nor cheaper.

The sheer idea of doing this on a 128B machine (Olivetti Programma 101, my first machine) is ludicrous. The idea of processing any kind of video on a DG Nova Eclipse(second one) is... ambitious.

The first time we had machines with the processing power to do useful things with video for an at least semi-reasonable price was sometimes around the introduction of the Pentium, with dedicated hardware to process the incoming camera stream. I happen to know because I moved a video package that required a cluster of SGI machines to said P90.

Yes, the code was small. The video processing core was also the result of 6 rather sleepless weeks, squeezing every single cycle out of the CPU. I couldn't just grab OpenCV and make it happen. It also involved lots of spectacular hacks and shortcuts that happened to work in the environment in question, but would break down for a general purpose video processing system.

Around that same time the world also just barely had figured out the basic ideas behind making WiFi work usefully. But let's assume we would've just hard-wired that camera. If you wanted to do that with an embedded system, you wrote your own networking stack. That takes about 18 months to make work in anything that's not an extremely controlled environment - it turns out people's ability to misread networking specs and produce barely non-compliant systems is spectacular. (Source: Been there, done that ;)

So now we have the most basic system working. We have, however, no way of storing the video stream without more dedicated hardware, because realtime compression is not possible. (And if you think that dedicated hardware was less messy and more elegant... /lolsob)

Was the total codebase smaller? Yes. Were the programmers happier? F# NO. Would it have had anywhere close to the functionality that something like Ring has? Yeah, really, no. Its size would've been roughly a large desktop tower. Its cost would've been ~$5000.

The "good old days" really weren't.

efreak
A remote controlled doorbell with live video doesn't need video processing. It doesn't need to be digital. It doesn't need to be high resolution unless you're working on sensitive materials--for most people, low-quality analogue would be good enough.
groby_b
At this point you're saying "if something that didn't have most of the features of a Ring camera would be built, it would be much simpler".

That's a tautology.

It's also ignoring market realities. We had doorbell systems with crappy analogue cameras, and nobody wanted them because they were overpriced for what they did, and what they did wasn't particularly useful. (The "good enough" low-res analog cams made for a great guessing game of who was at the door, but they didn't actually suffice for recognizing the person without them plastering their face right in front of the camera.)

And this is true for other consumer electronics too - of course you can build something inferior with less effort. But nobody wants that. (I mean, it's not like modern systems get built because somebody just wants to lose some money)

This leaves the conclusion that you're mostly arguing for it because you liked the developer "simplicity", and... it wasn't simple. It became simpler once we introduced computers. It became simpler as we built abstraction layers on top.

Yes, current systems are harder to understand top-to-bottom, and we paid for ease of development with increased systems complexity. But the rose-colored glasses of a "better past" are just that. Yes, we could understand the things we built more easily, but we couldn't build what we build today in that past.

I'm reminded of Jonathan Blow's talk, "Preventing the Collapse of Civilization" [1]. Are we going to end up with a generation of game developers who don't understand the internals of their game engines? The Unreal Engine 5 demo has really wowed everyone, but could we maybe reach a point in the future where these amazing game engines are struggling to compete against really inefficient game code?

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

troughway
One reason why I’m not a fan of Unity. Game engines are leaky abstractions. You hit a point where you need to know how things in general work - even if you don’t know the implementation verbatim.

Most of anything to do with programming is a leaky abstraction so hiding source code from developers and saying “you don’t need to know that” is charming at best, idiotic at worst.

Net code issues, rendering equations and HSV knowledge, audio buffers and DSP, input controller and output display latency/VR, psychology, down to IEEE 754 optimizations. The more you know, the better for you.

friendlybus
As soon as you want to do something the engine doesn't provide you have to start learning. This happens way more frequently than you'd think.

I still think a JASS-like code editor would have been better than Blueprints for giving newcomers a mental framework for transitioning into C++.

michaelbrave
I'm curious what you mean by JASS, a search brings up things about card playing
friendlybus
Warcraft 3's scripting language, used in its world editor.

Looked like this:

http://world-editor-tutorials.thehelper.net/dialog/dodialog....

meheleventyone
I don’t see this as realistic. Particularly when engines like UE4 (and probably 5) share their source code and have your own layer written in amongst the engine cruft. Further, engines like Unity, whilst they don’t provide source, don’t really let you make anything other than the simplest games without understanding what’s going on under the hood to a reasonable degree. Less technically adept game developers can get a good head start but generally will need to lean on technical colleagues or contractors.

Game development as a craft also requires a lot of creative problem solving which lends the people seriously involved in it to naturally push and probe at the edges of their knowledge.

29athrowaway
In C/C++, dependency management is not as convenient as in Rust. There's Conan, but you have to install it. Rust comes with Cargo.

So, in C++, there are more game engines developed using a minimal amount of dependencies, and in Rust, there are more game engines developed using more external dependencies.

As the majority of people transition to Rust, even the people working on "lower level" code will also forget what internals are about.

dang
Please don't use multiple accounts like this, or for voting. It's against HN's rules: https://news.ycombinator.com/newsguidelines.html.
Your annoying technology blog reminds me of this Jonathan Blow talk where he talked about the decreasing quality of software: https://www.youtube.com/watch?v=pW-SOdj4Kkk
...and they flipped open like Star Trek communicators.

Technology degrades: https://youtu.be/pW-SOdj4Kkk

COBOL can be a real pleasure to use.

For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so called SCREEN SECTION.

The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.

Regarding the underlying theme of legacy systems, here is an interesting presentation by Jonathan Blow, citing various examples of earlier civilizations losing access to technologies they once had:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

I think in terms of user interfaces, we are on a downward path in several important ways, and I highly recommend checking out earlier technologies that were in many ways an improvement over what we are working with today.

gumby
> For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so called SCREEN SECTION.

By the way, if it seems odd to have I/O like this in your language,* this reflects the architecture of mainframes; because the CPU was so important, I/O was managed by external devices (typically themselves a cabinet of electronics, at a time when the CPU took up two or three cabinets itself). You'd write essentially a small program describing what I/O you wanted and then let the I/O channel controller handle minutiae like the tape drive motor or input from terminals.

So the language would set up data to be slapped to a terminal (this would have been a decade or more after COBOL was written, once terminals with screens were available), which the channel controller would send to the appropriate terminal. It would then deal with input and once all was ready, tell the CPU that it had a bunch of input ready.

Terminals like the 3270 were even half duplex so would process the input and then send it off as a block, to make the channel controllers more efficient!

In the ARPAnet of the 1970s, the ITS PDP-10s and -20s took this further with a protocol called SUPDUP (super duper) which allowed this kind of processing to be done on a remote machine when logging in over the net (as we would do today with ssh). So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer! Even the CADR lisp machines supported this protocol!

* At the time not doing IO through the language itself was considered oddball. I seem to recall a line in the original K&R where they described C's I/O and made an aside, '("What, I have to call a function to do I/O?")'

macintux
Almost 25 years ago I took a TCP/IP class in Chicago that turned out to be a waste of my time, because it primarily dealt with clients like ftp, telnet, etc that I was already quite familiar with.

However, the rest of the students were mainframe guys, so it was an interesting few days. My (Solaris, Linux) world seemed incomprehensible to them, and vice versa, but it was nice to get a glimpse into how the other half lived. Finding experienced computer people who’d never used ftp was quite surprising.

BiteCode_dev
Separating the IO from the rest of the program is always a good practice, that we keep rediscovering.

E.G:

- UIs in React try to be just rendering functions with one input and one output. Even more so with hooks.

- in Python, there is a new trend of providing "Sans I/O" libraries (https://sans-io.readthedocs.io/), and async/await are not just keywords, but a generic protocol created to delegate I/O to something outside your code.

It's interesting to see that on very old systems, even the hardware was organized that way.
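
The pattern is easy to express even in plain C (a made-up newline-delimited protocol, purely illustrative): the parser only ever sees bytes handed to it, and whoever owns the socket, file, or test harness decides how those bytes arrive.

  #include <stddef.h>

  /* Sans-I/O style: feed() never reads from a socket or file itself. The caller
   * pushes whatever bytes it has, complete newline-terminated lines are handed
   * to the callback, and partial lines stay buffered until the next feed(). */
  struct line_parser {
      char buf[1024];
      size_t len;
  };

  typedef void (*line_cb)(const char *line, size_t len, void *user);

  static void feed(struct line_parser *p, const char *data, size_t n,
                   line_cb on_line, void *user)
  {
      for (size_t i = 0; i < n; i++) {
          if (data[i] == '\n') {
              on_line(p->buf, p->len, user);
              p->len = 0;
          } else if (p->len < sizeof p->buf) {
              p->buf[p->len++] = data[i];
          }
      }
  }

The same feed() can then be driven by a blocking read loop, an epoll callback, or a unit test with a hard-coded string, which is the whole appeal of keeping I/O out of the protocol code.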

derefr
Oddly, that reminds me a lot of early game consoles! You'd just set up some sprite registers, or send some commands to music sequencer hardware, and then those would go on and persistently do things on their own while the CPU got on with the "business logic" of running a game.

It's funny how it was the systems in the middle (minicomputers, like the PDP-11 where C originated) that did everything on the CPU, whereas the systems on the high end (mainframes) and low end (microcomputers) both split the work out, for different reasons: mainframes pushed IO out to independent coprocessors for multitenant IOPS parallelism (can't make any progress if the CPU gets an IO interrupt every cycle); while early microcomputers pushed IO out to independent coprocessors to retain the "feeling" of real-time responsivity in the face of an extremely weak CPU!

gumby
I never thought of a sprite chip this way but your message makes me think I should have. A sprite chip was a kind of coprocessor like an external floating point unit or a GPU.
goatinaboat
In the Amiga it was called the Copper which was an abbreviation of co-processor.
a1369209993
A sprite chip literally is a GPU [Graphics Processing Unit], just for 2d graphics (sprites,tiles,color palettes,etc) rather than 3d (vertex,triangles,fragment shading,etc).
gumby
Literally true.

I was thinking of the GPUs we had in those days which were often a couple of VME cards or even a small cardcage, but you’re right: that physical distinction isn’t really relevant.

derefr
Mind you, a PPU is specifically, in most cases, more like an iGPU (integrated GPU, like Intel's on-CPU graphics.) Most of the PPU chip designs tended to share a memory (both physically and in address-space terms) with the CPU, such that the CPU would write directly to that memory, and then the PPU would read back from that memory. (This was when the PPU had memory at all; often they were pixel-at-a-time affairs, doing just-in-time sprite-tile lookups from the ROM pointed to by their sprite-attribute registers.)

Most things we call "coprocessors", on the other hand, were a bit different: they had their own on-board or isolated-bus memory, which only they could read/write to, and so the CPU would interact with them with "commands" put on a dedicated command bus for that coprocessor. Most sound chips (down to the simplest Programmable Interval Timer, but up to fancy chips like the SNES's SPC700) were like this; as were storage controllers like memory cards and the PSX's CD drive.

DonHopkins
SUPDUP and EMACS supported "line saving", so Emacs could send a control code to tell a SUPDUP terminal to save lines of text in off-screen buffers, and restore them to the screen, so EMACS only had to send each line one time, and the SUPDUP terminal could quickly repaint the screen as you scrolled up and down through a buffer.

Here's my Apple ][ FORTH implementation of SUPDUP with line saving (%TDSAV, %TDRES), which saved lines in the expansion ram card.

https://donhopkins.com/home/archive/forth/supdup.f

ffhhj
Now in HTML5 you have to learn 3 languages to do that.
dmix
HTML and CSS are more markup languages than actual languages. You can work at varying degrees of abstraction.

There are thousands of template sites being used today with JS plugins the developer probably wouldn’t know how to write. But the minimal interface layers are sufficient to get them to work.

Stuff like Select2 and Bootstrap's collection of plugins cover a broad range of the interactivity most people need on the internet.

zitterbewegung
What you are describing sounds like the inverse of TRAMP in modern GNU emacs.
rbanffy
Or an HTML form.
gumby
SUPDUP was far more dynamic than an HTML form (read RFCs 734 and 749).

HTML forms are more like the half-duplex terminals of the 70s/80s.

DonHopkins
You say that like it's a bad thing! ;)

(Or do you always sound that way?)

gumby
In a way. We had that too, with a networked filesystem in the 1970s so you could simply open a remote file in ITS EMACS (or any other program). It was handled by the O/S, or rather a user space program like today’s FUSE.
larsbrinkhoff
But the bulk of the remote file system code was in user space.
m463
I remember someone complaining about unix machine in comparison to mainframes:

"It generates an interrupt every time you press a key!"

goatinaboat
VMS systems on LAT networks only received a packet/interrupt per line from the terminal, that’s how they were able to support many times more users than Unix contemporaries.
gumby
Unix can be configured this way -- it used to be the default, with # as the rubout character and @ as the line-delete character. It's still in the tty driver and can be useful when programming from a teletype (tty).
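
For the curious, this old behaviour can still be reproduced on a modern system. The classic incantation is "stty erase '#' kill '@'"; a rough equivalent via Python's standard termios module is sketched below (run it against a real tty at your own risk):

    # Sketch: restore the old '#' rubout / '@' line-kill behaviour on the
    # current terminal, roughly equivalent to: stty erase '#' kill '@'
    import sys, termios

    fd = sys.stdin.fileno()
    attrs = termios.tcgetattr(fd)
    attrs[6][termios.VERASE] = b"#"   # erase (rubout) character
    attrs[6][termios.VKILL] = b"@"    # line-delete character
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
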
goatinaboat
I learnt something today - thanks. I’ve never seen a Unix system configured like that, in over 25 years of doing this!
m463
Thank goodness. I remember it was common that the login screen on a terminal would be configured for # as backspace, while backspace would actually DO a backspace but still be entered into the buffer.

I suspect it was a remnant of hardcopy terminals.

gumby
That system was inherited from Multics, but the limited number of interactive computers in those days were primarily used from printing terminals, which couldn't actually erase a character, so all of them had some such facility.
DonHopkins
At UMD, Chris Torek hacked ^T support into the 4.2 BSD tty driver, inspired by TOPS-10/TWENEX's interrupt character that displayed the system load, current running job, etc.

But the first version didn't have any de-bouncing, and would process each and every ^T it got immediately (each of which had a lot of overhead), so on a hardwired terminal you could hold the keys down and it would autorepeat really fast, bringing the entire system to its knees, while you could even watch the load go up and up and up!

And of course whenever the system got slow, everybody would naturally start typing ^T at once to see what was going on, making it even worse.

That was a Heisenbug, where the act of measuring affects what's being measured, with an exacerbating positive feedback loop. He fixed the problem by rate-limiting it to once a second.

https://en.wikipedia.org/wiki/Heisenbug

https://en.wikipedia.org/wiki/Positive_feedback
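
The fix amounts to plain rate limiting. A rough sketch of the idea in Python (not the actual BSD tty driver, which is kernel C; the status line shown is a placeholder):

    # Rough sketch of the rate-limiting fix: expensive status reports are
    # dropped unless at least one second has passed since the last one.
    import time

    _last_report = 0.0

    def on_status_key():
        """Called for every ^T received."""
        global _last_report
        now = time.monotonic()
        if now - _last_report < 1.0:
            return                        # auto-repeated ^T's are ignored
        _last_report = now
        print("load: 1.23  job: emacs  state: running")   # placeholder output
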

msla
> Terminals like the 3270 were even half duplex so would process the input and then send it off as a block, to make the channel controllers more efficient!

The block mode terminals (like the 3270) were/are kinda like HTML forms: The mainframe sends a form to the terminal, and the terminal has enough local smarts to know how forms work, that only some regions of the form are writable, and how to send a response back one form at a time, as opposed to the character-at-a-time terminals which Unix and VMS and ITS were built around. There's a lack of flexibility, but it allows mainframes to service tons of interactive users, for a certain definition of interactive.

The Blit terminal was the next step beyond block mode terminals, in some sense: Blits could be character-cell terminals with fundamentally the same model as the VT100, but they could also accept software in binary form and run interactive graphical programs locally. Think WASM, only with machine code instead of architecture-independent bytecode.

https://en.wikipedia.org/wiki/Blit_(computer_terminal)

> When initially switched on, the Blit looked like an ordinary textual "dumb" terminal, although taller than usual. However, after logging into a Unix host (connected to the terminal through a serial port), the host could (via special escape sequences) load software to be executed by the processor of the terminal. This software could make use of the terminal's full graphics capabilities and attached peripherals such as a computer mouse. Normally, users would load the window systems mpx (or its successor mux), which replaced the terminal's user interface by a mouse-driven windowing interface, with multiple terminal windows all multiplexed over the single available serial-line connection to the host.

> Each window initially ran a simple terminal emulator, which could be replaced by a downloaded interactive graphical application, for example a more advanced terminal emulator, an editor, or a clock application. The resulting properties were similar to those of a modern Unix windowing system; however, to avoid having user interaction slowed by the serial connection, the interactive interface and the host application ran on separate systems—an early implementation of distributed computing.

That was 8th and 9th Edition Research Unix; it was an influence on Plan 9, which took the distributed GUI computer system concept and ran with it.

> So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer!

Also, ITS had the neat feature of detaching job trees: You could login, get your own HACTRN (the hacked-up debugger ITS used as a shell), run a few other programs which would then be children of that HACTRN job, and detach the whole tree and logout. When you logged back in, you could re-attach the tree and carry on like nothing happened. It's kinda like screen or tmux.

DonHopkins
>Also, ITS had the neat feature of detaching job trees: You could login, get your own HACTRN (the hacked-up debugger ITS used as a shell), run a few other programs which would then be children of that HACTRN job, and detach the whole tree and logout. When you logged back in, you could re-attach the tree and carry on like nothing happened. It's kinda like screen or tmux.

You could also detach any particular job sub-tree, and other users could reattach it. Useful for passing a live ZORK or LISP or EMACS back and forth between different logged-in users. "Here, can you fix this please?"

There was also a :SNARF command for picking a sub-job out of a detached tree (good for snarfing just your EMACS from your old HACTRN/DDT tree left after you disconnected, and attaching it to your current DDT).

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

It helped that ITS had no security whatsoever! But it had some very obscure commands, like $$^R (literally: two escapes followed by a control-R).

There was an obscure symbol that went with it called "DPSTOK" ("DePoSiT OK", presumably) that, if you set it to -1, allowed you to type $$^R to mess with other people's jobs, dynamically patch their code, etc. (The DDT top level shell had a built-in assembler/debugger, and anyone could read anybody else's job's memory, but you needed to use $$^R to enable writing).

Since ITS epitomized "security through obscurity", the magic symbol DPSTOK was never supposed to be spoken of or written down, except in the source code. But if you found and read the source code, then you passed the test, and deserved to know!

There was a trick if you wanted to set DPSTOK in your login script (which everyone could read), or if somebody was OS output spying on you (which people did all the time), and you wanted to change their prompt or patch some PDP-10 instructions into their job without them learning how to do it back to you.

The trick was to take advantage of the fact that DPSTOK happened to come right after BYERUN. So you could set BYERUN/-1, which everybody does (to run "BYE" to show a joke or funny quote when you log out), then type a line feed to go to the next address without mentioning its name, then set that to -1 anonymously.

So knowing the name and incantation of the secret symbol implied you'd actually read the DDT source code, which meant you had high moral principles, and were qualified to hate unix, which didn't let you do cool stuff like that. ;)

https://github.com/larsbrinkhoff/its-archives/blob/master/em...

    From: "Stephen E. Robbins" <[email protected]>
    Date: Thu, 21 Dec 89 13:25:56 EST
    To: CENT%[email protected]
    Subject: Where unix-haters-request is

       Date: Wed, 20 Dec 89 22:13:42 EST
       From: "Pandora B. Berman" <CENT%[email protected]>

       ....Candidates for entry into this august body must either prove their
       worth by a short rant on their pet piece of unix brain death, or produce
       witnesses of known authority to attest to their adherence to our high
       moral principles..

    Does knowing about :DDTSYM DPSTOK/-1 followed by $$^R qualify as attesting
    to adherence of high moral principles?

    - Stephen
Here are the symbols in the source:

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

    BYERUN: 0 ;-1 => RUN :BYE AT LOGOUT TIME.

    DPSTOK: 0 ;-1 => $$^R OK on non-SYS jobs
Here's the $$^R handling code that slyly prints out " OP? " to pretend it didn't understand you. (In case anybody's watching!)

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

    N2ACR: SKIPN SYSSW ;$$^R
          jrst n2acr0 ;  Not the system?
        SETOM SYSDPS
        jrst n2acr9

    n2acr0: skipn dpstok ;Feature enabled?
          jrst n2acr9 ;  nope
        skipe intbit(U) ;Is it foreign?
          jrst n2acr9 ;  no, either SYS (special) or our own (OK anyway!)
        movei d,%URDPS
        iorm d,urandm(u)  ;turn on winnage
    n2acr9: 7NRTYP [ASCIZ/ OP? /]
rbanffy
It's quite clever, actually. Offloading specialized work to autonomous subsystems allows the CPU to run your code better than it would if the CPU also had to deal with reading from disk or assembling packets for the network. A modern mainframe has tons of such systems and that's what allows them to process volumes of transactions that PC-based servers of the same price range can't.

In fact, our PCs are like that because they had to be cheap and the cheapest way is to burden the CPU with all work.

zozbot234
These days, the "autonomous subsystem" is a microcontroller at the other end of a USB, SATA or PCI bus. Modern network cards also need to perform most of their packet-assembly operations on device and not in the kernel, or they would never reach their advertised max bandwidth.
rbanffy
Those controllers are still designed prioritizing cost over performance most of the time.
gumby
Well it’s system cost — the controllers take the high frequency interrupts, not the cpu.

As for BOM...a few years ago we designed a big serial board (industrial control) and found it was much cheaper to buy AVR cpus and use just the onboard UARTs than to buy UART chips.

gumby
Actually I’d disagree a bit with your last sentence. Handling I/O interrupts in the kernel was a property of cheap machines like minicomputers that you could buy for a few hundred $k or less. You typically couldn’t afford extra channel controllers on cheap machines like that. Hence Unix’s I/O architecture was driven by the constraints of the PDP-7. Multics had a standard I/O controller architecture.

PCs of course had the same issue — down to the CPU controlling the speed of disk rotation! But a modern PC has an I/O system more complex than the mainframes of old, with network interfaces that do checksum processing, handle retransmission and packet reassembly, etc., just DMAing the result. Disk drives, whether spinning or SSD, have a simple block interface unrelated to what the storage device is doing, and so on. I think this is all as it should be, though I personally consider the Unix I/O model grossly antiquated.

DonHopkins
If /dev/zero is an infinite source of zeros, then why doesn't the minor device number specify which byte to use? Then you could make character special file /dev/seven that was an infinite source of beeps! There have been so many times I needed that.
gumby
Star Wars is more advanced and had this. How do you think R2D2 wasn’t cut from that film? He had an advanced case of Tourette’s so everything had to be beeped out.

Imagine if they’d run out of beeps during filming!

rbanffy
Great idea! Also, for when you need some paper, you can pipe /dev/twelve to your printer.

We should propose a kernel patch for this next April.

Extra credit if we get it rolled into POSIX.

acdha
This had some neat benefits: in the 90s I worked for a COBOL vendor (Acucorp) who had a bytecoded portable runtime which allowed you to run a binary on systems ranging from 16-bit DOS to Windows NT, most Unix variants, VMS, etc. (our QA matrix had ~600 platforms & versions). The display section meant it could adjust to the platform: on DOS and other consoles, you had text controls but on X11, Win16/32, OS/2, and Mac it had native GUI widgets, with native validation UI. It wasn’t beautiful out of the box but it was familiar, consistent, and accessible.

The same was true of the standard indexed storage: a different runtime could use a SQL database for storage without recompiling the program, which was key to some gradual migrations to Java. A similar feature allowed remapping invoke calls to run on a remote server.

At one point we produced a NPAPI plugin version: install it in Netscape and your client ran the UI while the data access and RPC calls happened on a server. All I can say is that it seemed like a good idea at the time.

downerending
I've not used this, but it reminds me some of the UI of early Lotus 1-2-3 (in text, no GUI). It was amazingly intuitive and easy to navigate, even without a mouse.

Modern web apps, each unique and confusing, with no reliable way to move focus around and with bizarre text entry "improvements", would have made the Admiral weep.

eitland
> Modern web apps, each unique and confusing, with no reliable way to move focus around and with bizarre text entry "improvements"

I'd not vote to go back to the 90s (we've fixed way too many security problems since then), but it would be great if we could all agree that we lost something along the way and try to get it back somehow while keeping the things that work today.

zozbot234
> Modern web apps, each unique and confusing, with no reliable way to move focus around

If accessibility is set up correctly, you can still move focus with TAB and activate widgets/buttons with SPACE or ENTER. No different from any terminal app. If client-side JS is being used, the web page could even display additional forms in response to a keyboard shortcut, with no network-introduced latency.

badsectoracula
> move focus with TAB and activate widgets/buttons with SPACE or ENTER. No different from any terminal app

No different from a terminal app that pretends to be a GUI app :-P. At minimum, many terminal/DOS apps used arrow keys to move between fields, and often just pressing Enter in a field (after it was validated) would move to the next relevant field. Shortcut keys to move around and/or perform context-sensitive functions (e.g. search for product IDs in a field where you are supposed to enter a product ID) would also be available at any time.

I remember my father making a program for his job (for his own use) ~20 years ago in Delphi, and he was manually wiring all the events to move around with the Enter key, like his previous program (written in DOS and Turbo Pascal) did, and was annoyed that he had to do this manually.

TBH I do not believe it is impossible to do this nowadays on the web, but just as it wasn't done in the majority of GUI desktop apps that followed the DOS days, I also do not believe it is or will be done in the web apps being built nowadays. The reasons for this vary, but I think a large part of it is that people simply haven't experienced the other approaches enough to even think about them.
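
For what it's worth, the manual wiring described above still looks much the same today. A minimal sketch in Python/Tkinter (field names are made up), where Enter jumps to the next field instead of "submitting":

    # Sketch: Enter moves focus to the next field, as in old DOS data-entry apps.
    import tkinter as tk

    def focus_next(event):
        nxt = event.widget.tk_focusNext()    # next widget in tab order
        if nxt is not None:
            nxt.focus_set()
        return "break"                       # stop default handling of Return

    root = tk.Tk()
    for label in ("Product ID", "Quantity", "Price"):   # made-up fields
        tk.Label(root, text=label).pack()
        entry = tk.Entry(root)
        entry.pack()
        entry.bind("<Return>", focus_next)

    root.mainloop()
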

int_19h
It wasn't done by the apps, because the standard UX on all those platforms did not include move-focus-on-Enter - and GUI apps are generally supposed to follow the standard.

FWIW I agree that for data entry apps in particular, Enter is just too convenient - and I had to write code to manually implement it in GUI apps, as well.

downerending
Per sibling, even at best, this just isn't in the same league. And I'm doubtful about "best".

And even if it works, why should I need to enable special accessibility features to access a UI mode that isn't utter sh_t?

betenoire
Accessibility is only part of it. You should know where your keys are going to take you, and tabs often become unpredictable and confusing
wgyn
That is a great video! It also reminds me of Dan Wang's essay on How Technology Grows: https://danwang.co/how-technology-grows/. It introduced me to the Ise Grand Shrine, which is a wooden temple that's rebuilt and torn down periodically so that the knowledge of how to maintain it doesn't get lost.
dylan604
>Regarding the underlying theme of legacy systems, here is an interesting presentation by Jonathan Blow, citing various examples of earlier civilizations losing access to technologies they once had

This is why printed physical books are still an important thing in the age of eReaders and the like. Also, the Library of Congress archives audio on vinyl. The ancient analog formats will still be understood in 100 years; today's latest tech won't be. I was just provided a CD-R/DVD-R (didn't look that closely) of x-rays of my cat. I honestly have no way of reading that data, as none of my devices have an x-ROM drive.

karatestomp
External DVD writer (so, also CDs) drives are about $30 now for a decent brand, powered entirely over their USB cable so no power brick to mess with, and are about half an inch thick and barely bigger than the discs that go in them. One can easily live in a junk drawer, unnoticed until needed. I was surprised to discover how tiny and low-power they are now, and how cheap.
dylan604
That's great for today. In 10 years? 20 years? Definitely worthless in 100. Even the $30 today is wasted money for me. I have friends that have antiquated tech laying around in a junk drawer that I could easily borrow if I needed. Instead, I just requested the vet attach the data on the disc in an email. Even cheaper and less tech waste involved.
xigency
That’s funny. I have a Blu-ray Drive only so I can make physical backups of some important things to store.
badsectoracula
Eh, not really a problem. Your CD-R issue is pretty much the same as not having a cylinder phonograph player - a device that is already 100 years old. Sure, if you do not have such a player you cannot play the cylinder, but the audio stored on those cylinders isn't lost: players exist for those who want to listen to them and, given enough money, new ones can be made.

Similarly, you may not have a CD-R or DVD-R reader, but it takes about 5 euros to buy one and access your CD. In 100 years it might cost more, but I'm willing to bet that CD-R and DVD-R readers will still be available in much greater numbers than cylinder phonograph players are. Making new ones, for really important data, will certainly be more expensive than making a new cylinder player, but it won't be impossible - if the data is important enough and for some reason all the millions of CD-R readers in existence nowadays have vanished, a new one will be made.

And analog formats aren't that great either - most of them tend to wear over time, so they have their own drawbacks.

tbyehl
Mr Wizard taught me how to improvise a record player [1]. I doubt our great-grandchildren will be able to improvise a CD reader.

[1] https://www.youtube.com/watch?v=HJa6Ik6xmiU

tantalor
> quickly create user interfaces by declaratively describing them

> everything can be accessed using only the keyboard

Sounds like plain old html forms.

wrs
Indeed, and in fact there are systems that adapt between the two paradigms. For example, taking a system based on IBM 3270 screens and turning it into a series of web pages to enable a web portal for an existing mainframe system.
sjburt
I worked for a company that had one of these ancient (probably mainframe) systems for inventory, business analytics etc. Users at the factory would access it via 3270 terminal emulators. It was clunky but usable.

Until they wanted to stop paying for terminal emulator licenses and replaced it with a web gateway that translated the 3270 pages into HTML forms. That was pretty horrible to use and of course people began “forgetting” to record inventory moves and process steps. This was for six figure aerospace components so at the end of the day someone knew where they all were and what had been done, it just made it a lot harder to bill for milestones and wasted tons of time. Of course the software was like $15 a seat.

acdha
Plain HTML5 forms: they had complex field structures and validation which required JavaScript until we got things like the extra input types and regex patterns.
triska
Yes, at least in some respects, HTML forms could have been that way, but in practice, they are not:

For one thing, and this is important, with every browser I tried, I get periodically interrupted by messages that the browser displays and that are not part of the form. Just recently, I was again asked for security updates by the browser. Sometimes when opening a form it gets prefilled by the browser, sometimes not, and sometimes only some of the fields. Some other times I get asked whether the browser should translate the form. Sometimes it asks me whether some fields should be prefilled. Sometimes when submitting a form the browser asks me whether I would like to store the form contents for later use. Sometimes the browser asks me at unpredictable times whether I would now like to restart it.

An important prerequisite for getting a lot of work done is for the system to behave completely consistently and predictably, and not to interfere with unrelated messages, questions, alarms etc. This is also very important for training new users, writing teaching material etc.

Another important point, and that is very critical as well, is latency: Just recently, I typed text into my browser, and it stalled. Only after a few moments, the text I entered appeared.

I never had such issues with COBOL applications. Today, we can barely even imagine applications that are usable in this sense, because we have gradually accepted deterioration in user interfaces that would have been completely unacceptable, even unthinkable, just a few decades ago.

flyinghamster
The other thing about too many modern user interfaces is that they often get subjected to whatever fad is in vogue this week. Eye candy takes precedence over consistent user interfaces, and gratuitous changes are routine because a new fad has become the Next Big Thing.
tomlagier
The trade-off, of course, is access. Now, computers are cheap and available everywhere. In the COBOL era, what percentage of a family's income would it take to purchase such a machine? How wide-spread were these machines in non-English speaking areas? How easily could a program from one of the machines be used on a machine built by a different vendor? Were said programs resilient to malicious actors?

As always, there are sacrifices, but in general I think we have made rational trade-offs in this realm.

chrisweekly
Not to mention executing the program across different kinds of wired and wireless networks, and handling a huge variety of client OS, user agent, and input modalities.
triska
For instance, ACUCOBOL-GT supports several different operating systems. This means that the COBOL programs are completely portable: You can run the exact same code on different operating systems, getting the exact same results on all platforms, including user interfaces.

The COBOL programs I wrote ran on machines that had a tiny fraction of the computing power we now have in every device, including cell phones and watches.

For this reason, I expect it would have been very easy to use COBOL applications instead of HTML forms on many devices, and trading the former for the latter does not seem rational to me for many use cases I see.

hrktb
You seem to be complaining more about the browser than HTML per se. Why not use another browser? Or even a lynx-like browser?

I think despite all of this, most of us stick with modern browsers because the tradeoffs are worth it. Removing complexity and unpredictability at all costs usually has worse impacts than being annoyed and surprised. Except if your software drives a cockpit or a nuclear plant.

zozbot234
> For one thing, and this is important, with every browser I tried, I get periodically interrupted by messages that the browser displays and that are not part of the form. ...

Pressing the ESC key should dismiss these messages. It's an annoyance to be sure, but not a deal breaker. Similar for pre-filled content, you can just select and erase it.

K0SM0S
Sadly, <ESC>it doesn'<ESC>t really solve t<ESC>he problem of interrupting one's fl<ESC>ow...

At least, such behavior should be a user preference — “get out of my way” is like #1 on most users' lists for a useful toggle.

As for notifications specifically, it's not like we haven't developed 101 notification centers to postpone user response.

The problem, imho, with qualifying this as universally "not a deal breaker" is that it's a slippery slope, and all too subjective to boot — e.g. is Windows auto-rebooting not a deal breaker either? I'm sure to some people, not really...

A showstopper has very different thresholds depending on what you do. Live tasks notably (recording, streaming, etc.) should be protected (down to a "real-time" setting at the thread level, when that isn't actually unstable for some ungodly reason). Chrome is just another OS nowadays, videoconferencing being a prime example of such an app. I wouldn't like Chrome to nag me during an interview, for instance.

If I have to make even just one extra move, gesture, click, or even a look away from my work because the machine demands it (screen or app locked 'behind' otherwise), in effect creating unsolicited gates between me and my workflow... yeah, there's a big problem. 5 minutes saved a day per user for a big corp is millions saved before a month has passed.

Now think that we're collectively, computer users, like one big human corporation. Think of the time we're losing due to bad design. Let that sink in for a minute... It's a huge and stupid cost we impose on ourselves. Must be funny to some, idk...

There is a way to a better UX. We shouldn't need ESC to get there. ;-)

K0SM0S
Of all the woes of modern UI, latency on text input is what I find most irritating. It's physical, visceral, mental, I don't know; when you've been raised by incredibly snappy video games (1980s-1990s, 2D glory), it's asinine to suffer input lag in 2020 on some x86 platform. It's just an alien experience, like time suddenly reversed for userland or something.

In the browser, given the steaming pile of code that some pages have become, I may understand some lag, quirks. They broke the web and Chrome is helping so... Yeah. But in a text editor of all applications? No. I mean, just... no. (Looking at you, atom, vscode... WHY?)

Whatever the hell you're trying to search / display / message (I assume, trying to help me...): please begin by not interrupting my input feedback. That should be, I don't know, the only sane priority for UX, to actually respond to the user? Otherwise the computer feels... broken, subpar, unfit for the task. Am I alone in having these feelings when using devices?

/rant over. I'm not that old for god's sake, 37 and they've got me jaded by UX quality already. All it took was two short decades to flush down the drain the precious good that didn't need fixing.

Ah. I'm sure we'll get back there. Eventually. Even if I have to write it myself (surely, we'll be legions). Someday I wonder what we're waiting for. And then I remember that this is the year of the Linux desktop... It's not that easy, is it?

badsectoracula
I have a feeling that many programmers and engineers are blind to latency. Not in the sense that they are bad programmers/engineers, but in the sense that they simply cannot feel the difference, so they make stuff that introduces latency they themselves cannot perceive.

So you get stuff like Wayland that introduces latency issues and when you try to explain the issue you feel like trying to describe the difference between magenta and fuchsia to a blind person.

K0SM0S
Oh my, I've never thought of it this way. I really get what you mean.

Can't agree more. That might very well be it.

Could it be trained, through exposure, e.g. with video games as I implied?

Or is it maybe related to the "snappiness" of the brain? I think it's been shown that we clearly have different response times, some people being several times faster/slower than others (with no explicit correlation to a "level" of intelligence, more like the characteristics of different engines in terms of latency / acceleration / max speed / torque / etc.).

badsectoracula
TBH I don't know. I was playing video and computer games from a very young age, so it might have to do with that, but at the same time I know I wasn't paying much attention to latency until my late 20s. Even at 25 I'd be running Beryl (one of the first compositors) with its massive latency, and I'd be writing games with software-rendered mouse cursors, both incredibly poor in terms of latency, and I'd just not notice it.

So perhaps it can be learned?

kierank
https://danluu.com/input-lag/
torgian
I gotta agree here. UX seemed so simple just ten to twenty years ago... and still I wonder how the hell people programmed 16-bit video games on such tiny systems.

And everything on the web front is so bloated. Yes, I can make a pretty UX with that, but at what cost? I think that the faster and cheaper computers that we have today have made it easier to overlook how bloated everything has become.

Upside is, it does make a lot of things easier. Downside is, everything is slower and fatter. Mostly because of new frameworks, languages, etc. that all aim to make something easier to accomplish.

And I get it. I understand why we want to make it easier. But again, what are the costs?

K0SM0S
This is pure speculation on my part, but I like to think that there are literally billions to save. It's about the order of magnitude: even 1 minute per year per person, across a few billion computer users, amounts to several million person-days (tens of millions of person-hours!).

1 minute per year!

zozbot234
> And everything on the web front is so bloated.

Um, you're posting this on a site that's basically the polar opposite to "bloat" on the web. Badly-engineered systems have always existed somewhere; even the COBOL-based proprietary solutions referenced in the OP are a fairly obvious example of that.

torgian
Ok, then the _majority_ seems to be bloated. And I feel like this is the new normal.

HN, in my opinion, is an island of simplicity in a sea of complexity.

Pmop
Kinda, but not entirely, off-topic: Low-tech Magazine is all about solar power these days, but previous issues brought back to light technologies we once used that would probably still be useful today; plus, you get to save power, lower your carbon footprint, and so on.

https://www.lowtechmagazine.com/

scythe
>For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so called SCREEN SECTION.

>The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.

It's not common anymore, but Unix clones still have dialog(1):

https://www.freebsd.org/cgi/man.cgi?query=dialog

apfsx
Do you have a video that shows an example of this text entry system? I'm interested in seeing this.
Zenst
In the early 80's we had terminals (Newberry being one brand, iirc) that would talk to the mainframe, in this instance a Honeywell Bull DPS8 range, and these terminals had block mode: you could just type away, using the cursor keys to navigate, and key in all your code or field attributes as you would for a transactional screen-layout input interface. Then, having effectively edited and dealt with everything locally, you could hit send into the edit buffer, which gave you much of a WYSIWYG form of input and layout for screens.

So many of the ways of doing screen editing and text entry in the really early pre-PC days came down to the terminals and that local-cursor, send-the-whole-screen block-mode style of editing. Unlike character mode (the very early systems' way of polling terminals), which in effect had to handshake each and every keystroke. Though even some of these terminals could be configured to allow a full-screen editing mode and send via character-mode batch polling. Terminals were not cheap back then either, and when the PC came out, terminal emulation software was one of the big sellers in some markets; buyers would spend a lot on the early PCs and more on this software, as it was cheaper than the dedicated terminal offerings.
triska
Sometimes I see it on terminals for instance when checking into a hotel, opening an account in a bank, booking a flight, interacting with the tax administration etc.

One important point that makes them so efficient to use is that everything can be accessed using only the keyboard.

For my personal use, I have simulated such forms using Emacs, and especially its Widget Library:

https://www.gnu.org/software/emacs/manual/html_mono/widget.h...

This is a bit similar to what you get with a SCREEN SECTION in COBOL.
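
A keyboard-only entry screen of this kind can also be sketched with Python's curses module. It is only a loose analogy to a COBOL SCREEN SECTION (which is declarative, whereas this is imperative); the field names are made up and it assumes a Unix terminal:

    # Sketch: a tiny keyboard-only entry form in curses.
    import curses

    FIELDS = ("Customer", "Invoice no", "Amount")   # made-up fields

    def form(stdscr):
        curses.echo()                    # show typed characters
        values = {}
        for row, label in enumerate(FIELDS):
            stdscr.addstr(row, 0, "%12s: " % label)
            values[label] = stdscr.getstr(row, 14, 30).decode()
        return values

    if __name__ == "__main__":
        print(curses.wrapper(form))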

epc
Could also be ISPF dialogs (https://en.wikipedia.org/wiki/ISPF, https://www.ibm.com/support/knowledgecenter/zosbasics/com.ib...).
userbinator
Anyone who has used the BIOS setup function of a PC before the EFI bloat took over will find that interface familiar; it's certainly easy to use and quite efficient.

Here's an example screenshot: http://www.buildeasypc.com/wp-content/uploads/2011/11/step12...

adzm
I was blown away by some reporting systems in COBOL that were surprisingly understandable and simple.
wglb
The only way I found COBOL palatable was to spend the better part of a year programming in RPG III first.
icedchai
If only it were so easy to do that with modern web technologies. Instead, you write dozens (if not hundreds) of lines of HTML/CSS/JS just to do a simple form post. And it feels like it's gotten worse over the years.
dzonga
Thanks for the Jonathan Blow talk. Very informative and good to reflect on. E.g., in software, younger devs don't generally accept the advice of the older generation, hence we keep reinventing tech instead of iterating. Examples being serverless vs PHP deployment, k8s, React, etc.
thereyougo
Wow, I was sure that COBOL was not being used any more, since the new programming languages are just better in every aspect.

What's the reason that COBOL faded over the years?

goatinaboat
the new programming languages are just better in every aspect.

Well, you should question your assumption that the new languages are better. They aren’t, they are just more fashionable. You can see this in every aspect of life, not just programming. Previous generations liked high-quality products that would last. Modern day people like cheap objects that soon break and are thrown away and replaced with something equally cheap and flimsy.

There’s a reason that COBOL (and FORTRAN) code is still running 50 years later, and that last year’s JavaScript already needs to be rewritten.

onei
It depends how you quantify "better". If you mean easier to hire developers for, provides higher levels of abstraction, etc., then modern programming languages are generally better. If you value stability and cost, as those who own these COBOL systems do, then you don't care what it's running on, and what you have is good enough until it suddenly isn't. 50 years is a long time to find bugs, and modern applications don't have anything close to that level of hardening.
goatinaboat
you mean easier to hire developers in, provides higher levels of abstraction, etc. then modern programming languages are generally better

That’s a subtle distinction, in JavaScript a high level of abstraction means hiding the details of the DOM and how the interface works, whereas in COBOL or FORTRAN what you’re really abstracting is the problem domain itself.

Availability of programmers is partly driven by the market, but largely by programmers themselves, who try to avoid mature and established technologies and chase the hottest new trend. So it's true what you say, but it doesn't happen because it has to; it happens because people want it to.

hacker_9
Have you used imgui? Easily the fastest way to create a GUI since the dawn of computing; COBOL's SCREEN SECTION really has nothing on it.
analognoise
Compared to Tkinter (or even straight Tk)?

If you want something native that I think blows both out of the water: FreePascal/Lazarus.

okareaman
Everything Old is New Again
m463
Little-known fact - COBOL is very efficient - in terms of errors.

I recall it won an award for the most error messages from the fewest lines of code.

I think a program with a period in column 6 run through the IBM COBOL compiler would generate either 200 or 600 lines of error messages.

tynpeddler
At one of my previous jobs, I was responsible for designing and building a webapp to replace a high-speed data entry system, written in COBOL, that was used to input large legal documents (several thousand lines of data in some cases). We had two objectives: data entry had to be as fast as or faster than the old system, and the training time on the new system had to be shorter.

We blew both objectives out of the water. Training time went from 3-6 months to about 2 weeks. We had a variety of modern UI concepts like modals, drop downs and well behaved tables that dramatically simplified the work flow. We also had a full suite of user defined hotkeys, as well as a smart templating system that allowed users to redefine the screen layout on a per use case basis (the screen would be reconfigured based on what customer the user was entering data for example).

For performance, cobol typically requires a server round trip every time the screen changes. We simply cached the data client side and could paginate so quickly that it actually caused a usage problem because users were not mentally registering the page change. The initial screen load took slightly longer compared to the cobol system, but the cobol system required 40 screens for its workflow whereas our system could do everything on a single screen.

I guess my point is that modern systems are capable of much better performance and user ergonomics than cobol systems. We have a lot more flexibility in how UI's are presented that let us design really intuitive workflows. The flexibility also lets us tune our data flow patterns to maximize performance. But most modern development processes do not have this kind of maniacal focus. Systems don't perform because most product owners don't really care that much at the end of the day. Once you care enough, anything's possible.
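
The caching trick generalises well beyond any particular framework. A tiny sketch of the pattern (names made up; a real client would prefetch in the background rather than inline):

    # Sketch: serve pages from memory, fetch on miss, prefetch the next page.
    class PageCache:
        def __init__(self, fetch_page):
            self._fetch = fetch_page      # e.g. one server round trip per page
            self._pages = {}

        def get(self, n):
            if n not in self._pages:
                self._pages[n] = self._fetch(n)
            if n + 1 not in self._pages:
                self._pages[n + 1] = self._fetch(n + 1)   # prefetch next page
            return self._pages[n]
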

nulbyte
> For performance, cobol typically requires a server round trip every time the screen changes.

Round trips aren't the problem. Waiting for IO is. CICS solves this by not waiting for IO. It dumps the screen and moves on. Then it becomes the terminal's responsibility to wake CICS up with an attention key. If a front-end application is stuck waiting for terminal input, it's written wrong.

earthboundkid
> Once you care enough, anything's possible.

People should bring back sigs, so that can become a meme.

nivethan
Hi, I'm curious how you designed the app. Did you use the existing application as a base and try to duplicate it on the web, or was it a completely different beast? Do you have any book recommendations or even blog posts on how to upgrade legacy UIs? It's a topic I'd love to dive more into, and I don't hear as many success stories as I would like.
fareesh
How long did it take? What frameworks/libraries did you use?
tynpeddler
We used Angular 1. Took about 2 years, but it was just me for about half that time and we were all learning js and the web ecosystem. The new system had been prototyped a few years earlier using java swing. That really helped nail down the design requirements.
momofarm
Did this meet the expected budget, compared to continuing to maintain the existing COBOL system?
aetherspawn
In my last job we wrote a front-end that ran the COBOL and streamed the presentation information to the client over a websocket. It allowed them to keep using 30+ million LOC of existing apps.

The resulting webapps were made automatically responsive for the web and could run anywhere - desktop, tablets, phones - and apps could open tabs that invoked other apps for multitasking. We also added some custom commands to the COBOL so that they could invoke web-based graphs, reporting, printing, PDF generation, etc. The client supported native typeahead, so the web apps behaved very much like desktop apps: pressing a number of shortcuts in rapid succession resulted in them being played back in order, overcoming any latency. This made the apps completely superior to normal web apps for their application (i.e. POS, ERP systems).

The utility that COBOL provided, coupled with a modern web based runtime written in React, was remarkable. Truly a hybrid of both the best parts. When I left, they were working on wrapping the React webapp into a React Native app.

murphy214
For "presentation information"are you saying you just stream back the normal text on the screen (for that piece of information in the COBOL app) and then parse it into some sort of API?

I have no idea about COBOL at all but I've done something like this before with a client mainframe scripting/macro language, it was not fun. Basically I had to hard code a bunch of key inputs to get the information screen I needed finally read that screen back out in plain text and parsing that into some sort of structure. It was a mess but worked for what it required at the time.

aetherspawn
Things like buttons and windows are streamed across.
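
Loosely, that architecture amounts to translating the legacy screen into structured presentation messages that a web client renders. A toy sketch of the translation step (the message format and names here are invented, not the actual product's protocol):

    # Toy sketch: turn legacy screen fields into JSON messages a browser
    # client could render; in the real system these would be pushed over
    # a websocket, and keystrokes would flow back the same way.
    import json

    def screen_to_messages(fields):
        """fields: (row, col, label, editable) tuples from the legacy screen."""
        for row, col, label, editable in fields:
            yield json.dumps({
                "type": "widget",
                "kind": "input" if editable else "text",
                "row": row, "col": col, "label": label,
            })

    for msg in screen_to_messages([(1, 1, "Customer", True), (1, 40, "Total", False)]):
        print(msg)
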
This issue highlights one of my main fears about a pandemic such as COVID-19: if enough people with the necessary amount of knowledge to maintain necessary infrastructure die without sufficiently and timely trained replacements, then civilization as we know it becomes one more step closer towards total irrecoverable collapse.

The prevalence of COBOL and other older programming languages in many parts of the world's critical infrastructure (unemployment claims here, finance, government, defense) means that the average age of someone maintaining these systems skews older. Older people tend to have more health issues. From the summaries of reports I read about COVID-19, the majority of the deaths happen among the elderly and those with other health conditions.

The idea of civilization collapsing might seem fanciful and farfetched, but the idea struck me after watching a video that was submitted here on HN and the videos it references [0][1][2][3].

[0] Jonathan Blow - Preventing the Collapse of Civilization (English only) - https://www.youtube.com/watch?v=pW-SOdj4Kkk

[1] Preventing the Collapse of Civilization [video] - https://news.ycombinator.com/item?id=19945452

[2] https://www.youtube.com/watch?v=OiNmTVThNEY

[3] Eric Cline | 1177 BC: The Year Civilization Collapsed - https://www.youtube.com/watch?v=hyry8mgXiTk

AmericanChopper
For all of these large, old organisations that still use COBOL, the risk isn’t that they run out of COBOL engineers to hire. COBOL is a very simple programming language; it was designed to be as simple as SQL. SQL was designed to be so easy that non-programmer business analysts could use it, and COBOL was designed to be the same thing, but for non-DB business logic. Though you could debate how successful either of them was.

The risk these organisations face is finding people capable of maintaining their particular decades old COBOL spaghetti. Which is really just an ordinary key person risk.

throw_m239339
The irony for me is that SQL is the language that has stood the test of time the most. All the others are or were a gamble.

Obviously it's not fit for all domains, since it's a DSL for data management, but I'm often tempted to push as much business logic into the DB with PL as I'm allowed to, instead of doing things in the application code itself, and it has paid off many times: I have rarely seen a project move from one database to another, while application code is often rewritten in many different languages for no other reason than management's whims. I probably saved some businesses millions of dollars in code porting/rewriting/migration.

> The risk these organisations face is finding people capable of maintaining their particular decades old COBOL spaghetti. Which is really just an ordinary key person risk.

In my experience, the biggest issue is the complete lack of code documentation, as organisations rely on a handful of developers to maintain their codebases, and when these developers retire, suddenly nobody can understand the architecture.

scarface74
I actively avoid companies that put all of their logic in the DB. It’s harder to version control, harder to unit test, and much more work to roll back. You can’t keep it in sync with the source code, etc.

I can’t just switch branches and know the state of the system at the time the branch was created.

throw_m239339
You're making good points.

First, I'm talking about a PL "middleware" layer that does not necessarily involve restructuring the data itself (basically, instead of querying tables or views, you query stored procedures or functions).

When it comes to version control, wouldn't a migration system suffice to assess the state of that layer?

As for unit testing, I kind of agree; it's possible though, as in any other programming language.
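
As a rough illustration of what such a migration system could look like for a stored-procedure layer, here is a minimal sketch over a DB-API connection (the table name, file layout, and placeholder style are assumptions, not any particular tool):

    # Sketch: apply numbered .sql files (each holding one statement, e.g.
    # CREATE OR REPLACE PROCEDURE ...) in order and record what was applied,
    # so the procedures in the DB track what is in version control.
    import pathlib

    def migrate(conn, directory="db/procedures"):
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")
        cur.execute("SELECT name FROM applied_migrations")
        done = {row[0] for row in cur.fetchall()}
        for path in sorted(pathlib.Path(directory).glob("*.sql")):
            if path.name not in done:
                cur.execute(path.read_text())
                # '?' placeholder is sqlite-style; other drivers use e.g. %s
                cur.execute("INSERT INTO applied_migrations (name) VALUES (?)", (path.name,))
        conn.commit()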

scarface74
That doesn’t help. You can’t imagine the times I see

  Get_Customer_1
  Get_Customer_2
  Get_Customer_3
As far as version control goes, if someone makes a change to the unversioned “Get_Customer” and I want to switch branches, how do I make sure all of the stored procedures are in sync?
AmericanChopper
This is pretty much my experience with stored proc codebases too. I do consider SQL to be a very simple language, but stored proc codebases tend to become incredibly complex. Finding an engineer who can maintain SQL code is very easy. Finding one who can maintain a particular maze of 1000+ stored procedures, views, functions, table triggers... is a completely different question.
Zenst
Many generational skills die out. Your grandparents would make their own butter, bread, and many other things that are not so common today. More than that, many skills have been lost over time, and we are still working out how some historical things were created; you can imagine a site like Rosetta Code being a good thing to keep alive for the future. http://rosettacode.org/wiki/Rosetta_Code
mirimir
> Your grandparents would make their own butter, bread, many things today that are not so common.

I've never made butter, but I have used raw milk. And I've failed at whipping cream by overdoing it. So from what I've seen, "making butter" is basically just agitating cream until the fat clumps, and then pressing out the buttermilk.

Making bread, I admit, takes more skill.

My mother knew how to make blood pudding. But that's not something I miss very much.

bcrosby95
The difference you're missing is we're still relying on COBOL, but we aren't relying on grandparents to make the butter and bread we eat lest we starve.
Zenst
If you think of the butter as software running upon a computer, and the process of manufacture as the programming language, then you start to see it differently.

Though my main point was that skills get lost over time for various reasons, and later you try to rediscover those skills for various reasons. I dare say that being able to make butter would be handy in some situations; imagine being isolated, with easy access to cow's milk, and your local shop not having any butter in stock for a while.

Grimm1
Not ideal, but it's not like it couldn't be migrated to another language and COBOL docs are readily available to at least understand the language. We would find a way.
Zenst
I've worked for a software migration company, and for some languages and the bulk of code you can automate the migration. However, you also need to migrate the data and then anything that touches that data, so while you may want to migrate one program, that program will be part of a suite of programs and systems, making it a more complicated task than it appears on the surface.

If you can warehouse aspects of the data off to the side, that's great, and you can slowly bite away, migrating chunks bit by bit. Though one of the last pieces will often be the database itself. Report generation is often an area that hits resources hard and can also be one of the easiest to migrate away towards something better, using a system that periodically syncs the database onto another platform with all the latest do-it-yourself, easily accessible tools. That allows those wanting the reports to make and change them themselves, removing a lot of code development and main-system overhead.

With that, knowing the language is not even half the battle, the business knowledge would be the lion's share and to integrate the two skills effectively, that's where the money is.

You could have a payroll system and migrate it line by line, logic by logic, from COBOL to C. Yet the way the data is handled, and the nuances of how the languages round numbers, store data, handle formats, exceptions, or early termination, mean it's not just a case of setting the right flags in the compiler and calling it done.

But yes, there is always a way. As always, the devil is in the detail.

Kye
One solution I saw somewhere here on HN the last time it came up was to carefully virtualize the existing systems so they have snapshots to restore from when things go wrong. That opens the possibility of moving the data to more modern systems with a translation layer between the 40 year old systems and the new systems that slowly replace them.
synack
Given enough time I'm sure we can figure it out. The problem is that they're not looking for archaeologists, they're looking for people who lived among the dinosaurs.
pjc50
Like Gibson's remark about the future already being here but not evenly distributed, the collapse is already happening but unevenly distributed.

Puerto Rico was without electricity for months. Some parts of the West will recover from the economic effect of the pandemic quickly. Others won't, without help from the center.

selimthegrim
How is that different from say 1918? Centralization?
cptskippy
> then civilization as we know it becomes one more step closer towards total irrecoverable collapse.

I love how pop culture always portrays the collapse as coming from our inability to maintain or repair some mythical machine or technology. Never once did I imagine that it was the state welfare software.

vegetablepotpie
Jonathan Blow has a point that we cannot assume things will keep getting better. Although he says that we cannot fix software because we have chronically not done it for decades, his thesis is that the solution is to make software simpler. The issue I have with making things simpler is that if we don’t know how to fix things because we haven’t done it for decades, how can we make things simpler, when we haven’t done THAT for decades either? His argument rejects human exceptionalism, while also relying on it for his solution.

My point is that this absolutely is a funding and priority issue. I work as a developer who maintains legacy FORTRAN code. It was written in an era when global variables and GOTO statements were the norm. Everyone who wrote it is retired or dead. It’s a pain to work with, and there are parts that I haven’t yet gathered the courage to go in and change. However, my team and I have made substantive changes to it that are robust and that we have rigorously tested. This is not impossible, but it’s also not cheap or fast. It took me a year to understand how this blob of code works.

My team is young: we’re in our 20s, with backgrounds in CS, mechanical engineering, and physics. Certainly all systems are different, but if we can understand old FORTRAN, similar people can understand old COBOL. If people made it, people can understand it. We also know how to fix many of the broken things and many of the bugs. Often we’re told by management “not right now” and “we don’t have funding for that.” It’s frustrating, but that’s how the world works. They at least have funding to pay us to do what we’re doing now.

The point is that you can fix legacy systems, you can hire new people to do it, but it is expensive and it takes time. The whole issue is not whether we can or can’t, we can. The issue is: who can pay for it and will they?

pm90
This is a very hard problem. Not a silver bullet but I’ve often found that metrics are the best way (again not guaranteed) to get the money (wo)men on board. If you can demonstrate concretely how the legacy code affects your operational readiness or agility, it might make it a better sell to invest in refactoring. However the lack of standard tools to do this is a problem.

The problem with code is that most non technical companies don’t have a clue how this thing works. They use analogies. And for most folks the analogy is that of a machine that once built never breaks down. If you can demonstrate viscerally how bad the system is, it can help get better support.
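As one concrete (and hypothetical) example of such a metric, a sketch that compares the change-failure rate of work touching the legacy module against everything else; the data shape and numbers are invented.

    // Share of deployments that needed a hotfix, split by whether the change
    // touched the legacy module. Invented data, purely to show the shape.
    interface Deployment { touchedLegacy: boolean; neededHotfix: boolean; }

    function changeFailureRate(deploys: Deployment[], legacyOnly: boolean): number {
      const subset = deploys.filter(d => d.touchedLegacy === legacyOnly);
      if (subset.length === 0) return 0;
      return subset.filter(d => d.neededHotfix).length / subset.length;
    }

    const deploys: Deployment[] = [
      { touchedLegacy: true, neededHotfix: true },
      { touchedLegacy: true, neededHotfix: false },
      { touchedLegacy: false, neededHotfix: false },
      { touchedLegacy: false, neededHotfix: false },
    ];

    console.log("legacy change-failure rate:", changeFailureRate(deploys, true));      // 0.5
    console.log("non-legacy change-failure rate:", changeFailureRate(deploys, false)); // 0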

This is also the idea of Jonathan Blow's "Preventing the Collapse of Civilization" talk https://www.youtube.com/watch?v=pW-SOdj4Kkk
Reminds me of Jonathan Blow's observation that knowledge doesn't necessarily make it to the next generation and progress is sometimes lost.

https://youtu.be/pW-SOdj4Kkk

I am not so sure. This is handwavy advice I always hear from contemporary business whizbangs, but I don't think it is objectively or universally true. Pedantic, careful, detail-oriented, perfectionist products that just work without things breaking all the time are a pleasure to use from the standpoint of the customer. Figure out the implementation details, funding, motivation, etc. as an entrepreneur - shipping broken products to paying customers is something that I only see in the software industry. A vacuum cleaner maker doesn't sell you a unit and then mail the missing screws to you 6 months later. Hell, just 20 years ago when you bought your SEGA game cartridge, it shipped as a final product. Not to be patched, ever. Once it left the factory, if you found a bug, tough luck. Yes, this analogy has its limits, but the Silicon Valley startup-hustle mentality is questionable.

I recommend watching this incredible talk (it's about software) : https://www.youtube.com/watch?v=pW-SOdj4Kkk

mytailorisrich
I think it isn't quality that should be sacrificed in order to ship but scope.

This is the idea at play with MVPs. Don't try to produce something with all the bells and whistles, reduce scope as much as possible, polish that, ship.

eekay
This ... it's like:

features != quality, look & feel != quality

Working software that provides value for its users is what it's about. Evaluating whether it provides value for others is best done by letting them use it.

eekay
Physical stuff != code...

It isn't about serving functionality half-baked but about serving minimal functionality (only what brings the most value to the users) and going from there.

It is irrational to think that you know exactly what people want or what is going to provide them with enough value to keep coming back to your service/product...

So why not start small by building the basics, releasing them, getting feedback and use that to move your product in sync with what the users want/need.

I loved the sega games as much as the next person, and I truly admire how they got so much shipped with such limitations.

But this points to one big problem for makers nowadays: building functionality has become so easy with all the frameworks/tools/no-code/... stuff that it has become too easy to ship too much (or to lose yourself in adding just a little feature here and there), straying away from shipping and getting that golden feedback from actual (paying) users.

I'll be checking out the video later, thanks for sharing and your feedback!

spectramax
> Physical stuff != code...

Think more abstractly. A business is delivering a piece of IP to the customer in exchange for monetary value. The medium of this IP can be physical or intangible. It is the delivery latency that's different. If those screws that I talked about arrived with about 45ms of latency from the warehouse to the customer, and fitted themselves in another 2 minutes, you could patch things instantly. Just because this latency is different doesn't mean you should ship MVPs, broken pieces of software that require constant patching. Take time, carefully think about the architecture of the software, think about extensibility and modularity. Avoid feature creep; instead work on the "just works" aspect of the product. This is why I love software from companies like Sublime HQ. They make fast, efficient, well-architected software that's a pleasure to use. Similarly, there is something beautiful about unix utilities, little pieces of software that just work :) Continuous delivery is great for huge software; Arch Linux, for example.

eekay
I totally agree that setting up a product that is extensible, stable, scalable, etc. is something that needs to be done.

As I see it, and as I'm currently executing, the focus on extensibility and modularity increases over time. I look at building that extensible software as something that starts in an MVP with a focus on 1K users max, and grows and becomes more important from that point on. So it is something you can do a while after the initial launch.

> Just because this latency is different, doesn't mean you ship MVPs, broken pieces of software that require constant patching.

I agree that the MVP should be working properly. No doubt there. But it should also focus on testing if it provides value (and see if it actually is viable?). By capping it at 1K users to start with you can focus on the value part instead of spending your time on tech implementation at first.

Especially for bootstrapping solo founders, I strongly believe that the focus should be on the value of the product in the beginning, not in the setup of the technical solution.

I'm going to use my own case as an example:

As a single developer I'm working on a product idea (sharing your videos via enhanced QR codes that enable Augmented video watching without the viewer having to install an app).

I could set up the infra, cloud services, build extensible pretty code and do the works in getting this ready to shine and go for 10K users.

But I chose not to. Right now, I'm focussing on the proof of concept (which is for showing ME that the idea is feasible and usable in the ways that I want it to work).

The PoC will only have the concept of working roughly and the code will be a lot of hacked-together code that makes sense to me. This isn't an MVP as I won't have user accounts, authentication, security, payments, .. implemented. Just the core functionality.

I'm going to share the product with a bunch of people in my network to get initial feedback about it and I'm going to use it myself.

I probably will use footage of using it, along with an explanation of the idea for a landing page.

I'll know within a week or two if the idea is good enough to keep using it myself and I'll have my network feedback along with things I didn't consider that need to be in check.

When the lights are green, I'll start a new solution with clean code and the core functionality from the PoC implemented so it's clean, expandable, maintainable, secure, tested, etc.

I'll have the minimum feature set around it (security, accounts, support channel, payments, ...) to make it a whole experience.

For the MVP my focus lies on reaching 10, 100, and then hopefully 1K users. And I will have a tonne of work lying ahead: fixing stuff, giving support and helping out.

I'll also have metrics about bandwidth usage, unforeseen scenarios, and people putting my product to work and stretching its limits in ways I didn't even consider.

And _that right there_, that moment after I hopefully have a bunch of users, is where I would make sure the number of users is capped, and where I start to invest time in making this more scalable, improving the app experience, seeing if there need to be changes to the services I use in the background (behind the API), etc.

There will be a lot I've learned by this point. And if the product has proven to be viable (I gain enough revenue from it to justify working on it), I'll be working hard to make it even more secure, stable, scalable, and future proof.

While writing this, I feel like we aren't that different in how we look at things. Perhaps it's semantics, perhaps I just need an extra cup of coffee.

I think that the focus on CI/CD, scalability, modularity starts with the MVP and increases from there.

I know there are quite a few products that ran a nice MVP and reworked their backend from scratch after some time to make things more flexible and elastic, to take on more users.

My mantra:

Code Hard, Ship Harder

Of course that's true - we probably don't need to start re-teaching the abacus... but I'm careful about getting too complacent. When everything has been going well for a long time it's tempting to think this will continue forever. But one, say, global pandemic can bring everything down - and some of our modern conveniences can be gone... possibly for centuries.

An interesting (programming) talk by Jonathan Blow on the subject of losing knowledge! https://www.youtube.com/watch?v=pW-SOdj4Kkk

jstummbillig
Well, yes, a big enough global pandemic can derail a lot of our current systems. That is exactly where it gets problematic to me: which kinds of crisis are reasonable scenarios that have to be taken into consideration, and to what extent? Is it reasonable to widely prepare people for a civilization-ending pandemic? A meteor impact? And to what extent?

The potential costs are limitless (both in money and in lifetime, because you can never be too prepared), the potential benefit is absolute, and the risk (when assessed by statistical impact on total human lives so far) very low. How do you possibly weigh such things?

blattimwind
All-digital communication systems are notorious for failing in natural and other disasters; that's the reason why radios, whether used by official agencies or hams or both, are very relevant in those circumstances. Hence this isn't really a discussion in my opinion.
lozf
> we probably don't need to start re-teaching the abacus...

I get where you're going but concerning the abacus, I respectfully disagree.

I'm guessing you haven't seen the way some young Asian kids start using a Japanese abacus (Soroban), and after a few years have built a mental model of it enabling them to perform mental calculations -- faster than with a calculator.

Most of the good videos on YouTube seem to have disappeared, but some short snippets remain.

[0]: https://en.wikipedia.org/wiki/Mental_abacus

adrianN
But when do people need to do more than a handful of mental calculations? While you might be faster with a mental abacus than with a calculator, I doubt that you beat an Excel spreadsheet. If Excel is unavailable for some reason either the source data you would need is unavailable too, or your mental calculation speed is not fast enough to be useful, even with the mental abacus.
syshum
I was recently in a line at the store where a person paid using cash, a rare thing these days. The cashier, a person in their mid-20s, could not even add up in their head the amount handed to them so they could enter that total in the computer to get the change amount. They ended up handing it back to the customer and asking "how much is there?"

I wept for the state of public education; basic addition is a lost skill, it seems

throw0101a
> I wept for the state of public education; basic addition is a lost skill, it seems

What does this have to do with "public education"?

They could have been taught it in elementary school, but if they were in their mid-20s, then it would probably have been over a decade since they were tested on it even in secondary/high school.

And that skill would have atrophied with non-use on a day-to-day basis.

syshum
basic addition is not a skill that atrophies. We are talking about the most basic math there is, adding whole numbers. 1+1 = 2, or in this case 50+20+1+1+1+5 = 78
throw0101a
> basic addition is not a skill that atrophies.

The evidence you provided suggests otherwise. :)

I'm sure the person can still add, but it's a bit slow(er) without practice, they got frustrated, and so handed back the change.

Someone posted this talk by Jonathan Blow on a story a week or two ago and it seems worth sharing here:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

You might find a talk[0] from Jon Blow on this very subject from last year interesting.

[0] https://youtu.be/pW-SOdj4Kkk

zemo
Jonathan Blow publicly criticizes Twitter engineering for being unproductive because they haven't released many user-facing features. He regularly berates the entire web industry without knowing anything about how it works.

His knowledge is limited to an extremely tiny domain of programming, and he regularly berates people for not following his philosophy. Meanwhile, it took him eight years to make The Witness. What did he build with his eight years time? A first person puzzle game with essentially no interactive physics, no character animations, and no NPC's to interact with. (I actually enjoy The Witness, fwiw.) The vast majority of developers do not have the budget to spend eight years on a single title, and wouldn't want to even if they did.

The most notable thing about Jonathan Blow is how condescending he is.

mrspeaker
If that's what you take away from Jonathan Blow then you need to detach from your emotions a bit. His condescending attitude towards web development annoys me (because I'm a web developer) - but he justifies his positions and is open to argument about them. His talks (like the one linked above) are really inspirational, and he's released two highly successful products: two more than the majority of people you'll ever meet.

He's passionate and smart and interesting - and writing him off like that, I think, is not justified.

zemo
I thought much more highly of him when I didn't work in games and I was a web developer. Now that I work in games I don't think very highly of him. He's an uncompromising fundamentalist that sends would-be game developers down impossibly unproductive paths of worrying about minutia that will never matter to their projects before they've ever built their first game. He's famous for being a rude guest in both academia and in conferences. He's basically the equivalent of someone that says that you should NEVER use node because node_modules gets big and if you're writing a server it should be in Rust and if it isn't you're bad and you should feel bad. His worldview does not at all take into account the differing problems faced by differing people, having differing access to resources and differing budgets and time constraints. He is _only_ concerned with how you should work if you have no deadlines, an essentially unlimited runway, and your most important concern is resource efficiency. For most games projects, the production bottleneck is not CPU or GPU efficiency: it's the cost of art production. What he has to say is, for the vast, vast majority of people, not applicable. He is, essentially, an apex predator of confirmation bias.

The thing about Jonathan Blow is that for a lot of people, he's the first graphics-focused outspoken programmer that they run into and so they think he's some form of singular genius. He isn't.

There is a related Jonathan Blow talk titled "Preventing the Collapse of Civilization" [0] that I came across recently from a HN post. You might be interested in watching it. Also, Casey Muratori's talk titled "The Thirty Million Line Problem" [1].

[0]: https://youtube.com/watch?v=pW-SOdj4Kkk

[1]: https://youtube.com/watch?v=kZRE7HIO3vk

Jonathan Blow did a really interesting talk about this topic:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

His point is basically that there have been times in history when the people who were the creative force behind our technology died off without transferring that knowledge to someone else, leaving us running on inertia for a while before things really started to regress, and there are signs that we may be going through that kind of moment right now.

I can't verify these claims, but it's an interesting thing to think about.

Angelore
This is an interesting talk, thank you. What frightens me, is that the same process could be happening in other fields, for example, medicine. I really hope we won't forget how to create antibiotics one day.
See also: in "Good times create weak men" [0], the author explains his interpretation of why. I can't summarize it well. It's centered around a Jonathan Blow talk [1], "Preventing the collapse of civilization".

[0] https://tonsky.me/blog/good-times-weak-men/

[1] https://www.youtube.com/watch?v=pW-SOdj4Kkk

guitarbill
I watched that talk a while ago. It is great, and it did change my opinion on a few things. Whether you agree with the premise or not, you can still learn something. For me, the importance of sharing knowledge within a team to prevent "knowledge rot". "Generations" in a team are much more rapid than the general population/civilisation, so that effect is magnified IMO.
The Soul of Erlang and Elixir, by Sasa Juric

video: https://www.youtube.com/watch?v=JvBT4XBdoUE

HN discussion: https://news.ycombinator.com/item?id=20942767

Preventing the Collapse of Civilization, by Jonathan Blow

video: https://www.youtube.com/watch?v=pW-SOdj4Kkk

HN discussion: https://news.ycombinator.com/item?id=19945452

srik
While on that topic, I also enjoyed this talk by Fred Herbert recently — Operable Erlang and Elixir. https://youtu.be/OR2Gc6_Le2U
anderspitman
Jonathan Blow's talk was the first one I thought of.
tyzerdak
Shit talk. I find it funny to watch a guy talk for an hour without being able to say anything clear about an alternative or a solution.
anderspitman
Ok
muhic
+1 for both talks

Saša Jurić is fantastic at condensing lots of information in a 1 hour talk without losing the audience, he gave another great talk this year called Parsing from first principles (https://www.youtube.com/watch?v=xNzoerDljjo).

I'm all for standardizing this but I can't hear about the Language Server Protocol without thinking about this rant: https://youtu.be/pW-SOdj4Kkk?t=2549

The rant itself is probably misdirected towards LSP when it should be directed to the circumstances that it has to deal with.

didibus
I started using Emacs about a year ago and I love it!!

The thing with Emacs is that you need to embrace Elisp, and the workflow of working inside a REPL. If you're someone who likes to have things your way, and who is more comfortable with a programming language than with a UI, Emacs is for you!

I personally use Spacemacs in Holy mode with Ivy and my own customization and some custom layers. I find this to be a great combination.

I use it for Clojure development with Cider and Java development with lsp-java. And for editing most files, like configs, scripts, bash files, json files, etc. I also use it for notes in org-mode.

The only things I rely on other editors for are Vim and less for dealing with large files like log files, and Notepad++ for when I need to do fancy search-and-replace stuff, especially if the file is large.

taeric
Thanks for that link. I confess I have my own prejudices against LSP. This view just strengthened it some.

Specifically, I feel that there is an anti-fragility in agreeing that we should all be able to write a program that can tell us about the programs we are writing. Designing languages so that we all have to use a central server seems to put us at the mercy of whoever wrote that server. When it is the same group that wrote the compiler, it seems more likely that they will have the same blind spots.

didibus
Interesting angle.

That said, LSP is supposed to be thought of more as the backend to the editor, not just a library. And the problems it solves are twofold:

1. Having to rewrite the same logic in all the programming languages used by all editors. As far as I know, there is no such thing as a language-agnostic library, but a server is language-agnostic.

2. A strong boundary protects against accidental coupling of the UI/UX and the business logic. Without it, you might at first reuse some part of another editor in yours, until slowly over time it becomes more and more coupled to that particular editor's UX/UI. Even LSP actually suffers a little from this, in that the APIs are clearly designed with VSCode in mind first, but the stronger boundary keeps it from getting even worse.
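For readers who haven't looked under the hood: LSP is JSON-RPC over a byte stream, so an editor written in any language talks to the one server with messages like the sketch below. The method and parameter names come from the LSP spec; the file URI and cursor position are made up for this example.

    // A single LSP request as an editor would send it: JSON-RPC 2.0 framed
    // with a Content-Length header.
    const completionRequest = {
      jsonrpc: "2.0",
      id: 42,
      method: "textDocument/completion",
      params: {
        textDocument: { uri: "file:///home/me/project/src/main.ts" },
        position: { line: 10, character: 12 },
      },
    };

    const body = JSON.stringify(completionRequest);
    const framed = `Content-Length: ${Buffer.byteLength(body, "utf8")}\r\n\r\n${body}`;
    // Any editor in any language can produce this; the server replies with a
    // list of completion items, so the language smarts live once, behind the protocol.
    console.log(framed);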

> What's depressing about the current world of software?

For example the insanity that it is impossible to understand what actually happens below the surface. The ideal that I have in the back of my mind is that a highly gifted person who can afford to spend a few years of intensive study can really understand each individual line/byte of code that gets executed on the computer (i.e. all of the software that runs on it) completely.

---

An interesting lecture by Jonathan Blow:

Jonathan Blow - Preventing the Collapse of Civilization

> https://www.youtube.com/watch?v=pW-SOdj4Kkk

---

Also read some texts of Alan Kay's STEPS project:

For a well-readable overview, consider https://blog.regehr.org/archives/663

Some progress reports:

> http://www.vpri.org/pdf/tr2007008_steps.pdf

> http://www.vpri.org/pdf/tr2011004_steps11.pdf

Lots of additional papers about STEPS (and other topics): http://www.vpri.org/writings.php

(I want to make it clear that I consider what came out of the STEPS project to be very disappointing :-( ).

kharak
I had this sentiment quite often at the beginning of my career. Now, I simply realize that my lack of understanding crosses every single piece of technology there is. Fridge? No idea. Cars? Well, I can drive one almost as well as I can open a fridge; everything else is magic. How about a bicycle? That's easy, right? Well, I challenge you to sit down right now and write a detailed schematic describing everything that is needed to make a bicycle what it is. Extra points if you can explain why riding on two wheels actually works out.

I know nothing about anything and almost don't care anymore. The last bit of wanting to understand is bathed in the absurdity of it all.

Thanks for the links, sounds interesting.

cr0sh
I'll take up the challenge:

Fridge - well, the standard compressor powered refrigeration system is essentially a heat moving system. What we are doing is moving heat from the inside of the system (which is insulated and mostly "isolated" from the outside of the system) to the outside of the system. This is why you can't cool your house by opening your refrigerator's door, because the heat from the inside of the fridge is moved to the "outside" (the room it is in) - eventually, it would reach equilibrium (and the compressor would probably be overheated). So anyhow, how does this work?

Well - basically by compression and expansion.

A working fluid (the refrigerant - usually a gas at low pressure and a liquid at high pressure) is compressed using a compressor, which also heats it up. The hot, high-pressure refrigerant is first pushed through coils on the outside of the system (the condenser), usually with a fan or other cooling blowing over them to carry the heat away, and the gas condenses into a liquid.

Note that in this system there is also a series of check valves and such to prevent the fluid and gas from "moving backwards" in the system; there are also stages where the gas and fluid exist at the same time (like a fizzy drink if you could see it).

The liquid then passes through an expansion device into the coils on the inside (insulated) part of the system (the evaporator). As it expands it gains volume (turns from a liquid back into a gas) and absorbs heat from the food compartment. The gas then carries that heat back out of the insulated side to the compressor, to be compressed and pushed through the outside coils again - and the cycle begins anew.

That's the basics of how a refrigerator works. Now - usually, if there's a freezer section, the freezer is where the heat exchange really happens, and cold air from the freezer is periodically circulated from the freezer to the refrigerator portion to keep that side at a cooler temperature.

Air conditioners work the same way - except the "inside" is the house and the outside is...the outdoors. Heat pumps can run in reverse, so to warm your house, heat is moved from the "outside" to the inside of the house, by the very same process (even when it is "freezing" outside - there is still a ton of heat energy available).

This cycle can also be done with heat alone, provided you have a working fluid (refrigerant) which is liquid at the "outside" temperature (under whatever pressure the liquid is at); add a check valve, heat the liquid up, it will expand and turn into a gas, feed it into the insulated part of the system to absorb more heat and then into some outside coils to be cooled down and turned back into a liquid. You can make a fridge where the refrigerant is gasoline (petrol) if you so wanted to. Ammonia is another alternative. LPG can also be used like this, too.

That's basically how a propane or solar powered refrigerator works (I won't go into how dark-sky refrigeration works, suffice to say it is another form of "solar" refrigeration, more of a "backwards" method).
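For reference, the first-law bookkeeping behind the explanation above (added here as a sketch, not part of the original comment): the heat rejected at the outside coils equals the heat pulled from the food compartment plus the compressor's work,

    Q_out = Q_in + W,    with COP = Q_in / W

which is also why an open fridge warms the kitchen on net: everything it removes from the room air comes back out of the condenser, plus the work W on top.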

Car? Well - a four-cycle engine is basically this: suck (intake), squeeze (compression), bang (ignition/power), blow (exhaust). Also, for an engine to run, you need fuel, spark, and air - miss any of those (or wrong timing or proportions) things won't work. I won't go into further detail - I really could, I'm sure you can see.

Suffice to say - I could also describe that bicycle very exactly, how it works, how it is steered (the whole "turn in the opposite direction of the lean" thing...), etc.

I've got a ton of this crap shoved into my head; I relish learning new stuff all the time, no matter what it is. Sometimes (most of the time) I don't absorb it all in one shot. But I usually retain enough of it to be able to understand more a second and third time around. Some things I will probably never fully understand (higher math is my bane, though I try - also, I'll probably never understand chemistry or biology on anything more than a superficial level), but that doesn't keep them from my interest.

Why am I like this? Not sure, I've always been a very curious and inquisitive person since I was a child. I've found that having these tidbits and more of knowledge and such socked away has helped me make connections and analogies in other areas, to solve problems - and sometimes raise other questions, which just leads me down another rabbit hole at times.

As you can tell - take me to a library and I could easily get lost for hours. Don't get me started on any of the warrens available on the internet (or the internet itself for that matter)...

kharak
Just finished the YouTube talk on degrading technology / knowledge. Really liked that perspective, although I disagree with quite a few of Jonathan Blow's points.

About the challenge: I had the feeling that you and a good chunk of readers here could explain a good deal of it all. But my point goes further: we all hit the wall of understanding sooner or later, and all that is truly required is to know how to use the technology. And that's exactly the same in information technology. I use databases daily. Do I know how they work? A bit, just enough for work, and that is all that is needed. That's why I like to disagree with Jonathan Blow. Some people abstracted a great deal of complexity away, because it's simply irrelevant. As a result, we've seen an explosion in tech, just not in the niches where he's looking. The modern tech ecosystem is the normal way of economics, where everyone specializes further and further, understanding their specific domain better at the cost of everything else. Today, the average Joe can code as a result of all these simplifications. But Jonathan Blow compares the elites of coding, people who build engines, with Joe. That's misunderstanding what has happened; the level of analysis is wrong. Also, people like John Carmack are not gone, they simply work on new problems.

And as you mentioned yourself, there is so much technology surrounding you that you have no idea how it works, like chemistry. Still, you use it to your benefit because other people who know this stuff simplified its use. And not just for you and me, but also for other chemists, like coders do for other coders.

Now a bit of a rant, skip it if you like: The school system, including university, killed so much of my curiosity, it's insane. I had to learn so much bullshit that I'm almost glad for one of my least great attributes, my bad memory. Because there was really no point remembering most of it. It's just too bad that I can't control my own garbage collector. Last year, I learned a great deal about machine learning (again, as I did in college, too) and now can remember almost nothing. Despite really enjoying the course and throwing myself into it. What I don't use, I forget, and of course I use only a tiny bit of skills and knowledge at work. I'm not happy with the way it is, but it simply is. I also learned so much about databases and liked that subject, but the only thing I really retain is what I use in my work.

That brings me to my final thought tidbit: My brain obsesses with ideas that you would put more in the category of philosophy. It's what you think about when you have nothing to do or start daydreaming that's probably where your brain is best at. I never asked people in tech what they think about in those moments. I'm really, really curious about the answer.

wolfgke
> My brain obsesses with ideas that you would put more in the category of philosophy. It's what you think about when you have nothing to do or start daydreaming that's probably where your brain is best at. I never asked people in tech what they think about in those moments. I'm really, really curious about the answer.

My experience with these is: I often think that I have some deep mathematical insight. The problem with these insights is that they "partly right" - they are often based on a good mathematical intuition of mine. But unluckily the mathematical research typically has come a lot farther than my insight - i.e. my insight is actually a really old hat in mathematical research.

So instead of daydreaming about some clever mathematical insight of yours, better cram advanced math textbooks - they will teach you a lot more than what you will ever come up with by daydreaming.

kharak
For sure. Study the literature of your subject. The point is, what do you think about automatically when you don't focus on a specific task? My expectation is, that people who are great at what they do continue to think and daydream about their work. When I read about philosophy (or related subjects), I'm continuing to think about it for days or weeks. That doesn't happen with tech, although that's my daily business.
> I'd expect the bar to preserve any specific content to be a lot lower than what we had in the past.

At the moment, I agree.

However, as older civilizations rose and fell, their knowledge lived on in physical media, directly accessible to anyone who was taught how to read the language; therefore the assumption was that "as long as the language lives on, books are a good archival medium".

When our modern digital civilization falls [1], I'm not sure most of it will be transmittable through long-term electronic archival media that are as easily readable (as easily as a written book, at least) by future civilizations, because not only does the language have to live on, but the technology needed to access the content also has to keep functioning.

I also agree that some contents will always be more relevant to "backup". As for everything that happened on Facebook (same can be said for most social platforms), it is likely to entirely disappear with the company. One could even see it as a parallel civilization, that has no means of self-preservation other than that of their ruling entity's business interest.

[1] Jonathan Blow - Preventing the collapse of civilization - https://www.youtube.com/watch?v=pW-SOdj4Kkk

Aug 20, 2019 · 3 points, 0 comments · submitted by tosh
Relevant (arguably overblown) talk by Jonathan Blow: https://www.youtube.com/watch?v=pW-SOdj4Kkk
shurcooL
Thank you for sharing, I really enjoyed this talk. I somehow missed it when it happened.
_carl_jung
No problem, I loved it too.
Jul 28, 2019 · akkartik on Antikythera Mechanism
Jonathan Blow draws some interesting implications of discoveries like this one in a recent talk: https://www.youtube.com/watch?v=pW-SOdj4Kkk
This is part of the theme of Jonathan Blow's talk here, which is worth watching: https://m.youtube.com/watch?v=pW-SOdj4Kkk
Jul 11, 2019 · Qwertystop on Twitter was down
Relating to 1: https://www.youtube.com/watch?v=pW-SOdj4Kkk (Jonathan Blow's "Preventing the Collapse of Civilization"... perhaps a melodramatic title, but well-said overall.)
If you asked me what the 4 best documents regarding bootstrapping are i'd say:

* Egg of the Phoenix (Blog post) - http://canonical.org/~kragen/eotf/

* The Cuneiform Tablets of 2015 (Blue-sky academic research) - http://www.vpri.org/pdf/tr2015004_cuneiform.pdf

* Preventing The Collapse of Civilization (Video) - https://www.youtube.com/watch?v=pW-SOdj4Kkk

* Coding Machines (scifi story about trusting-trust attack) - https://www.teamten.com/lawrence/writings/coding-machines/

kragen
This is awesome! Thank you! And thank you for the flattering reference to my own thought experiment there.
Jun 05, 2019 · 3 points, 0 comments · submitted by bevinahally
This reminds me of recent talk by Jonathan Blow [1], where he talks about how we've made very little progress in the field of software and anything that appears to be progress is just software leveraging better hardware.

It's quite scary how low our standards have gotten.

[1]: https://www.youtube.com/watch?v=pW-SOdj4Kkk

If you'd like to venture further into this rabbit hole I can recommend "Preventing the Collapse of Civilization": https://www.youtube.com/watch?v=pW-SOdj4Kkk&list=LL6MdPYF0rD...

And "The Thirty Million Line Problem": https://www.youtube.com/watch?v=kZRE7HIO3vk

May 20, 2019 · 9 points, 2 comments · submitted by btrask
partlyFluked
Is the introduction of WASM a sort of compatibility layer combining this tree of dependencies?

In the sense that all users are again able to share/create code/'apps' that are compatible on all architectures and software stacks. I can imagine a future where the OS is just the snappiest way to render a browser and nothing else. (Although my experience with the chromebook blunts this desire a great deal.)

naikrovek
Once in a while I will try to convince co-workers about the idea in this video, and every time I am viewed as an ignoramus. I mean, OBVIOUSLY moving all code to web platforms is a good idea, right? How dare I view things differently?

I think efforts to convey this idea outside of a lecture are probably futile. People are just far too self-serving and contrarian to believe in the legitimacy of an overall trend if any tiny, limited counter-examples exist, and they will stop listening as soon as they have an opportunity to respond.

May 19, 2019 · 4 points, 2 comments · submitted by davemp
detaro
previous large discussion: https://news.ycombinator.com/item?id=19945452
leshokunin
For those who aren’t sure they want to go through the video: behind the somewhat arrogant title, you’ll find a compelling talk about why software engineering has declined and how it’s affecting society.
May 18, 2019 · 231 points, 116 comments · submitted by dmit
austincheney
As a JavaScript developer I strongly resonate with the quote at 14:50 into the video. In summary, all of the silicon industry's chips at the time were full of defects, often the same defects across various vendors. The industry was completely aware of this. The problem is that the original generation of chips was designed by old guys who figured it out. The current generation of chips (at that time) was designed by youngsters working in the shadow of the prior generation and not asking the right questions, because they were not aware of what those questions were.

A decade ago JavaScript developers had little or no trouble working cross browser, writing small applications, and churning out results that work reasonably well very quickly. It isn't that cross browser compatibility had been solved, far from it, but that you simply worked to the problem directly and this was part of regular testing.

That older generation did not have the benefit of helpful abstractions like jQuery or React. They had to know what the APIs were and they worked to them directly. The biggest problem with this is that there weren't many people who could do this work well. Then shortly after the helpful abstractions appeared and suddenly there was an explosion of competent enough developers, but many of these good enough developers did not and cannot work without their favorite abstractions. These abstractions impose a performance penalty, increase product size, impose additional maintenance concerns, and complicate requirements.

The ability to work below the abstractions is quickly becoming lost knowledge. Many commercial websites load slower now than they did 20 years ago despite radical increases in connection speeds. To the point of the video this loss of knowledge is not static and results in degrading quality over time that is acceptable to later generations of developers who don't know the proper questions to ask.

shams93
Well, I came from the pre-jQuery era, but the whole industry is obsessed with React. There are tons of tools available and web components are becoming standard. Ultimately, using native APIs is far more performant, but it's also more standard and consistent, and it's easier than in the old days to write pure vanilla JS.
austincheney
I have heard people say this in justification of their framework, library, abstraction, whatever. The reality is that, from my experience, I can easily replace maybe 8 other JavaScript developers.

I am not saying that out of arrogance or some uninformed guess at my radical superiority. I am saying this out of experience. It isn't because I am smart or a strong programmer. This has proven true for me, because when writing vanilla JS I am the bottleneck and the delay. I am not waiting on tools, performance delays, code complexity, or anything else. If there is bad code it's because I put it there and I am at fault, and so I have nothing to blame but myself. Knowing this, and I mean as a non-cognitive emotional certainty, means I can solve the problem immediately with the shortest effort necessary, or it isn't getting solved ever, and that changes my priority of effort. It also means not tolerating a slow shitty product since you are fully in control (at fault).

When people go through project management training I tell them there are only two kinds of problems: natural disasters and bad human decisions. When you can remove blame from the equation the distance between you and the problem/solution becomes immensely shorter.

chii
> not asking the right questions because they were not aware of what those questions were.

To me, this sounds like NIH syndrome: those who are tasked with creating new stuff are either lacking comprehensive education, or the "old guard" did not transmit the knowledge in a more permanent form (like a book).

> many of these good enough developers did not and cannot work without their favorite abstractions

I would argue that those were not "good enough", but "barely knew enough". My mantra for using a library is: if you could've written the library yourself, then use it. Otherwise, you don't know enough, and using it as a black box is certainly going to lead to disaster in the future (for you or some other poor soul).

noir_lord
> My mantra for using a library is: if you could've written the library yourself, then use it. Otherwise, you don't know enough, and using it as a black box is certainly going to lead to disaster in the future (for you or some other poor soul).

That is largely my attitude to using any package/library: if the entire dev team gets hit by a bus tomorrow, can I maintain this (i.e. keep it working reliably in production)? If the answer is no, then I nearly always avoid it, and if I can't avoid it, I wrap it in my own layer so that I can replace it later.

I've been an enterprise developer for a long time so my worldview is shaped by "this will likely stick around twice as long as anyone expects minimum" though.
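A minimal sketch of that wrapping layer, with hypothetical names (the vendor package and its API below are invented): the rest of the codebase depends only on the narrow interface, so swapping the dependency later means touching one file.

    // Stand-in for a hypothetical third-party package; in real code this
    // would be an import, and its API here is invented for the sketch.
    const vendorLib = {
      async renderHtmlToPdf(html: string): Promise<Uint8Array> {
        return new TextEncoder().encode(`%PDF-stub for ${html.length} chars`);
      },
    };

    // The only surface the rest of the codebase is allowed to touch.
    interface PdfRenderer {
      render(html: string): Promise<Uint8Array>;
    }

    // Thin adapter over the vendor package. If the package is abandoned or
    // replaced, only this adapter changes; callers keep depending on PdfRenderer.
    class VendorPdfRenderer implements PdfRenderer {
      render(html: string): Promise<Uint8Array> {
        return vendorLib.renderHtmlToPdf(html);
      }
    }

    new VendorPdfRenderer().render("<h1>invoice</h1>").then(bytes =>
      console.log("rendered", bytes.length, "bytes"));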

Filligree
I have a friend with that attitude. It's led him to implementing his own encryption libraries, UI libraries and so forth, and...

Well, I have to admit he's smart, but the software he writes outside of work is some of the worst I've ever used. Furthermore, I was able to crack his RSA implementation with a straightforward timing attack.

Some things shouldn't be reimplemented.

Perhaps he could have implemented them, then used something else? True, but that's a hard sell.

LandR
First rule of crypto, never roll your own crypto!
detaro
Surely you can at least estimate if you could have created/could maintain a library without actually doing a reimplementation, e.g. by diving into a bug or two?
jaabe
There is too much copy pasting in web-development. I mean, we still use the MVC pattern to run almost all our web-apps because the MVVM pattern would require us to hire additional developers to do the same thing.

Between Ajax and JQuery we can build things that are perfectly reactive and come with the added benefit of being extremely easy to debug. They can’t run offline, and if we need to build something that does, then we’ll turn to something like angular/react/ (typically Vue, because we actually use Vue components in our MVC apps from time to time, but which one is beside the point).

When we interview junior developers about this, they often think we’re crazy for not having access to NPM, but we’re the public sector, we need to know what every piece of our application does. That means it’s typically a lot easier to write our own things rather than to rely on third party packages.

icebraining
Can you explain on how MVVM requires more developers vis-a-vis MVC? And why would jQuery not work offline?
jaabe
It’s several times more complicated to build business logic on both the front and the backend.

It’s several times more complicated to debug something that runs on the clients device than something that runs on your server.

arendtio
Actually, I don't think that abstractions are the problem. I mean, the whole OSI model is made of abstractions. Abstractions are at the core of software development.

And I also don't think that Jquery is the problem. Jquery just made JS worth learning. Before you had to spend an insane amount of time just working out implementation specifics that changed every few months.

However, the point where I do agree with you is that we have a performance issue with JS. And I am not talking about slow JS engines. I am talking about developers who are not aware of how costly some operations are (e.g. loading a bunch of libraries). Yes, that is an issue that naturally arises with abstractions, but to conclude that abstractions themselves are the problem is wrong.

I think the problem is more about being aware of what happens in the background. You don't have to know every step for every browser and API, but loading 500KB of dependencies before even starting your own scripts is not going to be fast in any browser.

PavlovsCat
> However, the point where I do agree with you is that we have a performance issue with JS. And I am not talking about slow JS engines. I am talking about developers who are not aware of how costly some operations are (e.g. loading a bunch of libraries). Yes, that is an issue that naturally arises with abstractions, but to conclude that abstractions themselves are the problem is wrong.

I could not agree more.

https://jackmott.github.io/programming/2016/07/22/making-obv...

Notice "Javascript map reduce (node.js) 10,000ms" vs "Javascript imperative (node.js) 37 milliseconds"

I've seen the map reduce way defended as being "more readable and maintainable", with plenty of agreement. When I contested it, mental gymnastics ensued and did not let up. Nobody dislikes performance, not really, but I think some don't like reflecting on how they arrived at their opinions. That's the bit they're really invested in.
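For concreteness, here is the shape of the comparison the linked post makes, as a small sketch (absolute numbers will vary by runtime and engine version; the point is the allocation behavior, not the exact figures):

    // Sum of squares over a large array, written two ways. Chained map/reduce
    // allocates an intermediate array of the same size; the imperative loop
    // allocates nothing.
    const values = new Array<number>(10_000_000).fill(0).map((_, i) => i % 100);

    function sumOfSquaresFunctional(xs: number[]): number {
      return xs.map(x => x * x).reduce((acc, x) => acc + x, 0);
    }

    function sumOfSquaresImperative(xs: number[]): number {
      let acc = 0;
      for (let i = 0; i < xs.length; i++) acc += xs[i] * xs[i];
      return acc;
    }

    console.time("map/reduce");
    sumOfSquaresFunctional(values);
    console.timeEnd("map/reduce");

    console.time("imperative");
    sumOfSquaresImperative(values);
    console.timeEnd("imperative");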

And in general, "what does this abstraction stand for?" is a very dangerous question; if you ask it about computer stuff, you might also ask it about other things, and there are more groups that don't like that than there are people in the world. Not to make this too political, but I think the pressure against thinking for yourself is way, way bigger than the demand for performance. Just think of Ignaz Semmelweis.

Another issue is that many things are hard for us to visualize, as Grace Hopper explains: https://www.youtube.com/watch?v=JEpsKnWZrJ8

mattmanser
In the jQuery era (the Firefox release and IE6-10ish), they didn't change every few months; in fact, they didn't change at all for about a decade, which was part of the problem.

Working with the DOM in JS wasn't hard, it was just time-consuming and required lots of boilerplate code.

JQuery made it quick and easy and cross-browser even if FF was still a small %.

JavaScript itself didn't change AT ALL for years before and after jQuery came out, so I have no idea what you are talking about. Plus HTML 5 came years after jQuery, with HTML 4 released in the 90s; you've got the wrong recollection of history.

The lack of change was part of what made jQuery so ubiquitous.

arendtio
I wonder a bit why you write 'IE6-10ish' as IE7 was a big change already (not talking about IE8 or IE9, or what the other vendors did during that period). So when jquery was released, we had the split between standards and IE(6) compliant implementations already and the whole browser development started to get traction again.

So yes, the standards didn't really change during that period, but the real-world implementations did. And jQuery gave you a way to learn just one thing and not care about what all those browser vendors were doing.

mattmanser
When IE7 was released absolutely no changes to JavaScript or HTML happened, IE just became slightly more standards compliant.

AFAIK the big thing in IE7 was that it had tabbed browsing, like FF. And slightly improved js performance, that V8 put to shame a couple of years later.

The actual split between IE6/7 and FF was mainly in the Ajax syntax, not the dom, etc.

My impression from what you're saying is that you didn't program js in the 2000s, did you? I did.

arendtio
As a matter of fact, I did. But obviously, it seems that we didn't experience the events in the same way and I don't have the impression that you are even trying to understand what I am writing.
darepublic
With libraries like React Static that prebuild the DOM of your page, you don't have to worry about that.
austincheney
The abstractions aren't creating this problem; the developers unwilling to work beneath the abstractions are.

I wrote the following tool in less than 90 minutes 5 years ago, because I had some time left over before presentations at a company hack-a-thon. I updated it recently with about another 90 minutes of work.

https://github.com/prettydiff/semanticText

That tool was trivial to write and maintain. It has real utility value as an accessibility tool. I could write that tool because I am familiar with the DOM, the layer underneath. Many developers are not even aware of the problems (SEO and accessibility) this tool identifies much less how to write it. jQuery won't get you the necessary functionality.
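As a flavor of the kind of direct DOM work being described (a trivial sketch, not the linked tool itself), here is a check for images with no usable alt text, using nothing but the standard DOM API:

    // Runs in a browser, no framework: just the DOM API the comment is
    // talking about. Collects every <img> without usable alt text.
    function findImagesMissingAlt(root: Document): HTMLImageElement[] {
      const images = Array.from(root.querySelectorAll("img"));
      return images.filter(img => !(img.getAttribute("alt") ?? "").trim());
    }

    for (const img of findImagesMissingAlt(document)) {
      console.warn("image missing alt text:", img.src);
    }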

mercer
Why are so many of your comments of a gate-keeping, self-congratulatory nature, or blanket criticisms of others?

I find it frustrating because I do think you have plenty of valuable stuff to say, but you're consistently presenting it in a rather unappealing package and I don't really understand what you get out of that.

austincheney
Gate-keeping yes, self-congratulatory no. This subject is personally sensitive to me because it has immediate real world consequences that impact my livelihood and choice of employment.

I am away from home on a military deployment at this time and I am constantly thinking about what I should do when I return to the real world. I am actively investigating career alternatives by dumping software development, because I honestly believe my career is limited by the general unwillingness of my development peers to address actual engineering concerns out of convenience and insecurity.

jstewartmobile
I was with him until the middle.

Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive. And as for the whipper-snappers: people don't get very far writing programming languages/video games/operating systems without knowing their stuff.

The real boogeyman is feature combinatorics. When making a tightly-integrated product (which people tend to expect these days), adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.

Take OpenBSD for example: When you have a volunteer project by nerds for nerds, prioritizing getting it right (over having the fastest benchmark or feature-parity with X) is still manageable.

Bring that into a market scenario (where buyers have a vague to non-existent understanding of what they're even buying), and we get what we get. Software companies live and die by benchmark and feature parity, and as long as it crashes and frustrates less than the other guy's product, the cash will keep coming in.

TeMPOraL
> When making a tightly-integrated product (which people tend to expect these days)

Do they? It was my impression that the recent evolution of user-facing software (i.e. the web, mostly) was about less integration, due to reduced scope and capabilities of any single piece of software.

> adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.

This sounds true on first impression, but I'm not sure how true it really is. Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore. A feature to a program is like a program to OS, and yet most software doesn't involve extensive use, or changes, of the operating system.

The most complex and feature-packed software I've seen (e.g. 3D modelling tools, Emacs, or hell, Windows or Linux) doesn't trigger combinatorial explosion; every new feature is developed almost in isolation from all others, and yet tight integration is achieved.

jstewartmobile
Pretty sure making emacs render smoothly in 2016 was not an isolated change--even if the code change were only a single line.

https://www.facebook.com/notes/daniel-colascione/buttery-smo...

Same story for speeding up WSL.

https://devblogs.microsoft.com/commandline/announcing-wsl-2/

Or, just think about your phone. If I put my head to the speaker, a sensor detects that, and the OS turns off the screen to save power. If I'm playing music to my Bluetooth speaker, and a call comes in, it pauses the song. When the call ends, the song automatically resumes.

KT's UNIX 0.1 didn't do audio or power management or high-level events notification.

TeMPOraL
> Pretty sure making emacs render smoothly in 2016 was not an isolated change--even if the code change were only a single line.

This was a corner case. What I meant is the couple dozen packages I have in my Emacs that are well-interoperating but otherwise independent, and can be updated independently.

> Or, just think about your phone. If I put my head to the speaker, a sensor detects that, and the OS turns off the screen to save power. If I'm playing music to my Bluetooth speaker, and a call comes in, it pauses the song. When the call ends, the song automatically resumes.

These each affect a small fraction of code that's running on your phone. Neither of them is e.g. concerned with screen colors/color effects like night mode, or with phone orientation, or with notifications, or countless other things that run on your phone in near-complete independence.

jstewartmobile
Buttery smooth emacs is not a corner case. When working on API-level things (for those who don't know--emacs is practically an operating system), if we care about not breaking things, one must be highly cognizant of all the ways that API is consumed. Even if the final change ends up being very small code-wise, the head-space required to make it is immense. That is why we have (relatively) so few people who make operating systems and compilers that are worth a damn.

Most emacs plug-ins aren't operating at that level of interdependence. This one is working on a text buffer at t0, and that other one is working on a text buffer at t1. Of course, the whole thing can be vastly simplified if we can reduce interdependence, but that is not the way the world works. Typical end user doesn't want emacs. Typical end user wants MS Word.

Even if I accepted the replacement of my word "feature" with your word "program" (not that I do), one only needs to look at Docker's prevalence to see the point still holds. Interdependence is hard, and sometimes unavoidable.

qwsxyh
> Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore.

Turns out when you change the words of a statement, it changes the meaning of that statement.

z3t4
Programs can often be extended with new features using the plugin pattern.
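A minimal sketch of the pattern, with hypothetical names: the host defines one narrow interface and iterates over whatever was registered, so adding a feature means adding a plugin rather than editing the host.

    // The contract every plugin must satisfy.
    interface Plugin {
      name: string;
      onDocumentSaved(text: string): void;
    }

    // The host knows nothing about individual features beyond this registry.
    const plugins: Plugin[] = [];
    function registerPlugin(p: Plugin): void {
      plugins.push(p);
    }

    registerPlugin({
      name: "trailing-whitespace-report",
      onDocumentSaved: text => {
        const count = text.split("\n").filter(line => /\s+$/.test(line)).length;
        console.log(`lines with trailing whitespace: ${count}`);
      },
    });

    // The host calls every plugin on the same event; plugins stay isolated
    // from each other unless they choose to interact.
    function documentSaved(text: string): void {
      for (const p of plugins) p.onDocumentSaved(text);
    }

    documentSaved("hello \nworld\n");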
pixl97
And plugins can include logic that interacts in completely unexpected ways with existing plugins. Testing all possible scenarios can be near impossible.
majkinetor
And this is actually more the rule than the exception - once you have more than 2 plugins, plugins colliding or blocking updates of the main software in the future is more or less the norm.

Plugin minimization is mandatory IMO.

z3t4
Plugins should not depend on other plugins. Sometimes you have to move functionality to core.
majkinetor
Even if so, they might introduce a different dynamic on the same core feature.

Also, the core plugin system evolves and most of the time does not support "older" plugins.

I had this problem in basically any software I used with plugins be it Winamp, Redmine, FL Studio or whatever.

z3t4
Not an issue if there are no third-party plugins. It's however hard to resist allowing third-party plugins when you already have the architecture. It's also hard to resist feature bloat when adding new features is seemingly free.
majkinetor
If there are no third-party plugins, then you don't have plugins - it's an internal architectural decision, not relevant to end users.

Having plugins means anybody should be able to create one.

I remember Vagrant has support for all historic plugin versions no matter the current API version. That is a rare goodness, but it prevents only one type of problem: the inability to update the core.

std_throwaway
> Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive.

I think he means generations in terms of the workplace/politics where you can have a generation change every few years. Meaning that most old guys go and new guys come. Technically you could ask the old guys because most are still alive but it doesn't happen for a lot of different reasons.

houseinthewoods
heh thx for editing out the weird cynical part ;)

moved from phone to computer to argue but it was too late

jstewartmobile
My bad
microcolonel
I tend to agree that OpenBSD is hitting the spot a lot better, but the problem I have is that there's not enough momentum that it keeps up with hardware releases. They had been maintaining kernel drivers for AMD GPUs for a while, but it seems they stopped updating regularly. I now own no hardware from the last decade that OpenBSD can get accelerated graphics on, and I need accelerated graphics to power the displays that allow me to be productive (by showing me enough information at once that I can understand what I'm doing).

I was having a conversation with somebody the other day about a privacy concern they were addressing, where a company was offering to monitor cell signals for some retail analytics purpose; and it was genuinely surprising to them that mobile phones broadcast and otherwise leak information that can be used to fingerprint the device. I think it's rather shocking the amount of ignorance people allow themselves to have when it comes to things like this. Furthermore, the way she was talking about it, it seems she thought it was the responsibility of basically anyone but the owners of these devices to consider things like this, or even ask the questions that would tell you something like this exists.

jmiskovic
Great talk. I agree that software is on the decline. You can see it in your OS, on the web, everywhere. Robust products are replaced with 'modern' crappy redesigns. We are surprised if the thing still works after 5 years.

I don't agree with his conclusions. The real source of the problem is that we now have maybe 100,000x more software than we had in the 70s. That means many more programmers, so it's not just the 1% smartest greybeards as before. We need more abstractions, and yes, they will run slower and have their issues.

Also, not everybody is sitting at the top of the hierarchy of abstractions. Some roll up their sleeves and work on JIT runtimes and breakthrough DB algorithms.

All those blocks of software need to communicate with the platform and between themselves. IMO the way out is open source. Open platforms, open standards, open policies. Every time I found a good piece of code in a company's huge codebase, it was an open source library. Every time. You have to open up to the external world to produce a well-engineered piece of software. The lack of financial models for open source is the obstacle. We should work on making simple and robust software profitable.

josephg
I'm consistently confused by how we manage to run so many more lines of code, yet our software doesn't really do anything it didn't do in the late 90s. Back then I chatted over IRC, browsed the web and played video games. Now I chat over whatsapp, browse the web and play indie video games. In 2009 Chrome had 1.3M SLOC. Today it has 25M. And that number has been going up linearly - since 2016, not including comments, Chrome has added 2.1M SLOC per year. That's another entire 2009 Google Chrome web browser in code added to Chrome every 8 months. Can you name a single feature added to Chrome in the last 8 months? I can't. As Blow says, productivity (measured in features per LOC) has been trending toward 0 for a long time. What a tragic waste of Google's fantastic engineering talent.
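A quick back-of-the-envelope check of that "every 8 months" figure, using only the SLOC numbers quoted above:

    # Figures quoted above: 2009 Chrome ~1.3M SLOC; ~2.1M SLOC added per year since 2016.
    chrome_2009_sloc = 1_300_000
    sloc_added_per_year = 2_100_000
    months_per_2009_chrome = 12 * chrome_2009_sloc / sloc_added_per_year
    print(f"{months_per_2009_chrome:.1f} months")  # ~7.4 months, i.e. roughly "every 8 months"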

I pick on Chrome because the data is available [1]. And because I regretfully have about 8 copies of that code on my computer. But I bet we'd see the same curve with lots of modern software. The LOC numbers for Microsoft Windows have become so large that I can't really comprehend how so many programmatic structures can do so little.

I once heard this architecture pattern referred to as a pile of rocks. Piles of rocks are really simple and elegant - you can always add features to your pile of rocks. Just add rocks on the top until it's tall enough! Piles of rocks are really easy to debug too. Just shake the pile (unit tests), and when anything collapses, add rocks until the hole is filled in (= patch that specific issue). Then rinse and repeat. You don't need to bother with modelling or proofs or any of that stuff when working on a pile of rocks.

Look at those Haskell programmers over there building aqueducts using archways. Peh. They should get jobs writing real programs.

Ahem.

[1] https://www.openhub.net/p/chrome/analyses/latest/languages_s...

eafkuor
How is it a great talk? He makes a lot of statements without backing them up at all
jmiskovic
He voices his opinion clearly and holds the audience's attention well. He provides quite a few interesting examples of lost knowledge and civilizational decline. He goes against the common assumption that tech is ever-evolving and shows how and why the software infrastructure is failing under our feet.

The day after I watched the talk I was compelled to watch the mentioned Apollo 11 documentary. For me this was a great talk, but feel free to disagree.

eafkuor
I don't disagree with you, actually. I was fixating on his unproven statements and I couldn't see past that. Can't really argue with you.

Was the documentary "When We Left Earth"?

jmiskovic
It's called "Apollo 11" released by Neon studio this year. It's historical footage of the whole trip, without any commentaries or interviews. Well put together, immersive and epic. Doesn't go into much science or engineering beyond showing the massive scale of rocket and launching platform.
imiric
> The real source of problem is that now we have maybe x100000 more software than we had in 70s. It's that many more programmers, so not just the 1% smartest greybeards as before.

The greybeards from the 70s weren't much smarter than today's programmers. They were the same curious hackers from today, with the advantage of being born in the right place at the right time, when the technology was still developing, so they were forced to build their own tools and operating systems.

> We need more abstractions, and yes, they will run slower and have their issues.

I disagree, and side with Jon Blow on this: abstractions (if done well) create the illusion of simplicity and more often than not just hide the complexity of the lower levels. Sometimes this complexity is indeed too difficult to work with, but often it's the problem itself that needs to be simplified instead of creating an abstraction layer on top.

I think as an industry we've failed to make meaningful abstractions while educating new programmers on the lower level functionality. A lot of today's programmers learned on Python, PHP, Ruby, JavaScript, etc., which are incredibly complex tools by themselves. And only a minority of those will end up going back and really learning the fundamentals in the same way hackers in the 60s and 70s did.

> IMO the way out is open source. Open platforms, open standards, open policies.

Agreed. But education and simplification are also crucial.

jmiskovic
The argument wasn't that people were smarter back in the 70s. Instead, computers were rare, and only the most motivated individuals could get access to them. They were curious hackers, as you say, and today's curious hackers are just as good. They form maybe 1% of the programmer population.

Regarding abstractions, I feel you are talking about leaky abstractions - systems that offer a simplified interface but still manage to burden you with all the implementation details. It's hard to identify such poorly engineered building blocks until it's too late. Thus layers get built on top of them and it becomes too costly to go back and rework the stack. This is a problem with development processes favoring new features over paying down the accumulated tech debt.

Still, good building blocks can exist. You can (and often must) have complexity, if it's properly isolated and does not leak. I'd say the JVM is a great example of an abstraction that is slower and internally more complex than native software, but brings a lot to the table as a platform to build on. Other examples: BeamVM, ZeroMQ, Lua (leaky, but at least very simple). Browsers, unfortunately, are too burdened with legacy and security issues.

I feel my formal education has failed to teach me how to design proper interfaces between systems. Instead we are taught pointer arithmetic and mainstream OOP ("cat IS an animal").

roenxi
The risk is that things 'just work' for extended periods of time and the maintainers are optimised out of the system because they aren't needed in the short term.

My personal guess at why civilisations can collapse so slowly (100s of years for the Romans, for example) is that the people who maintain the political systems do too good a job, and so the safeguards are forgotten.

For example, after WWII the Europeans learned some really scary lessons about privacy. The Americans enjoyed greater peace and stability, so people with privacy concerns get less air time in places like Silicon Valley or Washington. The two-step process at work here is that when things are working, standards slip and the proper responses to problems are forgotten. Then, when things don't work, people don't know what to do and the system degrades.

nabla9
Sean Carroll's podcast episode 37, "Edward Watts on the End of the Roman Republic and Lessons for Democracy", has a good discussion of this (there is also a transcript): https://www.preposterousuniverse.com/podcast/2019/03/11/epis...

Basically there are norms and unwritten understandings, deeply held ideas about what is not acceptable. Rulers don't push their power to its full extent. Then someone comes along and starts to push, and gradually what is acceptable changes.

d_burfoot
I loved this talk, and in particular the point about programmers being forced to learn trivia instead of deep knowledge. I just started a new job at a big tech company, and I've spent a whole week so far trying to figure out how to use the build tool. The frustrating part is that most of the software modules my team is working on aren't very complicated. The complexity comes from pulling in all sorts of 3rd party libraries and managing their transitive dependencies.
js8
It seems to me that there is a cultural problem where deep expertise is not valued, because it is difficult to understand, and getting somebody "flexible" is easier.

I was just at a workshop about https://en.wikipedia.org/wiki/Design_thinking. The whole premise was that you don't actually need to hire an (expensive, inflexible) expert who understands how something is done; rather, what you need to do is "observe" an expert.

But imagine what happens when everybody does that! Everybody gets rid of their experts, assuming that the client (who they are supposed to provide the service for) has the actual expertise. And they are assuming the same about their clients and so on. The end result is complete disregard for expertise.

So expertise is a positive externality, in an economic sense. Nobody is incentivized to keep more of it than necessary. This leads to losses over time.

pixl97
This is very common in industries with boom/bust cycles. Lots of experts in the boom; they leave when the bust comes; then, when the next boom arrives, there are lots of problems scaling those processes up quickly because of the lack of expertise.
glandium
He briefly mentions the Boeing 737 MAX issue, but he understates the problem. Sure, there was a software problem, but the underlying issue was the whole notion that everything can be "fixed" (worked around) by software. That it's fine to make changes to the plane's aerodynamics and compensate with software so that it would seemingly act like the previous model.
paulkon
I'm reminded of this explanation that was posted on HN a couple of months back: https://twitter.com/trevorsumner/status/1106934362531155974

Seems like it was a physical engineering and business problem and software was added to (inadequately) compensate.

magicbuzz
He obviously uses Windows for his anecdotal examples, but I don't think you can point the finger at Windows specifically. I think it's consistent across OSes. I see regressions in iOS since the earlier versions, as well as in Linux and the applications I use.

My IDE stopped providing menus. It's open source, so I just shrug and track the issue on GitHub.

reactspa
Fantastic talk.

Portable Apps on Windows are a hedge against some of the angst he describes. (E.g. the part about updates changing a lot of things around or causing failure-to-launch problems).

E.g., I still use WinAmp to play MP3s. It's a portable version that doesn't need installation (so I can use it on my locked-down work computer). The UI hasn't changed in 20 years. Newer file formats can be played after adding plug-ins.

I've put together a whole bunch of Portable Apps, and nowadays I first try to find a portable version of an app I need before a non-portable version.

stallmanite
100% concur re: portable apps. It’s a better way to live from an end-user standpoint in my experience. Portableapps.com and librekey both are excellent
arketyp
The repeal of Moore's law will be a blessing in disguise, I think. Not only will programmers need to get clever in a traditional sense but, also, a new era of specialized hardware will require a more intimate understanding of the bits and pieces. I'm optimistic.
RachelF
Compared to the 1990s, Moore's law stopped really making CPUs faster around 10 years ago. We now have more cores, but getting performance gains from parallelism can be hard.
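A small sketch of why parallelism is a hard way to get speedups: Amdahl's law, assuming (purely for illustration) a workload that is 90% parallelizable:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
    # the work that can run in parallel and n is the number of cores.
    def amdahl_speedup(p: float, cores: int) -> float:
        return 1 / ((1 - p) + p / cores)

    for cores in (2, 4, 8, 64):
        print(f"{cores} cores -> {amdahl_speedup(0.9, cores):.2f}x")
    # 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 64 -> 8.77x: if 10% of the work stays
    # serial, even 64 cores gives less than a 9x speedup.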

I think the real reason software may be more buggy is that good quality software takes too long to develop. There is a big commercial advantage in being first to market.

Constant updates over the web also mean that you don't need to test so much before shipping. You can always send a patch later. This was different when software was shipped on disks.

philwelch
I’m actually looking forward to the end of Moore’s Law. Instead of keeping up with a rapidly shifting landscape of new and creative ways of wasting CPU cycles, we can maybe build things that have a chance of lasting a hundred years.
pixl97
Even after Moore's law, we are going to have a shifting landscape of security changes in processors, an area that has been neglected for some time.
lalalandland
The general-purpose nature of computers is the reason for the complexity. We use the same systems for highly secure, critical business transactions as for high-performance simulation, play and fun. The convenience of not having to switch systems when doing different tasks adds a lot of complexity. Special-purpose hardware, by its nature of not being general, can be much simpler and omit a lot of the security machinery and complexity. But it's much less convenient and much less flexible.
std_throwaway
At my workplace, when I ask why some process parameters are set the way they are, it usually leads to a dead end where the people who knew are long gone and those who should know don't know the essentials. Everything is kind of interconnected and errors show up months later, so you can't really change anything on any machine until the machine as a whole breaks and needs to be replaced. Then you try to get it to work somehow, and those parameters are then set forever.
cryptica
I think it's because companies always try to commoditize software developers but it doesn't work. You can't replace a good software developer with 10 mediocre ones plus thousands of unit tests. The only way to become a good software developer is with experience.
adverbly
I remember seeing the FoundationDB distributed systems testing video for the first time and being blown away by what it takes to build robust software. Worth a view if you haven't seen it. https://www.youtube.com/watch?v=4fFDFbi3toc

Would love to see more things in this direction, but I agree that the market doesn't want it. Most users will gladly accept an infrequent bug in exchange for an earlier release or a lower-cost version of a product.
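This is not FoundationDB's actual framework, but a toy sketch of the core idea from that video: seeded, deterministic simulation with fault injection, so any failing run can be replayed exactly from its seed:

    # Toy deterministic simulation: the same seed always produces the same
    # sequence of operations and injected "crashes", so a failing run can be
    # replayed (and debugged) exactly.
    import random

    def run_simulation(seed: int, ops: int = 1000):
        rng = random.Random(seed)
        store, log = {}, []
        for _ in range(ops):
            key = rng.randrange(10)
            if rng.random() < 0.05:          # injected fault
                store.pop(key, None)
                log.append(("crash", key))
            else:
                value = rng.randrange(100)
                store[key] = value
                log.append(("put", key, value))
        return store, log

    # Determinism is what makes the technique practical: replay any seed that fails.
    assert run_simulation(42) == run_simulation(42)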

new4thaccount
Reducing all this complexity is partly why I'm hoping the Red Language project can succeed where Rebol failed.

Of course you can't do everything, but a good full-stack language could cover perhaps 80% of software needs using well-written DSLs. The simple fact that we have so many languages targeting the same thing is wasteful, duplicative effort: Java, C#, Kotlin, Scala, Clojure, F#, etc. for business apps; Python, Matlab, Julia, R and Fortran for data science and scientific programming; and systems languages like C, C++, Ada and Rust.

On one hand it is good to have purpose-built languages, but on the other it puts up a big barrier to entry.

Note that I'm advocating for abstractions, but far fewer languages. Yes, abstractions add complexity, but they actually make the code more readable. I shudder to think of humanity having to maintain and support ever-increasing layers of software.

d_burfoot
I absolutely agree that we actually need fewer languages. The languages we have today really are good enough for the vast majority of programming work. To the extent that they fall short, the solution is to either improve the language or to build good libraries for it.
benc666
Many programming languages start out as 'experiments' by their creators, to combine or extend the capabilities and attributes of some prior ones.

That seems like a very healthy evolutionary approach to me.

cheschire
This whole presentation feels like a great candidate for a meta analysis on the effects of recency bias on analysis.
earenndil
I challenge his assertion around 32:50 that something is lost. I've done assembly programming. C programming. I might venture to say I'm pretty good at it. I even dabbled a bit in baremetal programming, was going to make my own OS, but lost interest. Wanna know why? Take a look at this[1] article. Yep. If, on x86, you want to know what memory you're allowed to access (how much of it and where it is), there is literally no good or standard way to do that. "Well," you (or jon blow) might say, "just use grub (or another multiboot bootloader), it'll give you the memory map." But wait, wasn't that what we were trying to avoid? If you do this, you'll say "I'm smart, I'm sparing myself the effort," but really there is a loss of capability: you don't really know where these BIOS calls are going, or what the inner workings of this bootloader are, and something is lost there.

This is a bit of a contrived and exaggerated example, but it serves to prove my point, which is that these things really do scale linearly: you give up the same amount you get back by going up a layer of abstraction (in understanding/productivity; not talking about performance yet). Low-level programming languages aren't more productive than high-level programming languages. Low-level programmers are more productive than high-level ones because it takes more discipline to get good at low-level programming, so the ones that make it in low-level programming are likely to be more skilled or, at least, to have acquired more skill. Think about the story of mel[2]. Does anyone honestly think, with any kind of conviction, that mel would have been less productive had he programmed in python and not thought about how machine instructions would be loaded?

As I've mentioned, I have done, and gotten reasonably good at, low-level programming, and yet my current favourite language is perl6. A language that is about as far from the cpu as it gets, on a par with javascript or haskell. Why? Because nothing is lost. Nothing is lost, and quite a lot is gained. There are things I can do with perl6 that I cannot do with c—but, of course, the reverse is also true. And I think that jon blow's perspective is rather coloured by his profession—game development—where performance is important and it really does pay, sometimes, to think about how your variables are going in memory. He has had, I'm sure, negative interactions with proponents of dynamic languages, because he sees their arguments as (maybe that's what their arguments are, I don't know) "c is useless, javascript is good enough for everything." Maybe the people who truly think that have lost something, but I do not think that mel, or jon blow, or I, would lose much by using perl6 instead of c where perl6 is sufficient.

1: https://wiki.osdev.org/Detecting_Memory_(x86)

2: http://www.catb.org/~esr/jargon/html/story-of-mel.html

NeveHanter
About the first one, that's another problem as well: the BIOS doesn't have a standard protocol. If there were one, there would be one standardized way to detect the memory layout.

About the second one, performance should be crucial everywhere: if some application eats all the resources, then I can't have other applications working in the background doing their stuff. That's the problem with e.g. "modern" communication apps (I'm talking about you, Slack) where my four-core CPU is on its knees doing simple things like switching the team or even the channel, not to mention starting the app itself. Another one: when I'm on a Google Meet chat, my browser eats 80% of the CPU and I can't do anything reliably in that time; running anything makes the chat lose audio, lag a lot, etc.

Going back some years, I was able to run Skype, AQQ, an IDE, the Chrome browser and Winamp at the same time on an archaic (by today's standards) i3-350M with 4 GiB of RAM.

earenndil
> That's the problem with e.g. "modern" communication apps (I'm talking about you, Slack) where my four-core CPU is on its knees doing simple things like switching the team or even the channel, not to mention starting the app itself. Another one: when I'm on a Google Meet chat, my browser eats 80% of the CPU and I can't do anything reliably in that time; running anything makes the chat lose audio, lag a lot, etc.

Again, this is a problem with program design, not programming language. It is very possible to make good, performant programs in fancy dynamic languages, and awful, leaky, slow ones in 'high-performance' compiled languages. The impact of the language itself is really not as high as it's made out to be. Yes, python is 100x slower than c at multiplying numbers, but so what? Your program doesn't spend most of its time multiplying numbers. If you design a python program in a non-stupid way, for an application like a chat app, the performance hit compared to c is negligible.
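A small illustration of that point: both versions below are plain Python, and the gap between them (a data-structure choice) dwarfs any Python-vs-C constant factor:

    import time

    def common_items_list(a, b):
        return [x for x in a if x in b]       # O(len(a) * len(b)) list scans

    def common_items_set(a, b):
        b_set = set(b)
        return [x for x in a if x in b_set]   # O(len(a) + len(b)) hash lookups

    a = list(range(10_000))
    b = list(range(5_000, 15_000))

    t0 = time.perf_counter(); common_items_list(a, b); t1 = time.perf_counter()
    common_items_set(a, b); t2 = time.perf_counter()
    print(f"list lookup: {t1 - t0:.3f}s, set lookup: {t2 - t1:.5f}s")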

Ace17
> About the first one, that's another problem as well: the BIOS doesn't have a standard protocol. If there were one, there would be one standardized way to detect the memory layout.

It's the same for almost all hardware: graphics cards, sound cards - all use their own register map (which, to make matters worse, is often kept secret). Even USB stuff, which is supposed to be already homogenized (i.e. USB classes), often requires sending vendor-specific "quirk" strings to get the hardware working (e.g. the list of 'quirks' in the snd-usbmidi Linux kernel module).

The hardware diversity isn't a detail here. It's the root of the "too many abstractions" problem, and I don't think this is something you can avoid. This is why device drivers exist. This is why operating systems try hard to impose APIs on device drivers (ALSA, Direct3D, etc.) so Firefox, MS Word, Half-Life ... can run on future hardware.

This is one reason why the abstraction layers exist (I'm not even talking about memory protection / safety here): because you don't want to re-release your app every time some vendor makes a new sound card, graphics card, MIDI interface, wi-fi chip, etc. And you don't want to code the support for all this hardware yourself.

raverbashing
> Take a look at this[1] article. Yep. If, on x86, you want to know what memory you're allowed to access (how much of it and where it is), there is literally no good or standard way to do that. "Well," you (or jon blow) might say, "just use grub (or another multiboot bootloader), it'll give you the memory map." But wait, wasn't that what we were trying to avoid?

Yeah I think that's part of what he was trying to say

Backwards compatibility and overengineered solutions like SMM and ACPI, then UEFI with a confusing standard and an even more confusing landscape where most manufacturers will just write whatever makes Windows XP boot and ship it.

potrarch
There is a relationship between the quantity of functionality and bugginess. Even with the most demanding testing, bugs will remain. The question is, as more and more software permeates our lives, will the accumulation of unfixable bugs ultimately overwhelm us? Can we build an AI that can clean enough of the bugs out of all of our software, including its own, for our civilization to survive?
mike00632
It seems like Jonathan Blow completely forgot about the blue screen of death and how common it was.
tomovo
It was common, but 99% of it was badly written third-party drivers. I'd say Windows 2000 itself, for instance, was pretty solid.
kzrdude
Now, what would be amazing would be if we had found the Antikythera mechanism so intact that it could be reconstructed perfectly. And then we'd check everything it could do, and what kinds of drawbacks or errors the construction had!
earenndil
People don't care about five 9s anymore? It's not as important as it was (I assume—I wasn't really around at the time), but cloud providers definitely advertise their number of 9s.
throwaway2048
Good thing they have regular service outages that mysteriously don't qualify for the SLA (or show up on dashboards).
PorterDuff
I had a few cocktails and thought about a few points made in the video.

It seems to me that there are two different notions here that are being conflated:

. A rotting of knowledge over time.

. A variant of Moore's Law: in this case, the idea that technology, in a particular area, has decreasing value on the margin.

It's kind of like the notions you see in cliodynamics, that there are a few interacting sine waves (or some other function) in mass human behavior.

I suppose that the main concept of importance is how it all might mess with your own personal situation. Personally, I think that the West is in decline, but that doesn't have a whole lot to do with the quality of software on internet websites.

alexashka
There is no preventing a collapse. Try and enjoy the ride :)
soup10
bravo, great talk Jon. but real talk can you make another Braid already
fallingfrog
Not to pick on Microsoft specifically too much, but I remember seeing the hello world program for windows 3.1 for the first time and thinking, “this is not looking good.” And I was right.
lolc
Wow he has a rosy picture of the past. I don't see where he gets the five nines from. He doesn't even quote anybody on it. Most of the examples he gives would have been zero nines back in the day because they were not available at all!

Wikipedia for example has one nine availability in my life. Because when I sleep my phone is still on.
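For reference in the nines discussion, a quick sketch of how much downtime per year each level of availability actually allows:

    # Downtime budget per year for "N nines" of availability.
    minutes_per_year = 365.25 * 24 * 60
    for nines in range(1, 6):
        availability = 1 - 10 ** -nines
        downtime_minutes = minutes_per_year * 10 ** -nines
        print(f"{nines} nine(s): {availability:.5f} -> {downtime_minutes:,.1f} minutes/year down")
    # 1 nine allows ~36.5 days of downtime a year; 5 nines allows ~5.3 minutes.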

Illniyar
At what point in the past were our programs stable, robust and just worked? Perhaps it was before my time, but DOS and windows (3.11, 95) would crash, constantly. Blue screens of death, infinite loops that would just freeze my computer, memory leaks that would cause my computer to stop working after a day.

I now expect my computer to stay on for months without issues. I expect to be able to put it to sleep and open it in the same state it was in. I expect that if a website or a program errors, my OS simply shrugs and closes it. I expect my OS to be robust enough that if I insert a USB drive or download a file, I'm not playing Russian roulette that it might contain a virus that would destroy my computer.

In the past I would shut down my computer at the end of every day because otherwise it would simply crash some time in the night. I would run defragmentation at least once a month. Memory errors and disk errors were common, and the OS had no idea how to overcome them. Crashes were so common, you just shrugged and learned to save often.

NeedMoreTea
Sometime in the early or mid 90s I had a FreeBSD box on 2.something - installed because I disliked unreliable, flaky Windows so much - that passed a year of uptime. It was my daily driver during that time, and often doing stuff while I was out at work or sleeping. CD burning, which had been simply bulletproof on the Amiga, became so incredibly delicate and flaky on Windows that it was one of the pushes to go BSD instead. I mostly kept on using the Amiga as the most reliable option for that.

The early 90s Suns and SGIs didn't crash much either - though in a dev shop, sure, we could push them to panic from time to time. The bigger iron just ran indefinitely, often until an OS upgrade. :)

Now obviously this talk is game related, but even my previous Amigas were more reliable for uptime - often passing into months - than DOS and Windows, if you stayed within Workbench. The mostly undeserved reputation of the Amiga for constant crashing came from games hitting the hardware directly, and from those guru messages instead of the silent freeze or pretty random colours that other platforms gave.

All were online, though not much web yet - mainly ftp, newsgroups and dial up BBS's.

AnonymousPlanet
You are aware that your cheap little box running DOS or Windows was not the epitome of computing back then? It was a wobbly, underdeveloped side branch dominated by amateurs doing amateur things on operating systems made by amateurs. Professional computing was done on UNIX and VAX, and both were rock solid in comparison.
JPLeRouzic
In my experience, DOS, but also every Windows from 1.2 to 95, never crashed on me. For me it started with Windows 98 (heavily pirated by people), and the horror story was Windows Me. My wife told me to do something about it, and as there were copies of Windows 2000 provided for free in magazines, I used one. Windows 2000 was such a relief after Windows Me! But there was no USB and other niceties in W2K.

The nightmare started again with Windows XP; then I switched to Ubuntu, which was reminiscent of Windows 2000.

A funny thing, and proof of the solid interfaces in Windows 3.1/3.11 and Windows, is that people were making their own versions by removing/adding components and sometimes even changing their contents with hexadecimal editors.

There is still a fandom for old Windows versions out there.

And you could catch viruses literally by hand by looking in kernel files, checking their size, and checking what was loaded in memory.

I remember that time with great pleasure.

moystard
Windows 95 itself was relatively stable, but the drivers generally were not. Your experience varied depending on the hardware you were running and the stability of the associated drivers.
vardump
Windows 95 stable? Never ran out of GDI handles and had Win95/98 crashing? Yeah right...
rvanmil
> learned to save often

I still have the habit of constantly hitting cmd-s everywhere, it’s a reflex I’ll probably never unlearn. I also cringe when I see people working on a bunch of files which have not been saved for a while or, god forbid, not at all. Completely irrational but it’s what I’ve been programmed to do for years ;)

ken
I once read that "Graphing Calculator" on the Mac was so reliable they'd run it in a test loop, for days on end, as a hardware check.

Today, I can reliably crash my Mac (10.13.x) by switching Spaces twice in a row quickly.

slacka
The problems you experienced were not with DOS, but with Windows 3.11 / 95. DOS itself was one of the most stable platforms I've ever worked with. I personally worked on a NetWare server running on DOS that had an uptime of over 20 years. DOS's stability was not an outlier. Many of the UNIX machines I worked on that predated DOS had uptimes measured in months and years.

The only reason why Windows was so buggy for you is that you were using the home editions. At the same time that you were experiencing blue screens in Win 9x, my NT workstation was rock solid, without any of the issues you described.

ken
I hear this claim occasionally but it doesn't match my experience. The very first time I used Windows NT 4 (probably 1997 or 1998), I couldn't figure out how to log out, so I chose Start -> Help to look it up. Bluescreen.

In the subsequent months/years with NT 4, the situation did not improve. It was a sad day when they replaced the HP/UX section of the lab with more NT machines. They were faster but they crashed a lot. It really took until Vista before NT was reliable.

slacka
> windows (3.11, 95) would crash,

>> At the same time that you were experiencing blue screens

>>> first time I used Windows NT 4

For the 3.11/95 period, I was talking about NT 3.51 not 4.

Yes, NT 4 had well-known issues with poor-quality graphics drivers. For this reason many of us stayed with 3.51 until the graphics driver problems were ironed out.

You can cherry-pick unstable OSes from any time period. But there is nothing special about today's OSes or programs. I've seen DOS and multiple forms of UNIX in the 80s and 90s that were just as stable as today's Win 10 or OS X.

jacques_chester
Windows NT was originally a microkernel architecture. NT4 moved a bunch of code back into kernel space for performance reasons.

Most notably: graphics and printer drivers, which are not typically written to the highest standard.

Big iron vendors don't really have that problem, since they typically control their hardware as well. Microsoft had to rely on component vendors to provide driver software and couldn't plausibly test all permutations under all conditions (even though they test very, very many).

scotchmi_st
This is exactly how I felt watching the video. It seems just like the same old 'wasn't everything better in the old days' nonsense. Not to mention the fact that in those days, a computer was something that you had in one room in your house, and wasn't often connected to other computers. Nowadays, computers are everywhere. I personally would argue that the rate of increase in safety in code hasn't kept pace with the rate of code being put in things, but that's a whole different kettle of fish.
kgwxd
He's a game developer. Games used to be released on hardware, with no possibility of being patched.
starchild_3001
Well said! Windows plus the associated app software, drivers and viruses used to be the biggest source of problems back then. Today I have none of that with Mac, iOS or Android. Going without a reset for days or months is normal and expected.
sdfjkl
> At what point in the past were our programs stable, robust and just worked?

DOS was rock solid, at least around the era of DR-DOS. DESQView 386 was absolutely stable too. The BBS software I ran on them in those days was a wobbly piece of shit though.

I also recall Borland's Turbo Pascal compiler and the accompanying text-mode IDE being ultra reliable.

After DOS I used OS/2, which was also extremely stable, although it suffered from limited hardware and software availability.

Mac OS X used to be rock solid too, in the heyday of the PowerBooks and the earlier Intel MacBooks. Every now and then there were hardware design flaws though, and now the quality of both software and hardware seems to have taken a tragic turn for the worse.

You still do play Russian roulette whenever you plug a USB device into your computer, see "USB Rubber Ducky".

siberianbear
> DOS was rock solid

Boy, that's not how I remember DOS. I remember playing with all kinds of variants of driver load order in config.sys and passing obscure arguments into himem.sys to avoid odd hardware conflicts and crashes.

civility
I always wondered what the world would be like if Microsoft had just made a 32 bit DOS instead of going down the WinNT/95 route. Most of the headache in config.sys and friends was because you were working around the 16 bit address space. However, there was something really nice about owning your entire machine and only needing command.com for the "operating system". Compare this to full operating systems which consume gigabytes of disk and memory.

This hypothetical 32 bit DOS could've had memory protection and multitasking too. Obviously device drivers would add complexity, but it doesn't need to be as complex as it's become.

nitrogen
There was a company that made a multi-user 32-bit DOS called TSX-32.

https://en.m.wikipedia.org/wiki/TSX-32

nickpsecurity
Probably FreeDOS with third-party GUI's or desktop distros like N2K-OS:

http://wiki.freedos.org/wiki/index.php/Main_Page

https://www.bttr-software.de/freesoft/desktops.htm

https://sourceforge.net/projects/n2kos/

resoluteteeth
DOS isn't even an operating system in the modern sense. Once you add preemptive multitasking and memory protection, you're simply going to end up with a normal modern operating system kernel again.

On the other hand, the stuff that takes "gigabytes of disk and memory" isn't even part of the operating system kernel, so there's no need to start from DOS to get rid of that stuff. It's possible to run linux from a few megabytes of ram.

civility
> Once you add preemptive multitasking and memory protection, you're simply going to end up with a normal modern operating system kernel again.

You're missing the point. There is no single-file operating system for desktop users (maybe VxWorks or some other embedded OS falls into that category, but those aren't really for desktops). Modern operating systems sprawl all over the disk. Memory protection and multitasking are not large features, and CS undergrads all over the world routinely implement them in less than a semester.

> It's possible to run linux from a few megabytes of ram.

A few megs of RAM and a directory in /etc filled with startup stuff and config files. Clearly you don't appreciate it, but there was something really nice about being in the root directory and seeing only command.com and config.sys. The entire rest of the machine was yours to set up however you liked. The things most people hated about DOS really had more to do with the 16-bit address space and segmented architecture.

zielmicha
OpenWRT runs on routers with 8MB of flash and 64MB of RAM, and it includes the whole wifi stack, routing and a browser-based GUI.

https://openwrt.org/supported_devices/432_warning

raverbashing
Files are cheap now, and even a Linux initrd has a lot of files inside it.

DOS had other files besides those, but they were hidden https://en.wikipedia.org/wiki/List_of_DOS_system_files

You could theoretically make a Linux kernel with all the drivers you need linked in, plus a basic FS stored as a drive image, so you would have fewer files on the image.

pjc50
DOS was a configuration nightmare; you could run games that required up to about 600kb of memory, but only with a ludicrous amount of hacking that was harder to figure out in the pre-internet days.

The "rubber ducky" attack is, of course, also possible with PS/2, XT, and even ADB keyboards, because none of them were authenticated.

duncanawoods
I remember updating my autoexec.bat as a preteen but I wouldn't call it ludicrous hacking. I don't remember where I got the instructions from but I think they were just in the readmes or error messages of games, no internet required.

It pales into insignificance compared to what I have to do to keep a desktop Linux box behaving normally today. Every few weeks I'm having to paste a wodge of lines into a range of config files to fix whatever driver oddity or system resource issue has caused something to break this time. That is pretty unimaginable without the internet.

simonh
My recollection is that Tiger and Leopard were a bit ropey, which is why Snow Leopard was considered primarily a performance and stability update.
messe
> DOS was rock solid

DOS was rock solid because it did nothing. Many programs, particularly games and anything that did networking, didn't even use DOS interfaces—they bypassed them entirely and worked with either the BIOS or hardware directly. There was no memory protection, no multitasking, and, on a higher level, no permissions nor sandboxing. So while maybe it "just worked", I wouldn't call it robust.

ScottFree
Did you not watch the video? Jon covers this. An OS that does nothing to get in your way is vastly preferable to an OS that does things but does them poorly.
clord
You’re complaining that DOS lacked features and protections. But zero cost abstractions like DOS can still be robust (in the sense of being reliable and solid.) yes it requires more trust, But there is no guarantee modern OSes can run arbitrary zero-trust binaries.
kazinator
DOS did nothing, and so people extended it with TSRs: "terminate and stay resident" programs. These were a nightmare. They had "conflicts": users had to experiment with the order in which they were loaded to make them like each other. Some of them were interrupt-driven and yet needed to call into DOS, which would be bad if the interrupt had gone off in the middle of DOS. So they tried to guess whether that was the case.
ynnn
If you run a buggy program on a modern OS, it won't crash the system or impact other processes. If you run a buggy program on DOS, it will write to random physical addresses, probably clobbering the state of other processes and of DOS.

Modern OSes can't necessarily run arbitrary binaries safely, but they can pretty much run arbitrary non-adversarial binaries - problematic binaries have to be intentionally written to exploit the system (as opposed to DOS, where non-problematic binaries had to be intentionally written not to break the system).

It's a dramatic improvement.
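A rough sketch of what that isolation means in practice (the address and exit code below are illustrative, POSIX-flavoured assumptions): the wild write kills only the child process, while the parent carries on.

    # The child deliberately writes through a bad pointer. On a modern OS the MMU
    # traps it and only that process dies; under DOS the same store would silently
    # overwrite whatever happened to live at that address.
    import subprocess, sys

    wild_write = "import ctypes; ctypes.memset(0x10, 0, 1)"  # store to an unmapped address
    result = subprocess.run([sys.executable, "-c", wild_write])
    print("child exit code:", result.returncode)  # e.g. -11 (SIGSEGV) on Linux
    print("parent process unaffected")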

messe
> Modern OSes can't necessarily run arbitrary binaries safely, but they can pretty much run arbitrary non-adversarial binaries

Mostly. Modern OSes strive to run adversarial binaries, but whether they can do it safely or not is still in question, IMO.

ScottFree
> It's a dramatic improvement.

No. It's not. That's the whole point of the video. It's much, much easier to write a non-buggy program for DOS than it is for any of the modern OS's. That's because modern OS's are themselves programs that are extremely buggy and that nobody understands anymore.

mmphosis
It doesn't matter what OS I run. My "modern", and apparently buggy, CPU runs arbitrary systems that I know very little about and have little to no control over.

Since 2008, it's been a dramatic departure.
