HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
"The Mess We're In" by Joe Armstrong

Strange Loop Conference · YouTube · 357 HN points · 50 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Strange Loop Conference's video "The Mess We're In" by Joe Armstrong.
YouTube Summary
Joe Armstrong is one of the inventors of Erlang. While at the Ericsson computer science lab in 1986, he was part of the team that designed and implemented the first version of Erlang. He has written several Erlang books, including Programming Erlang: Software for a Concurrent World. Joe has a PhD in computer science from the Royal Institute of Technology in Stockholm, Sweden.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I personally like "The Mess We're In" https://youtu.be/lKXe3HUG2l4
jeffreygoesto
Definitely! I also like "How we program multicores" [0]; it made "iff you want the same guarantees Erlang gives you, you'll be in the same performance ballpark in any language, yes, C++ also" click for me.

Joe could explain the basic ideas and where they came from so concisely and humbly [1]. You can probably just binge-watch this whole list, one other gem being "The forgotten ideas in computer science" [2]...

[0] https://youtu.be/bo5WL5IQAd0 [1] https://youtu.be/i9Kf12NMPWE [2] https://youtu.be/-I_jE0l7sYQ?list=PLvL2NEhYV4ZsIjT55t-kxylCU...

Oct 30, 2022 · 4 points, 1 comments · submitted by tosh
mintaka5
wow. and here i am thinking i was alone in all this mess
No, it's physically impossible. You have causal consistency across services, not within services.

https://youtu.be/lKXe3HUG2l4?t=1438

blowski
Codebase A writes data to datastore. Codebase B mutates it. Codebase A loads it back in, assuming it’s still the same.

Boom. You’ve mutated codebase A’s memory.

staticassertion
I would hope it's obvious that you haven't mutated A's memory, but I'll just suggest you watch the talk.
blowski
For all intents and purposes, you’ve mutated the memory. Sure, you haven’t mutated by reaching directly to the RAM. But the effect is still the same.
nicoburns
How is that any different to calling a function on a class? That's technically not class A modifying class B's memory either. B modifies its own memory in response to a message (function parameters) from A. The message going over a network doesn't make that fundamentally different.
staticassertion
Function parameters aren't messages. They're shared state. I'd suggest watching the talk and reading about message passing systems in general.
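A tiny JavaScript illustration of the distinction being drawn here (the callee function and shared object are made up for this sketch): a function that receives an object reference can mutate the caller's state in place, which is what makes ordinary parameters shared state rather than messages.

    // Illustrative only: the "parameter" is a reference into the caller's memory.
    function callee(obj) {
      obj.count += 1;            // reaches into the caller's object and mutates it
    }
    const shared = { count: 0 };
    callee(shared);
    console.log(shared.count);   // 1 -- shared state, not a copied message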
nicoburns
I suppose if you mutate them. But we have a linter in place, and a CI system that enforces it, which prevents that.
staticassertion
There are many solutions, certainly. A network is one option, which I personally prefer, but as I said elsewhere it's a "choose the right tool for the job" kind of situation.
blowski
“Watch this famous video” is not a great response. Many of us watched it years ago and seem to have interpreted it rather differently.
staticassertion
Then you interpreted it incorrectly. I'm not inclined to teach you via HN about a subject that's well documented by resources I've already linked.
I didn't say it removes state. I said it split the state up and isolated it. That's critically important - you physically can not mutate state across a network, you have to pass messages from one system to the other over a boundary, either via some protocol like TCP or via intermediary systems like message brokers.

Joe Armstrong talks about this better than I'm going to: https://youtu.be/lKXe3HUG2l4?t=1438

That timestamp is rough, I just found a related section of the talk.

> And the network-defined state is a hell of a lot harder to trace and debug.

There's no such thing as network-defined state. I assume you're saying that it's harder to debug bugs that span systems, which is true, but not interesting since that's fundamental to concurrent systems and not to microservices.

xorcist
I think you have a very narrow idea about what "mutating state" really means. You seem to talk about DMA access only. But you can manipulate the state of an application by writing to a shared data store, by calling an API, and countless other ways. It is really more of a concept for us humans to define where an application begins and ends.

Let's take an example. If we have two services that want to keep the full name of a logged-in user for some reason, that piece of state can be said to be shared between the applications. Should one service want to change that piece of data (perhaps we had it wrong and the user wanted to set it right), the service must now mutate the shared state. It does not matter whether it is done by evicting a shared cache or by writing the updated data to the service directly, we still speak of a shared state that is updated.

Now we can stipulate that the more of these things we have, the more coupled two pieces of software are, which generally makes reasoning about the system harder. It is not as black and white as one type of coupling being considered acceptable and the other not, but some types are easier to reason about than others. Joe really thought hard about these things and it really shows in the software he wrote.

staticassertion
We all share state in that we all exist within the same universe. But the universe has laws of causality, and Joe advocated that software should always maintain causal consistency.

A database is not needed for your example. You could replace it with an actor holding onto its own memory. But all mutations to that actor, which the other actors hold references to via their mailbox, are causally consistent and observable.

That is the premise of the talk I linked elsewhere.
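A minimal in-process sketch of the mailbox idea described above, assuming a modern JavaScript runtime with structuredClone and queueMicrotask; spawn, send, and the rename message are illustrative names, not Erlang's API or anything from the talk. The point is that callers can only enqueue copies of messages, never touch the actor's state directly, and mutations happen one message at a time, in order.

    // Toy actor: private state, a mailbox, and copy-only message passing.
    function spawn(handler, initialState) {
      let state = initialState;               // private to this actor
      const mailbox = [];
      let scheduled = false;
      function drain() {
        scheduled = false;
        while (mailbox.length) {
          state = handler(state, mailbox.shift());   // applied in arrival order
        }
      }
      return {
        send(msg) {
          mailbox.push(structuredClone(msg)); // a copy crosses the boundary, never a shared reference
          if (!scheduled) {
            scheduled = true;
            queueMicrotask(drain);
          }
        },
      };
    }

    // Usage, echoing the full-name example above.
    const user = spawn(
      (state, msg) => msg.type === 'rename' ? { ...state, fullName: msg.fullName } : state,
      { fullName: 'Joe Armstrong' }
    );
    user.send({ type: 'rename', fullName: 'Robert Virding' });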

Apr 18, 2022 · 1 points, 0 comments · submitted by tosh
Feb 07, 2022 · tosh on Vue 3 as the New Default
Same here, this is how I found out about Vue 3

related: The mess we're in (a talk by Joe Armstrong)

https://www.youtube.com/watch?v=lKXe3HUG2l4

(yes, I know, I should have vendored the dependency in the first place)

Feb 07, 2022 · 1 points, 0 comments · submitted by tosh
For those that enjoyed the video you linked, Joe Armstrong's "The Mess We're In" talk could be interesting as well https://www.youtube.com/watch?v=lKXe3HUG2l4
amosj
I second this; this video is amazing. It had a big impact on my thinking the first time I saw it.
See talk "The Mess We're In" by Joe Armstrong (Erlang co-creator) https://www.youtube.com/watch?v=lKXe3HUG2l4

The tl;dr version is that there are 10^80 atoms in the known universe, while a computer with 1MB of memory can be in 2^1,000,000 or about 10^300,000 possible states. It's worse with a thousand times more memory. The easiest, quickest, perhaps the only, way to get it into a known state is to reboot it.
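A back-of-envelope check of those figures, treating 1 MB as roughly 10^6 bits the way the comment does:

    // log10(2^1,000,000) = 1,000,000 * log10(2)
    const bits = 1_000_000;
    const digits = bits * Math.log10(2);   // number of decimal digits in 2^bits
    console.log(Math.round(digits));       // ~301030, i.e. 2^1,000,000 is roughly 10^300,000
    // ...versus only ~10^80 atoms in the known universe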

(Amusing introduction: "I took Friday off to be at home, because I can't get any work done at work, too many interruptions")

(I like Joe Armstrong's sense of humour here, dry, understated: "A prompt came up saying 'there's a new version of Open Office' and I thought 'That's good, it'll be better.'")

A Luddite is someone who is opposed to new technology, which doesn't apply here, as the sentiment isn't about being opposed to the new tech but about the new tech not being taken advantage of.

The software we build today is "vastly" more powerful in the sense that, strictly speaking, you can do more stuff, but it is way less powerful - and often broken - compared to what it could be, considering what you can see was possible on older hardware and what actually seems to get done on modern hardware.

Also it isn't really something that happens today, here is a nice (and funny) talk from Joe Armstrong on the topic from almost 8 years ago[0]. Though this sort of sentiment goes way back, e.g. Wirth's classic "Plea for lean software" article from 1995[1].

(Joe's talk is more about the broken part, and IMO he misidentifies the issue as being about trying to make things fast, when the real issue - which he also mentions - is that pretty much nobody knows what is really going on with the system itself)

[0] https://www.youtube.com/watch?v=lKXe3HUG2l4

[1] https://cr.yp.to/bib/1995/wirth.pdf

cageface
This is because people don't want to pay the cost of software carefully developed to take maximum advantage of modern hardware. In specific niches like audio production, where this is desired, prices run into hundreds of dollars for a single plugin to cover the development costs.
hedgewitch
And even then, quite a lot of that audio production software is very, very inefficient, primarily due to poor GUI development. There are a few popular choices of software that are very well-known for being CPU hogs, even relative to more complex software.
FleaFlicker99
In particular, the business often doesn't want to take on the goal of improved performance when good enough will suffice. Which can often make sense when you factor in increased development costs, reduced flexibility/maintainability, and reduced ability to recruit people with the skill set to work on such things.

Then again, performance is often a feature in itself. In some cases it can open whole new areas of potential business. Oftentimes it isn't even particularly hard to achieve; it just requires decent engineering practices.

Unfortunately good engineering practices can be hard to find/hire for, especially among a development community/culture that hasn't had to bother caring about performance for a long time.

Karrot_Kream
It's always this way. Just like most people are happy with Ikea furniture, so most people are happy with the equivalent of "Ikea software". It's good enough. For folks who _are_ willing to pay, you can buy everything from low latency audio gear/software to dedicated Internet bandwidth to high reliability SBCs.
Thanks for the paper! I'm looking forward to spending my weekend with it.

Normally I'm a huge fan of a philosophy of backwards compatibility. Clojure has shown that it is possible with enough dedication. But I feel like it's a liability with the lineage you describe for Rust: the problem space was just too unexplored and vast, and the language has grown too large.

Joe Armstrong has a joke in one of his keynotes [1], that after you've written the first version of something you should throw it away completely, because you didn't understand the problem at all when you wrote your solution, and now you understand it a bit better.

A bit like how the web would be a much less ghastly ecosystem if we hadn't abandoned versioning for living documents which force every behaviour ever to interact, and god forbid be compatible, with one another, in a combinatorial hairball.

Modern browsers would be less unwieldy if they just shipped 30 different rendering engines, one for each major revision, than if they shipped a blob that purportedly is able to still render everyone's dog's website from the '90s (which is not true of course, it's just that nobody from the '90s is here to complain).

It's a bit ironic that Rust fell into the same trap that caused the complexity that inspired its inception in the first place.

Maybe once the major "features" for safe system programming were identified a new language (revision) should have started with only those features at its core. I fear that if such a language were to crop up now, Rust would just eat its children.

[1]: https://www.youtube.com/watch?v=lKXe3HUG2l4

steveklabnik
Any time :)

We didn't have backwards compatibility in those days. Radical change happened. I personally think of Rust as being three or four different languages before 1.0 happened. The problem is the exact same as others have said; doing that degree of breaking change all at once is effectively starting a new language project, which means that you also throw away all of the community and things that they've built so far.

Also, Rust does have a kind of "core" semantics, that is, MIR. You can't code in it directly, but it is a rich target for verification and analysis tools.

j-pb
> throw away all of the community and things that they've built so far.

The sad truth is probably that Rust's complexity is just a mirror of the entire industry.

If we, as a profession, placed a lot of value onto tiny, simple, yet well thought out (and therefore powerful) things, rewriting everything once a year wouldn't be as much of a problem.

No codebase would have more than 5 kLOC, browsers would only do layouts with the thing we build after learning from Flexbox, Grid, and Column, and Rust's language spec would fit into a white paper like Scheme's did.

At least Rust's borrow checker uses such a tiny, well-thought-out piece of code in its datafrog engine.

steveklabnik
It sounds like you'd like the work Alan Kay was doing with VPRI, though in my understanding that is over now.
j-pb
Definitely! Although I'm not sure if Morphic[1] is the way forward for UX on the web, despite it only being 10kloc. XD

[Anybody stumbling upon this comment, and wondering what Alan Kay is/was up to, this might be a good philosophical starter: https://www.youtube.com/watch?v=FvmTSpJU-Xc]

1. https://lively-kernel.org

mwcampbell
The problem with such a focus on smallness and simplicity, particularly at higher layers of the stack, is that it requires throwing out lots of people's must-have features. For me and my blind friends, that feature is a screen reader. For someone else, it's right-to-left or bidirectional text rendering (hello TrojanSource). For someone else, it's complex writing systems or support for Unicode characters outside the Basic Multilingual Plane. And so on. I doubt that any UI framework that fits in 5000 lines of code or less would support all these things. Sure, some of the complexity in the industry can be eliminated, but some is truly necessary. One good thing about Rust mirroring the complexity of the industry is that at least one Rust GUI toolkit, Druid, is taking accessibility and internationalization seriously. (Disclosure: the team at Google that's funding a lot of the work on Druid is also funding me to work on accessibility.)
Oct 19, 2021 · 3 points, 0 comments · submitted by spenczar5
Joe Armstrong in his talk "The Mess We're In"[1]:

> Robert Virding, who developed Erlang with me, was famed for his comment... I think he only... Singular... The entire stuff he wrote had one comment. In the middle of a pattern-matching compiler was a single line that said "And now for the tricky bit"

[1]: <https://youtu.be/lKXe3HUG2l4?t=645>

Jun 10, 2021 · 2 points, 0 comments · submitted by fagnerbrack
Not necessarily: https://www.youtube.com/watch?v=lKXe3HUG2l4

Doesn't help that node really is a shitshow.

fagnerbrack
Amazing talk, do you have more of it? Please mail to hn at fagnermartins.com
May 04, 2021 · 4 points, 0 comments · submitted by tosh
The browser itself takes far longer than a day or two to learn to program even without tooling, considering that content is roughly the size of MDN; there is so much js tooling and documentation and of such wide variety that a year or two would not be enough for even a light review (roughly the content of github/js). And even that is not enough time to develop heuristics for deciding between approaches. And it doesn't begin to touch tooling, like editors, or deployment and runtime.

Here is a video of a lovely man who invented Erlang, Joe Armstrong, expressing confusion about how to build a "modern" javascript program: https://www.youtube.com/watch?v=lKXe3HUG2l4 (hilariously he was using grunt - a tool which is now entirely out of favor).

scambier
Come on, this isn't that hard if you have minimal, or even outdated, web dev knowledge. There are like 3 major frameworks (Angular, React, Vue). Spend a few hours looking at what it's like to work with each one, make your choice, and follow a tutorial. Congrats, you've made your first Single-Page App.

Of course if you've never written a line of code in your life, or if you're angry at the thought of using a dependency manager, you're going to have a hard time. And not only with the web stack.

dawnerd
In their defense it's been a year and I'm still learning React. Please don't act like it's something you understand in a few hours, because it's not. You might be able to copy-paste the example code, get something that works, and sorta see what happens, but to really understand it takes a lot of time.
azangru
Assuming that this vanilla js should be familiar to anyone who has only ever written js that interacts with the DOM api:

    document.body.innerHTML = `
      <div>
        Hello world
      </div>
    `;
the React equivalent is no more complex:

    const HelloWorld = () => (
      <div>
        Hello world
      </div>
    );

    ReactDOM.render(<HelloWorld />, document.body);
Add to this a couple of high-level concepts, i.e. that a React component re-renders if its arguments (props) or its internal state change, and you are 70-80% there. Learn the useState and the useEffect hooks (see the sketch after this comment), and you are ready to be productive. If you aren't ready for build tools yet and just want to play with the library, take Preact, which has a syntax option that does not require a build step [0].

How many hours should this take?

[0] https://preactjs.com/guide/v10/getting-started
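For readers who haven't met the hooks mentioned above, here is a minimal sketch (the Counter component is hypothetical, not from the comment): state lives in useState, and useEffect runs a side effect after render, re-running only when its dependencies change.

    import { useState, useEffect } from 'react';

    function Counter() {
      const [count, setCount] = useState(0);          // internal state

      useEffect(() => {
        document.title = `Clicked ${count} times`;    // side effect after render
      }, [count]);                                    // re-run only when count changes

      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }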

dawnerd
And yet you completely just proved my point.
MrQuincle
The problem is that React might not exist in a couple of years.

I have been programming for decades. Most of my old stuff using frameworks - say, for example, Angular or Jekyll - doesn't work anymore. If I update, I end up in a dependency hell that's even worse. It's like XSLT in XML. It will pass. :-)

azangru
> The problem is that React might not exist over a couple of years.

I agree with you in principle, but I think I disagree in details. Programming using standard web apis certainly feels more future-proof, and all the power to those who have adopted web components. But React doesn't look like it's going away within the next decade. And even if Facebook somehow implodes, and no other company steps up to support the work of the React core team, React's api has been copied by Preact, which has Google's backing; and JSX has spread even wider. Importantly, React in itself is larger than DOM — it is a reconciler that can be used for canvas/webgl (react-three-fiber), for mobile applications (react-native) and windows applications (react-native-windows). So there is a good indication that React will be with us for a while.

But again, I completely agree that many projects that get started with React don't need to have been.

> I have been programming for decades.

Two decades ago, Perl was a good choice :-) And look where it is now.

We are lucky with the rigorous backwards compatibility of HTML, CSS and JavaScript. But planning for decades may be a bit extreme. I wouldn't like to inherit a frontend project that was written two decades ago and hasn't changed since then.

abraxas
If you think one gets to know React or Angular because one can follow a "Getting Started" tutorial for a couple of hours and get it to show an example page, then you're in Dunning-Kruger territory.
scambier
I never said that a tutorial was enough to master a framework, please.

The comment I was replying to was saying:

> there is so much js tooling and documentation and of such wide variety that a year or two would not be enough for even a light review

There's no need to learn the entire ever-evolving JS ecosystem. You pick a sane starting point (a framework tutorial) to get you up and running in a few days, and from there you learn what you _need_.

abraxas
The trouble for newcomers starts at _which_ framework to hang your hat on. Unless that decision has been made for you by your employer, choosing between Vue, React, Svelte, Alpine, Angular, or some additional ankle-biters is overwhelming. What are the selection criteria? What's the payoff vs complexity cost of each? What's the support level? Is it coming into vogue (Svelte) or out of fashion (Angular)? Does it mandate additional tools/transpilers/packagers etc., and how many? How hard are they to learn? Do they play well with some other tooling that is being used (say Typescript)? Finally, how does it all get packaged as an app/website to publish?

It's a complicated mess and even seasoned JS developers struggle to answer the above questions. Oftentimes the justification for choosing a framework is that they had read about it in a forum like HN and wanted to try it. Nothing wrong with that for someone who lives and breathes those things but most people want to just get a CRUD UI up, these things are a massive time sink for what they purport to offer.

azangru
If, like you say, these people aren't frontend specialists and don't pick frameworks on the basis of what appears on the front page of HN, why should they reach for a framework as their first port of call anyway? Many conversations on HN seem to turn into an argument between old-school developers, who declare that there's nothing better than good old jquery and Bootstrap, and more hip developers who immediately drag in React, Vue, or Svelte. The middle-ground option of starting with plain modern JS and plain modern CSS is offered relatively rarely. Which is strange — this should be the default for those who are disoriented by the plethora of options on the frontend.

Especially since you suggest that such developers just need to get a CRUD UI up and running. You don't need a js framework for that.

OhSoHumble
So I've looked into React and the problem I have with it is that it's lauded as "simple" because it's "just a view kit."

However, there is an infinite number of questions one has to answer when starting a new project:

- I see that state management libraries are in a lot of tutorials and talked about online. Which one do I pick? Redux? MobX? Something smaller? Do I skip it entirely? What are the consequences of that?

- Okay, so how do I deliver "pages" in React? Do I pull in React Router as well? I saw an article on using Hooks to replace RR. Do I do that instead?

- How do I handle authentication? Do I use JWT? Which library? I saw some comments on Hacker News that say JWT is terrible. Do... do I pick something else? I kind of need auth so I can't skip it. What if I need to do OAuth?

- Do I use TypeScript? What is the scope of work for implementing it into my build pipeline? Is it worth it?

- Build tools. Sure, WebPack is the most popular but a lot of tutorials are for outdated versions. A lot of the time I'm googling "how do I do X webpack" and just getting different JSON blobs to plug in.

It's just... so much. I just don't care. I don't want to configure Gulp or WebPack or whatever. I don't want to extensively research every dependency I have to pull in to get a working app. There is Facebook's create-react-app and that's a good start. Vue and Angular don't suffer as much from this because they focus more on being complete packages.

Doesn't matter to me though. Phoenix LiveView provides enough interactivity for me.

azangru
All fair questions :-) But you don't sound like a frontend developer. Are you writing something with a UI so complex that you would need React, Vue or Angular to implement it?
Mar 23, 2021 · svieira on Tz: A Time Zone Helper
Your travails remind me of Joe Armstrong's talk _The Mess We're In_ where he talks about the pains he had getting his slides for the talk ready:

https://www.youtube.com/watch?v=lKXe3HUG2l4

I like to watch talks by programming language designers. Some that readily come to mind are Rich Hickey, Joe Armstrong (RIP), and Stefan Karpinski.

Some of my favorites are:

- Simple Made Easy (Hickey): https://www.youtube.com/watch?v=oytL881p-nQ

- The Mess We're In (Armstrong): https://www.youtube.com/watch?v=lKXe3HUG2l4

- The Unreasonable Effectiveness of Multiple Dispatch (Karpinski): https://www.youtube.com/watch?v=kc9HwsxE1OY

See "The mess we're in" talk by Joe Armstrong (Erlang creator) - https://www.youtube.com/watch?v=lKXe3HUG2l4 - from 6 years ago.

Your computer with 1TB of storage has ~2^(8 trillion) possible states it could be in.

Number of states you could count through if you burnt up The Sun trying: ~2^128 or so, tops.

Atoms in the universe: ~10^80.

Aug 26, 2020 · 1 points, 0 comments · submitted by amgreg
Thanks for the right wording.

Yeah, the tool is fine, I've got nothing against it; it's just that our attitude makes our world more cluttered and fragmented, and we've lost the way to communicate in the programming world at large.

The Mess We're In - Joe Armstrong - https://www.youtube.com/watch?v=lKXe3HUG2l4

dahart
I'm not sure I understand your objection. Shelly is a programming language, inspired by Logo, and it also comes with a browser based environment.

Isn't half of the communication problem you're referring to demanding that others use words the way you want them to, as opposed to listening, understanding that language is fluid, and trying to hear what they're saying instead of tell them they're wrong?

jraph
Your last paragraph is very important. Approaching life like this makes things so much easier and enjoyable for me.
Jul 10, 2020 · 53 points, 3 comments · submitted by tosh
dang
The mess we were in at the time: https://news.ycombinator.com/item?id=8342755
CKN23-ARIN
RIP Joe Armstrong
emmanueloga_
Unison directly implements the idea Joe Armstrong was talking about, right? [1] ("use hashes instead of names").

1: https://www.unisonweb.org/docs/tour#%F0%9F%A7%A0-the-big-tec...

Jul 08, 2020 · memexy on Identity Beyond Usernames
Generally, using cryptography or associated cryptographic functions is the way to go when trying to make robust systems. Joe Armstrong has a great talk where he outlines how to create a content-addressable store for storing and working with knowledge/data. He suggests using SHA256 content hashing because giving items of data unique names is a hard problem, so we might as well name pieces of data by their content hashes and then have a human-readable pointer.

--

https://www.youtube.com/watch?v=lKXe3HUG2l4
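A minimal Node.js sketch of the content-addressing idea described above, using the built-in crypto module; the put/name/get helpers and the in-memory Maps are illustrative, not an API from the talk.

    const crypto = require('crypto');

    const store = new Map();   // content hash -> content
    const names = new Map();   // human-readable pointer -> content hash

    function put(content) {
      const hash = crypto.createHash('sha256').update(content).digest('hex');
      store.set(hash, content);        // the content names itself
      return hash;
    }

    function name(alias, hash) {
      names.set(alias, hash);          // mutable, human-readable pointer to immutable content
    }

    function get(alias) {
      return store.get(names.get(alias));
    }

    const h = put('Hello, Joe');
    name('greeting', h);
    console.log(get('greeting'));      // "Hello, Joe"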

Jun 14, 2020 · 2 points, 0 comments · submitted by psychanarch
This is what led Joe Armstrong (Erlang) to say that when you include memory and storage, no two computers on the planet are ever in the exact same state unless they've just been reinstalled or restarted. This is why software never quite works the same way twice and internet instructions for fixing it never seem to work for you either.

[] https://www.youtube.com/watch?v=lKXe3HUG2l4

That reminds me of this talk: https://www.youtube.com/watch?v=lKXe3HUG2l4
Mar 04, 2020 · 1 points, 0 comments · submitted by heartbeats
> How did software get so reliable without proof?

Simple, software started shipping with more bugs in backlogs :-P

— More seriously, software today is endemically crappy, and often poorly designed. I dunno why Hoare thought it was any better. The only saving grace is that (modern) software largely stays away from serious stuff. Eg: The airline industry is reluctant to upgrade software from decades ago, but happy to incrementally upgrade other pieces of the system on shorter cycles. Then of course, we have systems like the Boeing 737Max MCAS where the software did what it was supposed to (taken literally) but the software system was poorly designed.

EDIT: Just remembered this fantastic talk by Joe Armstrong https://youtu.be/lKXe3HUG2l4

The smooth running of every abstract system depends crucially on the (human) operator handling the point of contact with reality, in practice often bending reality to make it tractable for the system. Any bureaucracy would grind to a halt if it weren't for intelligent humans carrying out the processes! Software is no different.

Just like it’s hard to take a technology from “zero to one”, it is hard to take the amount of necessary human oversight from one to zero. For this reason, I would much rather think of software as amplifying the capabilities of that human, rather than automating away other humans. In practice, these systems will end up needing a highly skilled and trustworthy human operator to shepherd them — might as well design the system to make it maximally easy for those operators to understand/debug/tweak.

WalterBright
> software today is endemically crappy, and often poorly designed

So is every other engineering product that hasn't gone through years (decades) of evolutionary refinement.

CivBase
I don't buy it. I think the real problem is that software became one of the world's biggest commodities practically overnight and developers are usually more incentivized to get stuff done faster rather than produce higher quality.

There's practically no material cost to software development, the proliferation of the internet makes patching trivial, and the overwhelming majority of software amounts to nothing more than an inconvenience if it fails. Considering the high value of software and the relatively low cost of bugs, it's only natural that speed is valued so much more than quality.

EnderMB
I'd go even further with this.

I'd say that most software engineers are ultimately powerless to improve software beyond a certain point because doing so would cost more than their employer is willing to spend.

Given enough time, any software engineer could make a reasonably solid product/service that would stand the test of time. Most projects are run on a deficit of either time or resource, and as a result we end up with cut corners.

imtringued
New categories of software are created every single day. Things like databases, compilers, and filesystems that belong to the well-known categories are all rock solid compared to your average iPhone or Android app, whose app stores have existed for slightly more than a decade.
lloeki
Ironically, it seems people, as well as society as a whole, fail to account for the fact that a zillion small inconveniences add up to death by a thousand cuts. This includes everyone writing code.

Goddammit when you take a long, hard look, objectively (like really) stepping back for a moment, every piece of software (and hardware to some extent as the line between both is getting blurry) around us is just a huge pile of Rube Goldberg machinery that barely happens to work on the happy path.

This process is not entirely unlike global warming and similar large scale psychological risk assessment failures.

sitkack
And each individual piece might not be mission critical, but some mission critical component of the system relies on some non-mission critical piece of software or service.

The more I look at how fragile all human created systems are, I realize that humanity will perish with the equivalent of tripping over its shoelaces. Own goal!

collyw
The problem I see is that we keep throwing away things that work well and are understood.

Relational databases will work well for most products; it's understood how to design them well, optimize them, and tune the servers. But nowadays we need to stick everything into microservices on a Kubernetes cluster with Mongo instead, and we are told that this is progress.

averros
Not really. When programming was done by real engineers (rather than "coders", "developers", or horribly misnamed "software engineers") - with proper engineering discipline which involved deliberate design and documentation rather than "agile" hacking, it was more reliable. By far.

My personal recent experience of actually doing software the old-fashioned way involved writing correctness-critical high-performance code for a major data warehouse vendor, which mostly stayed with zero known bugs in production - greatly contributing to the vendor's reputation as a reliable and dependable place to keep your data.

And how do I know what the old-fashioned way to write code is? Well, I've been writing code professionally for nearly 40 years.

ColanR
The more I learn about modern software practices, the more I come to think that software is worse today because programmers are poorly disciplined and badly trained. Of course, that's likely because the barrier to entry is so much lower today than it used to be.
WalterBright
The barrier to entry for software has been zero ever since the 8080 was introduced.
fxtentacle
Back then there were a lot fewer Stack Overflow copy & paste mistakes.
akiselev
There were also far fewer users to really discover the nasty edge case bugs.
speedplane
> There were also far fewer users to really discover the nasty edge case bugs.

If you consider why a company makes something reliable or not, it's a relatively simple formula:

    Expected value = Number of Users x (Benefit of Getting it Right x Probability of Getting it Right - Cost of Getting it Wrong x (1 - Probability of Getting it Right))
As the number of users in any system increases, the overall cost of getting it wrong also increases. You can then devote more fixed-cost resources to improving the probability of getting it right.
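A worked example of that formula with invented numbers, purely to show how the trade-off scales with user count:

    // Hypothetical figures: $1 benefit per happy user, $50 cost per user hit by a failure.
    function expectedValue(users, benefit, cost, pRight) {
      return users * (benefit * pRight - cost * (1 - pRight));
    }
    console.log(expectedValue(1_000, 1, 50, 0.99));      // ~490
    console.log(expectedValue(1_000_000, 1, 50, 0.99));  // ~490,000: more users justify spending more to raise pRight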
hhhhhhh4
Old fashioned way?

You mean all that software written in the '90s with no security in mind?

I bet you're talking about outliers. Nowadays average developer practices are levels above what they were decades ago, while being supported by great tooling.

gambiting
I feel like there are two separate things going on here.

You can have an extremely reliable piece of software running, say, an industrial lathe or a stamper or book printer or whatever - software which can run 24/7 for years if not decades, software which will never leak memory, enter some unknown state, or put anyone in harm's way - and yet have zero "security", because if you plug in a USB keyboard you can just change whatever and break it entirely. Software which has no user authentication of any kind, because if you are on the factory floor that means you already have access anyway, because the authorization step happens elsewhere (at employee gates etc).

It's like people making fun of old ATMs still running Windows XP, because it's "not secure". If the machine isn't connected to the internet, reliability is far more important - who cares that Windows XP is not "secure" if the ATM can run constantly for years and reliably dispense money as instructed and there isn't a remote way to exploit it.

I feel like that first kind of software (the reliable kind) is far rarer today - people just throw together a few Python packages and rely on them being stable, without any kind of deeper understanding of how the system actually works, and they call themselves software engineers. The "security" part usually also comes as a side effect of using libraries or tools which are just built with "security" in mind, but without a deeper understanding of what having truly secure software entails.

viraptor
It really depends on the scenario. When I wrote software for load balancing phone calls, it was minimal, had a well defined state machine, passed all the static testing I could throw at it, etc. At the same time, I wrote some crappy web service code which could fail and get retried later, because nobody would see that. If the worst thing that can happen is that one in a million visitors will get a white page, it doesn't economically make sense to do better. Even if you know how and have the tools.
tonyedgecombe
I would find it very hard to bring myself to do that. I won't knowingly write incorrect code even if the chance of failure is very small. Luckily I don't have a boss breathing down my neck telling me not to waste time.
viraptor
I don't think anyone knowingly writes incorrect code. But you can spend between 0 and infinite time thinking about whether the code is correct. At infinite you never release, so you don't need a boss to have some reasonable time limit. If this is your non-hobby work, you need to decide when to stop looking and accept the potential issues.
WalterBright
> it was more reliable. By far

[talking about non-software engineering] There's a huge difference between reading books about proper engineering and how it is actually practiced. Much of the sloppiness is covered by simply over-engineering it. Designs are constantly improved over time based on service experience. Heck, lots of times the first machine off the line cannot even be assembled because the design dimensions on the drawings are wrong.

The idea that non-software engineering is done by careful professionals following best practice and not making lots of mistakes is painfully wrong.

clarry
As an ex-machinist, I can confirm that bad drawings are a thing.

But then, non-software engineering is a wide field. There are products that you can afford to iterate on, and then there are very expensive (and potentially very dangerous) projects where you generally can't afford many slip ups.

If your engineers make lots of mistakes (which aren't caught in time) in a project that costs millions and can't be replaced by another unit off an assembly line, that's kind of a big deal. Thankfully, we don't hear about bridges, skyscrapers, heavy industrial lifts or nuclear power plants failing all that often.

WalterBright
The things you mention are over-engineered by wide margins to cover for mistakes.

The first nuke plants are pretty dangerous by modern standards, and we know how to fix them, but because the first ones are dangerous we are not allowed to build fixed ones.

The Fukushima plant, for example, had numerous engineering faults leading to the accident that would be easily and inexpensively corrected if we were iterating.

Airplanes are a good example of how good things can get if you're allowed to iterate and fix the mistakes.

machawinka
I doubt software engineering is an engineering discipline. At least not the way we are doing it now. We are not engineers, we are craftsmen. It is a miracle that the software that drives the world more or less works. So far there hasn't been any major catastrophe due to software, but it will happen. Software sucks.
hackyhacky
People have been building bridges for thousands of years. We've been doing software for barely two generations. Try to imagine what the first generation of bridge-building looked like. Don't worry: we'll catch up. Give us a few thousand years.
imtringued
The fact that our CPU designs and languages keep changing every year doesn't help. I'm sure there are lots of Z80 experts that know how to develop high quality software for that processor, but nowadays we need to support x86, ARM and soon RISC-V. This forces us to use JavaScript, Java or Kotlin, which are cross-platform by default. However, Kotlin is a very new language. JavaScript and Java are currently undergoing rapid iteration (Java 14 is out!).
imtringued
It only takes one bit flip or a single typo to break your program. It only takes a single incorrectly wired transistor in your CPU and it all comes crashing down. Yet somehow it keeps working. The answer is that making decently reliable software is easier than you think.
dehrmann
> It is a miracle that the software that drives the world more or less works

It's more of a survivorship bias.

ci5er
Over time, I think I am coming to agree with you.

I went to an engineering school. People who designed bridges had a wide, but constrained parameter space, and well-accepted design patterns.

I started out in semiconductor (MCU) systems and sometimes circuit design, and we had a broader (but not yuuuge!) parameter space, but it was growing as transistors got cheaper. Less well-accepted design patterns, because what you do with 50K transistors and 500K transistors and 5M transistors and 50M transistors - to use effectively - you need different patterns - and that changed so fast!

I did software-ish things with my h/w teams, and they would mock me because "software is easy". "You can do anything". And to do a rev, you didn't need to burn $50K and 6 weeks (or whatever it is now) at the fab for each turn.

The problem with software is that it is SO unconstrained. You truly CAN do anything - except engineer in an unconstrained environment. I guess this realization (and Python blowing up on me in run-time environments) have taught me that: Constraints suck. But they are good. Software could use more of them, because they force discipline at dev time, at compile time, which reduces blow-ups at run-time.

a_wild_dandan
I think that's why frameworks are so popular: They extensively constrain the solution space. Folks lament their usefulness. ("You don't need React for that! Just use jQuery.", "Why not use raw SQL instead of an ORM?") I don't mean to belittle framework criticisms. Leaky abstractions, over-engineering, and peculiar implementations encouraged by a framework's model are legitimate commonplace warts. We've seemingly converged on these super-APIs because flowing through such a narrow (but not too narrow!) solution space is worth the relatively minor headaches.
WalterBright
> I went to an engineering school.

My father attended MIT in the 1940s. He said that the engineers at MIT designed the new chemistry building, with all the latest features to support laboratories. Only when the building was completed and scientists were moving in did anyone realize that there were no toilets.

ci5er
Ha!

No women - what did they care? :-)

redis_mlc
Rumor has it that the U. of Waterloo was designed without including the weight of ... books. So the building is sinking.
sachdevap
This rumor also exists for the Robarts Library at UofT, and in both cases, I am quite certain, it is just that - a rumor.
raxxorrax
Engineering has a long tradition of trial and error. That is how the field of architecture got its current knowledge, which these days probably focuses more on materials research.

I don't see how software is different. We try new patterns, languages, deployment techniques and think about the optimal way to store data with specific operations in mind. And here you cannot just do anything if you want to solve a problem efficiently.

If we find better practices or more use cases, the environment becomes more restricted. We already have quite a few constraints for a relatively young field, because there are many developers.

On the contrary, I believe engineering to be a creative process. Of course, lacking constraints might make it seem unfocused, but I still call that engineering.

floriol
Strangely, I am on the opposite side - that programming is more of a subset of mathematics (or at least should be) than engineering, and since theoretically many parts of the program can be inferred without running it, trial and error should not be as acceptable as it is (I don't mean it on a small scale - I do feel sometimes all I do is try different word combinations until the compiler is happy :D).
atoav
> Engineering has a long tradition of trial and error. That is how the field of architecture got its current knowledge that currently probably focuses more on material research.

Imagine you are building a cathedral and every month the material you use changes to something that behaves totally differently. While there is knowledge you can extract even when the material you work with and the foundation you work on seem to change every time, we live in a world that has given up on building systems that last.

In times where deploying a patch meant sending out CDs, software that didn't have to do that was quality software. Nowadays that feeling has flipped: if software doesn't offer updates at least once a month people feel like there is something wrong with it.

To me it feels like we collectively gave up on building things that last in software.

WalterBright
The grand cathedrals in Europe do sort of last, but not without continuing efforts to keep them from collapsing. Nearly every one I've looked at has had centuries of reinforcement of various kinds added.
etripe
> Constraints suck. But they are good. Software could use more of them, because they force discipline at dev time, at compile time, which reduces blow-ups at run-time.

Can I take that to mean you're in favour of compile-time enforced strong and static typing?

AdmiralGinge
Not the parent, but that would be very much a yes from me. Writing something safety-critical in a language like Python or JS would terrify me.
pyrale
If you enjoy having a language which enforces some constraints on you, you may enjoy languages of the ML family.

Elm, for instance, enforces constraints such that the generated JS never crashes your browser. Haskell has evolved a powerful system that lets you check web APIs against their implementations, which can be pretty handy to enforce that clients should still be able to consume them.

agumonkey
I can recall a few articles where software felt like engineering: bits derived from data needs, complexity-based hardware sizing. Real, solid constraints to use as foundations. Rare.
WalterBright
> People who designed bridges had a wide, but constrained parameter space, and well-accepted design patterns.

And yet they still fall down due to stupid mistakes that nobody noticed.

https://www.usatoday.com/story/news/nation/2019/10/22/design...

ci5er
Yes, they do. And they make the news.

I was always amazed about the stories of resonant frequency issues, not just from walking but also from wind! - https://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge_(1940)

Mar 01, 2020 · 2 points, 0 comments · submitted by pjmlp
Philip Guo wrote a great post several years ago under the title "Helping my students overcome command-line bullshittery"[1] that seemed to get somewhat mixed but mostly positive reception. Much of the negative reception seemed to be chained to sophomoric arguments originating from folks stuck in the second panel of the glowing brain meme who wrongly thought of Guo as being stuck in the first.

The real truth behind the mess we're in[2] is that there is a ubiquitous, universal runtime that almost every computer comes equipped with, and the problem lies with the folks responsible for those ecosystems, who either don't see these things as problems or somehow believe that what the future holds is native support for R/Python/what-have-you in the browser.

Tooling is a massive problem, though, and one that the browser vendors themselves don't seem to care to get right. (Although there is the Iodide project, in part supported by Mozilla.) And it really doesn't help that the browser realm has come to be conflated with the NodeJS community because they share a common language.

I've written a fairly thoughtful post[3] before, tying these two topics together:

> After finding out where to download the SDK and then doing exactly that, you might then spend anywhere from a few seconds or minutes to what might turn out to be a few days wrestling with it before it's set up for your use. [...]

> the question is whether it's possible to contrive a system (a term I'll use to loosely refer to something involving a language, an environment, and a set of practices) built around the core value that zero-cost setup is important

1. http://pgbovine.net/command-line-bullshittery.htm

2. https://www.youtube.com/watch?v=lKXe3HUG2l4

3. https://www.colbyrussell.com/2019/03/06/how-to-displace-java...

Agree, but people tend to stop at Fowler and live it as gospel. There is a profound richness in the different ways to think and talk about computers, engineering, and information science, far more wondrous than any one person can catalog. Read SICP, Knuth, Code Complete, Codd. Hell, just YouTube around. If your vocabulary is prescribed by literally one guy in a fedora, you gotta get out there. Might I suggest Joe Armstrong [1] :)

[1] https://youtu.be/lKXe3HUG2l4

Ozzie_osman
Agreed. But that's a fault of content consumers not producers, no? I have read SICP (in college), Code Complete (first book I got in my first job), Gang of Four (2nd book), Clean Code, etc.

Unfortunately we are in an age where blog posts and videos are a lot more accessible than dense books. That sucks, and means we all have to expend extra energy to find a mix of good and diverse ideas and thoughts.

camgunz
> But that's a fault of content consumers not producers, no?

Yeah, mostly. But at least a little of it is that Fowler writes and presents pretty authoritatively. And he should, because that's what consultants do, but we should remember that he's a consultant for a very niche area of software ("Enterprise") whenever evaluating his advice.

wst_
But then, very often, he also notes that there is no one good design and, when presenting an idea, he also presents the pros and cons of a given solution.
charlieflowers
> Agree, but people tend to stop at Fowler and live it as gospel.

I know people do that, and it sucks. But I never got that from Fowler himself.

There are other authors (not naming names) who are often pretty dogmatic. But Fowler seems to genuinely try to capture patterns and techniques in an open-minded way.

It's the abuse of his material that is the problem.

camgunz
Yeah he's no Zed Shaw. But actually I think Fowler is... well he skips a lot of steps and makes a lot of assertions. Here's an excerpt from the intro to Refactoring:

> The performance of software usually depends on just a few parts of the code, and changes anywhere else don't make an appreciable difference.

> But "mostly" isn't "alwaysly." Sometimes a refactoring will have a significant performance implication. Even then, I usually go ahead and do it, because it's much easier to tune the performance of well-factored code. If I introduce a significant performance issue during refactoring, I spend time on performance tuning afterwards. It may be that this leads to reversing some of the refactoring I did earlier--but most of the time, due to the refactoring, I can apply a more effective performance-tuning enhancement instead. I end up with code that's both clearer and faster.

> So my overall advice on performance with refactoring is: Most of the time you should ignore it. If your refactoring introduces performance slow-downs, finish refactoring first and do performance tuning afterwards.

One way to read this is basically what he writes: clear code is more important than efficient code, and clear code can help you make code more efficient if that's important.

Another way--indeed the way I read it--is "performance isn't the most important thing, and at least some of the time following my advice, performance will suffer, even if you undo a lot of the work I'm recommending you do." And I can respect that, but what he writes tries to have it both ways. At least when DHH makes this argument, he basically just says "yeah, buy more machines, they're cheap, engineers aren't." This "incidental" absence of drawbacks is just one of the things that makes all of Fowler's writing feel like marketing.

Another is all the hand waving assertions. There's a lot of it even in those three short paragraphs:

- How did he measure "much easier to tune the performance of well-factored code"? Honestly how would you even measure that?

- How did he measure "more effective performance-tuning enhancement", and also what does "effective" mean?

- What does "well-factored" mean?

- What does "clearer" mean when he says "I end up with code that's both clearer and faster"

Further, he reinvents and renames things. His (short) article about "Application Boundary" [1] is actually Conway's Law [2], which was nearly 40 years old at writing. And OK, no one knows everything. But that piece is written very prescriptively; here's an excerpt:

> We can draw application boundaries in hundred arbitrarily different ways. But it's our nature to group things together and organize groups of people around these groups. There's little science in how this works, and in many ways these boundaries are drawn primarily by human inter-relationships and politics rather than technical and functional considerations. To think about this more clearly I think we have to recognize this uncomfortable fact.

How can you say "there's little science in how this works" immediately after "it's in our nature to..."? And for what it's worth, there's plenty of work around this; look at any discussion of permission/capability systems, consensus systems, sandboxes, etc. There is literally science about it.

I have no doubt that Fowler is earnest and well-meaning, I appreciate his contributions to (what I reluctantly accept is) my field, and I'm happy that he's frequently many people's gateway to thinking more critically about software. But he's far from the final word on it, and too often he writes like he is.

And, at the risk of making an over-long post even longer, there are two problems with that. I've already been explicit about the first: people take it as gospel and don't look further. That's not great for our field.

But even worse, it ends up changing our culture from one of wonder, exploration, and experimentation to one of prescription and certitude. I have no idea what the best way to factor a program is; it's very hard for me (or anyone) to say why one expression of information or a computation is "better" than another, and that is an endless source of delight for me. God help me the day I figure it all out.

[1]: https://martinfowler.com/bliki/ApplicationBoundary.html

[2]: https://en.wikipedia.org/wiki/Conway%27s_law

goto11
Are you applying the principle of charity when reading Fowler? The quote about performance seems totally sensible and basically akin to Knuth's "We should forget about small efficiencies, say about 97% of the time...". Focus on maintainability and only sacrifice clarity for performance when you know there is a performance problem. What part of this do you honestly disagree with?

You can pick any writing apart by being deliberately obtuse. It doesn't help the conversation. E.g. rhetorically asking the meaning of "clear" and "well-factored" code, when basically all of his writing is about exactly that.

I'm sure some people are taking him too much as gospel and don't look further - since this happens to any sufficiently popular writer. Just look at all the people on HN who read PG's essays as gospel and think all problems of architecture and maintainability can be solved by using Lisp - you can't really blame PG for that either.

Yes, you will have to apply your own experience and critical thinking and understanding of the problem domain. This is true for everything you read or hear.

camgunz
> Are you applying the principle of charity...?

Well, when I read that passage, I tried to figure out what he might mean. And mostly I distilled it down to what I wrote: "clarity is more important than performance, and will usually lead to better performance". So even though he's lacking a lot of specifics, sure I think I get his meaning.

> "That's basically exactly what Knuth said"

No Knuth said "ignore small inefficiencies". Fowler said "clarity via refactoring is more important than performance, and will usually lead to better performance". Those are very different.

> Focus on maintainability and only sacrifice clarity for performance when you know there is a performance problem. What part of this do you honestly disagree with?

A good and well known example are entity systems in game engines. If you build such a system using OO principles--polymorphism, encapsulation, messaging--you will have an entity system that doesn't support very many entities on common hardware. It will also have very complex and hard to predict behavior as entities are created and destroyed, and as they all react to different messages, even if you break down and do it all synchronously.

Unless you have a lot of certainty that any game built with that engine will have a (far) below average entity need, you'll end up building the wrong thing, even following best practices, and no amount of refactoring will help you; it's a fundamental design limitation. In fact, you have to design a system that breaks these rules in order to get good performance, and I'm not talking about wonky code here and there; I'm talking about building something that (for example) lets you avoid vtables entirely.

So that's an example of why I disagree with Fowler on refactoring and performance. I think he's basically as wrong as you can be about it. It's a complex subject, and no, it doesn't just fall out of "clear" code, whatever that is.
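A rough JavaScript sketch of the contrast being described; a real engine would use an ECS in a systems language, and the class and array names here are made up, but the shape of the difference is the same: per-entity dispatch over heterogeneous objects versus one tight loop over flat, homogeneous arrays.

    // "OO" style: each entity updates itself through its own method.
    class Entity {
      constructor(x, vx) { this.x = x; this.vx = vx; }
      update(dt) { this.x += this.vx * dt; }
    }
    const entities = Array.from({ length: 100_000 }, () => new Entity(0, 1));
    function updateOO(dt) {
      for (const e of entities) e.update(dt);             // dispatch per entity, scattered objects
    }

    // Data-oriented style: positions and velocities as flat arrays.
    const N = 100_000;
    const xs = new Float64Array(N);
    const vxs = new Float64Array(N).fill(1);
    function updateSoA(dt) {
      for (let i = 0; i < N; i++) xs[i] += vxs[i] * dt;   // contiguous memory, no per-entity dispatch
    }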

> You can pick any writing apart by being deliberately obtuse. It doesn't help the conversation. E.g rhetorically asking the meaning of "clear" and "well-factored" code, when basically all of his writing are about exactly that.

I can understand how I probably came across as overly pedantic, but I don't know any other way to pin down all the vagueness. A lot of people find Haskell very clear and idiomatic C unreadable, or a lot of people can read fucked up SQL but can't make heads or tails out of ES6. If I'm gonna burn engineering time on making something "clearer", I'd like some idea of what that is.

And my strong feeling is that it's very situational, contextual, and subjective. What's clear in pixman is confounding in Django; what's elegant in Java is weird as hell in Python. I'd love to read a book that explored these differences and gave me more insight into how to express myself more clearly across them. Refactoring is just not that book at all, and in fact it encourages lots of engineers to apply its concepts everywhere, which of course makes things less idiomatic, less context-aware, and thus, ironically, less clear.

goto11
> clarity is more important than performance,

But he does not say that! You are deliberately misrepresenting the meaning of the quote. He says to focus on clarity first and only optimize for performance if absolutely necessary. You can disagree with that, but it is a totally different thing.

If you have legitimate disagreements with his viewpoints, you should be able to state your argument without needing to misrepresent what he says. Then we can have a conversation and maybe learn something.

camgunz
I mean I quoted the entire 3 paragraphs up there, so of course I'm paraphrasing.

Fowler advocates strongly for OOP. Presumably he'd find an OO entity system clearer (if you're gonna quibble with this, imagine how much more productive this conversation would be if we had an understanding of what "clearer" meant, then recoil in horror as you realize thousands of developers are having this exact same unproductive discussion, all because one guy said he had the answers but didn't really give specifics, like Fermat's last theorem but for devs), but the fact remains that if you built a modern entity system this way you'd have to rewrite it to get even average performance out of it. No amount of refactoring will fix the base design.

So I feel like I have disagreed directly with his point, and I'm happy to discuss further. I do, weirdly, feel a little like you're not applying the principle of charity to me though. So let's both endeavor to be a little more understanding.

goto11
> I do, weirdly, feel a little like you're not applying the principle of charity to me though.

Fair enough, we can all be guilty of that. What part of your argument have I misunderstood or misrepresented?

camgunz
I think I've pretty directly disagreed with:

> focus on clarity first and only optimize for performance if absolutely necessary

I think that in a lot of very important, very common use cases (take your pick: game engines, embedded programming, OS development, language runtimes, linear algebra libraries, etc. etc. etc.) this is extremely bad advice. It's... probably good advice for enterprise applications; but that's a pretty small subset of all software. I think that if you start with clarity here and do performance optimization after the fact you will build something that can't be used as intended--almost always (of course, not "alwaysly", see Python).

More broadly, I think this is a far too narrow way of looking at "clarity" vs. "performance". I have some idea of what I think clarity is, and I have some idea of what I think performance is, and they're both pretty nuanced and deep topics. I think they're two big bags of variables in an even larger bag of engineering variables, and I think Fowler does us all a huge disservice when he skips that entire discussion with what amounts to your synopsis up top there. Maybe you're writing Python and you need a certain level of dynamic nature in order to reach your desired level of flexibility and clarity. That can be fine! The point here is that these are rich topics, and they deserve better than what I see as a pretty blithe dismissal.

> "The mess we're in" famous talk by joe amstrong

For anyone else: https://www.youtube.com/watch?v=lKXe3HUG2l4

Fantastic watch, thanks for recommending. Sad to hear that Joe Armstrong passed away a few weeks ago.

May 09, 2019 · mooreed on Joe Armstrong Obituary
I wished I had followed Joe's career closer while he was alive.

I first encountered Joe in this video - a wonderfully funny account of modern computing (from 2014)

https://www.youtube.com/watch?v=lKXe3HUG2l4

Apr 20, 2019 · zaph0d_ on Joe Armstrong has died
This is absolutely devastating! His talk "The Mess We're In" was one of those talks that are incredibly funny and informative at the same time. I absolutely lost it when he told the story of the single comment his coworker put into the Erlang code.[1]

Rest in peace!

[1] https://youtu.be/lKXe3HUG2l4?t=630

Apr 20, 2019 · feniv on Joe Armstrong has died
I've only recently started learning Erlang (via Elixir), but I'm absolutely amazed by the underlying technology and the brilliant minds behind it.

I'll remember Joe by the several insightful, entertaining talks he's given in recent years. https://www.youtube.com/watch?v=lKXe3HUG2l4 in particular.

Apr 20, 2019 · tosh on Joe Armstrong has died
Just re-watched his Strangeloop talk: “The mess we’re in”. Still so spot on. He will be dearly missed.

https://youtube.com/watch?v=lKXe3HUG2l4

blacksqr
"And then we've got this sort of dichotomy between efficiency and clarity. You know, to make something clearer, you add a layer of abstraction and to make it more efficient you remove a layer of abstraction. So go for the clarity bit. Wait ten years and it will be a thousand times faster, you want it a million times faster, wait 20 years."
solipsism
That's... one way to make something clearer. It's also a way to hide complexity behind leaky facades. "Just add a layer of abstraction" is horrible advice.
tigershark
Thanks a lot, I never watched this, it’s absolutely awesome. I laughed a lot and I actually cried in the slide about legacy code when the first line was “programmers who wrote the code are dead” :(
a_c
Indeed. So much substance in such a short time.

- clarity vs efficiency in abstraction

- the need to unwind (not sure if it is the right word) entropy

- naming and comments

- etc

Things were not so obvious to me when I had just started my career, but they are so important to a software engineer's day to day

RIP Joe. Your ideas will live with us, forever

javajosh
It is a great talk. Based on that I took his course on Erlang at FutureLearn, which was also very good. Sadly, when I sent that link to some colleagues at work they shrugged and have since kept adding more and more dependencies and complexity to the front-end build. It's very hard to stop momentum once it's got going, culturally.
nickjj
I just watched that for the first time the other day. Such a good talk.
HenryBemis
Now that is a beautiful mind!!
octosphere
Always liked his reference to GruntJS[0]

[0] https://gruntjs.com/

solipsism
Guess I'm alone in not seeing what's so good about this talk. He only presents problems, and in an extremely disorganized way. The closest thing to a solution is "wait, hardware advancement will eventually make your slow code fast."

It's definitely entertaining, but that's about all I can give it.

phlakaton
The idea of "no copies – everything exists in one place and has a unique ID" was new to me. I still don't know if it's a good idea, let alone practicable, but it's great food for thought!
mercer
There's a time and place for things. Posting multiple critical comments in a commemoration thread is maybe not the best time or place, and better kept for later?
solipsism
The post is about the talk, not the man. The comment is about the talk, not the man.

I kind of doubt Joe Armstrong would support N days of mourning requiring uncritical acceptance of everything he's ever done.

Joe called things like he saw them. I think we better honor him by doing the same.

fjni
“It said I didn’t have grunt installed. So I googled a bit, and I found out what grunt was. Grunt is ... I still don’t really know what it is.”

Delivered that line with perfect timing and humility. What a great sense of humor!

Apr 20, 2019 · dpeck on Joe Armstrong has died
That is so sad, he was in his late 60s and it seemed like he had a lot of life left in him, talking about how software can get better and having a great (and sometimes snarky) outlook on the profession.

Highly recommend his thesis (2003) and a few of his great interviews/presentations for anyone who isn't familiar with Joe; they capture a lot of what he thought about and pushed for in his professional life.

http://erlang.org/download/armstrong_thesis_2003.pdf https://youtu.be/fhOHn9TClXY https://youtu.be/lKXe3HUG2l4 https://youtu.be/rmueBVrLKcY

I hope his family and friends can find some comfort in how much he was appreciated and admired in the development community.

nextos
I came here to say the same thing. His thesis is extremely readable and illuminating on the topic of reliable distributed systems.
jacquesm
It goes much further than that, it shows how to tackle reliability even in systems that are not distributed. The primary insight is that all software will be buggy so you need to bake reliability in from day one by assuming your work product will contain faults.
nextos
Yes, I know. Erlang was not distributed till 1991, roughly 5 years after it was born.

It's also really illuminating how they implemented the first versions of Erlang as a reified Prolog [1]. But that is not explained in the thesis, just in his 1992 paper which he briefly cites.

[1] https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=F4...

pklee
I came in thinking.. god it better not be him.. it better not be him.. so sad to hear this.. I loved his book on erlang. Just an amazing mind
masklinn
> That is so sad, he seemed like he had a lot of life left in him talking about how software can get better and having a great (and sometimes snarky) outlook on the profession.

I'm in actual shock, he was tweeting about pretty much that (also brexit and playing with his phone's voice recognition) just 2 weeks ago… He wasn't even 70…

pera
Yeah me too, he was also very active in the Elixir forum. RIP
mercer
He was even made admin just over a week ago :-/.

https://elixirforum.com/t/introducing-our-new-moderators-and...

Apr 20, 2019 · 99 points, 2 comments · submitted by tosh
metastew
Found a transcript for this talk, if anyone's interested: https://raw.githubusercontent.com/strangeloop/StrangeLoop201...
greenyoda
For those who haven't read it yet: Joe Armstrong died today: https://news.ycombinator.com/item?id=19706514
> reminding us that real life is impossible to cleanly codify into a computer model.

Also see Joe Armstrong's talk, The Mess We Are In: https://www.youtube.com/watch?v=lKXe3HUG2l4

For a (not lightning) talk on the subject, I recommend "The Mess We're In" by Joe Armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4
Oct 31, 2018 · 2 points, 1 comments · submitted by tosh
tosh
One of the best talks on software engineering I've seen so far.
By now, finding great names that aren't in use yet (let alone in a similar context) is almost impossible. Fast forward 100 years from now and we'll have way more artifacts with names. I think we'll be fine.

Reminds me a bit of Joe Armstrong's StrangeLoop talk from 2014: The Mess We're In https://www.youtube.com/watch?v=lKXe3HUG2l4

I'd say your comment is not constructive: you're asking me to name all the solutions, otherwise my observation has no merit?

I'll expand a little. Let's take a single example: developing a single page web application.

To know how to do that, I need to learn HTML, CSS, JS frameworks, webpack or whatever, linters, preprocessors, package management ....

Contrast that to building a Flash app, where a designer could use a GUI to produce a beautiful interactive app in no time at all.

Yep, we all hated Flash, but in terms of accessibility to creators, it was beautiful compared to the mess we've made.

A little off-topic but good for perspective: https://www.youtube.com/watch?v=lKXe3HUG2l4

randomsearch
Also, pretty much anything Bret Victor has done.

e.g. https://www.youtube.com/watch?v=8pTEmbeENF4&t=11s

lkschubert8
Aren't you then asking for a WYSIWYG editor or a CMS that includes something like that? Many of those exist and are suitable for a lot of web development. You step into the lower level for customization that is difficult or just flat out not implemented in the higher level tooling.
Jul 27, 2018 · 1 points, 0 comments · submitted by tosh
Jun 20, 2018 · 2 points, 0 comments · submitted by tosh
Madness! Or actually, it's really similar to an idea that Joe Armstrong presented at Strange Loop one time (just the idea, no demo or anything)

https://www.youtube.com/watch?v=lKXe3HUG2l4

ZephyrP
There exists an Erlang unikernel (of sorts): http://erlangonxen.org/
namibj
Well yeah, but that's like saying there exists hardware that eats Java bytecode. Yes, it does, but it brings _large_ restrictions with it.
The Mess We're In - Joe Armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4
stepvhen
I rewatch this one now and then just to hear Joe Armstrong speak of another author's complex compiler code and the singular comment "and now for the tricky bit".
Joe Armstrong gives a very good talk about ideas in the article in "The Mess We're In" (2014):

https://youtu.be/lKXe3HUG2l4?t=26m12s

Sep 19, 2017 · 3 points, 0 comments · submitted by bra-ket
In a similar vein to the first one, maybe, but with the addition of some physicist's humor, if you are into that kind of thing: https://www.youtube.com/watch?v=lKXe3HUG2l4
"The Mess We're In" by Joe Armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4

"Normal Considered Harmful" by Alan Kay: https://www.youtube.com/watch?v=FvmTSpJU-Xc

Prolog does this too, but it's not enough. If you don't have enough data, you could find a match that will fail in the future.

To answer OP's question, the reason people rewrite is that it's "faster" to write a new one than find what's out there

OR

I don't want an entire house when all I want is a faucet. Today, if you want a faucet, you might have to do something such as house = new House(); faucet = house.getFaucet(); So you have to tear apart or copy and paste the code, and unless it's loosely coupled enough, you find yourself dealing with all the dependencies.
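
A tiny sketch of that coupling problem, in TypeScript with made-up names:

  // Tightly coupled: the faucet only exists inside the house, so reuse drags in everything.
  class Faucet { pour() { return "water"; } }
  class House {
    private faucet = new Faucet();
    // imagine Plumbing, Heating, Garage... all constructed here too
    getFaucet() { return this.faucet; }
  }
  const viaHouse = new House().getFaucet(); // built a whole house to get one faucet

  // Decoupled: the faucet stands alone, and the house merely composes it.
  const standalone = new Faucet();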

This is a question lots of people have been asking though; check out this talk from the creator of Erlang asking the same question: https://www.youtube.com/watch?v=lKXe3HUG2l4

Apsion
The house analogy makes sense - I need to research Smalltalk and Prolog. Checking out the video - thanks!
> Logic systems don't have "And nothing else will ever be true!".

Uuh. Closed world assumption? Everything that is not true is false. Most logic systems do have this. Prolog's (not a logic system, I know) cut operator even turns this statement into an operation.

I feel like Rich really gets it wrong this time. His request to support all the abominations you have ever written and keep them compatible with recent changes might work if you have people pay for using your library and a company behind it. But it doesn't fly if you maintain these things out of goodwill in your already limited spare time.

The best example of this style going horribly wrong is the Linux kernel file system modules: different API versions, all in use at the same time by the same code, with no clear documentation on what to use when.

It's also ironic that the examples he uses to make his point, namely Unix APIs, Java, and HTML, are horrible to work with especially because they either never developed good APIs (looking at you, Unix poll), or they, like browsers, have become so bloated that nobody wants to touch them with a ten-foot pole. One of the reasons why it takes so long for browser standards to be adopted is that they have to be integrated with all the cruft that is accumulating there for almost three decades now.

"Man browsers are so reliable and bug free and it's great that the new standards like flexbox get widespread adoption quickly, but I just wish the website I made for my dog in 1992 was supported better." -no one ever.

Combinatorial complexity is not your friend.

I'd rather have people throw away stuff in a new major release, maybe give me some time to update like sqlite or python do, and then have me migrate to a new system where they have less maintenance cost and I profit from more consistency and reliability.

I think that Joe Armstrong has a better take on this. https://www.youtube.com/watch?v=lKXe3HUG2l4

Also, even though I'm a fulltime Clojure dev, I would take Elm's semantic versioning that is guaranteed through static type analysis anytime over spec's "we probably grew it in a consistent way" handwaving.

joe-user
> His request to support all the abominations you have ever written and keep them compatible with recent changes might work if you have people pay for using your library and a company behind it. But it doesn't fly if you maintain these things out of goodwill in your already limited spare time.

It might actually be less effort to follow his method. You may need to create a new namespace or create a new function, but then you don't need to put out breaking change notices, handle user issues due to breaking changes, etc.

> "Man browsers are so reliable and bug free and it's great that the new standards like flexbox get widespread adoption quickly, but I just wish the website I made for my dog in 1992 was supported better." -no one ever.

It's not about better support for your 1992 website, it's about it still being accessible at all. Perhaps you've never had to deal with a file in a legacy format (ahem, Word) that had value to you, but was unrecognized in newer versions of software, but I can assure you that it's thoroughly frustrating.

> Also, even though I'm a fulltime Clojure dev, I would take Elm's semantic versioning that is guaranteed through static type analysis anytime over spec's "we probably grew it in a consistent way" handwaving.

An Elm function `foo :: int -> int` that used to increment and now acts as the identity function is merely statically-typed "we probably grew it in a consistent way" hand-waving, which may be worse than the alternative given the amount of trust people put into types.
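
A toy version of that point, in TypeScript rather than Elm (hypothetical functions, just to show that the type says nothing about the semantics):

  // Version 1.0.0
  const foo = (x: number): number => x + 1; // increments

  // "Minor" update -- identical type signature, so type-based semver sees no breaking change...
  const fooV2 = (x: number): number => x;   // ...yet it now behaves as the identity function

  console.log(foo(41), fooV2(41)); // 42 41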

pron
> But it doesn't fly if you maintain these things out of goodwill in your already limited spare time.

What percentage of the total software produced and used fits that description, and should discussions of general good practices address this anomaly? Obviously, if you work outside the normal industry practices, then industry best practices don't apply to you. I don't think you should take what he says too literally to mean that every line of code you write must obey this and there are no exceptions.

> that nobody wants to touch them with a ten-foot pole.

If by nobody you mean almost everyone. Those happen to be the most successful software platforms in history, and most new software outside embedded (with the big exception of Windows technologies, maybe) uses at least one of those platforms to this day.

> One of the reasons why it takes so long for browser standards to be adopted is that they have to be integrated with all the cruft that is accumulating there for almost three decades now.

So what? Look, large software gets rewritten or replaced (i.e. people will use something else) every 20 years on average. If your tools and practices are not meant to be maintained for 20 years, then one of the following must be true: 1. they are not meant for large, serious software, 2. you are not aware of the realities of software production and use, or 3. you are aware, but believe that, unlike many who tried before, you will actually succeed in changing them. Given the current reality of software, backward compatibility is just more important to more people than agility.

> Combinatorial complexity is not your friend.

Every additional requirement results in increased complexity, and backward compatibility is just one more, one that seems necessary for really successful software (especially on the server side).

> "we probably grew it in a consistent way" handwaving.

Why do you think it's handwaving? The spec is the spec, and if you conform with the spec then that's exactly what you want.

j-pb
When stating that nobody wants to touch browsers with a ten-foot pole, I meant that nobody wants to contribute to browser development unless they are paid very, very well.
freshhawk
> His request to support all the abominations you have ever written and keep them compatible with recent changes might work if you have people pay for using your library and a company behind it

This is generally his focus, on big professional software. I'm also a Clojure dev and I'm on the other side of the fence on this one as well so sometimes I'm disappointed in the enterprise focus but I knew what I was getting into and it is still worth it. Am I crazy to think that maybe other lisps would have done better if they had demanded less purity?

Same with the examples of Unix APIs, Java and HTML. Sure, they are all bloated and horrible to work with. They are also massively, insanely successful. I think they are great examples because at that scale it's impressive that they work at all.

This is part of the pragmatism that makes Clojure great, they generally stay away from trying to solve the problem of huge projects being unwieldy and ugly and painful and instead they accept it as a given and work on tools to mitigate the problem. For a lot of people backwards compatibility isn't a choice, it's a requirement set in stone. Even though it always causes everyone to pull their hair out in frustration.

One day maybe one of these other research languages or experiments will find an answer to this, and prove that it works at scale. I will celebrate more than most.

kazinator
Almost everything is mutable in Common Lisp: lexical variables, objects, global function bindings. Code compiles to native and you can deliver a single binary executable with no dependencies.
junke
> Am I crazy to think that maybe other lisps would have done better if they had demanded less purity?

People associate Common Lisp with many different things, but not purity.

DigitalJack
The primary advantage of Clojure, to my mind, is the lovely data structures and the interaction with them.

Common Lisp and Scheme can implement them, and through reader macros interact with them in probably the same ways, but it would always be second- or third-class.

The second big deal for Clojure is the ecosystem.

I'd love a native language like Clojure that was driven by a language specification.

  How To Design A Good API and Why it Matters [0]
  The Principles of Clean Architecture [1]
  The State of the Art in Microservices by Adrian Cockcroft [2]
  "The Mess We're In" by Joe Armstrong [3]
[0] https://www.youtube.com/watch?v=aAb7hSCtvGw

[1] https://www.youtube.com/watch?v=o_TH-Y78tt4

[2] https://www.youtube.com/watch?v=pwpxq9-uw_0

[3] https://www.youtube.com/watch?v=lKXe3HUG2l4

zerognowl
+1 for Joe Armstrong's talk. Very funny, but uncomfortably true
panic
On the subject of API design, this talk is also quite good: https://www.youtube.com/watch?v=ZQ5_u8Lgvyk
Aug 15, 2016 · 2 points, 0 comments · submitted by adamnemecek
Aug 15, 2016 · 2 points, 0 comments · submitted by dkarapetyan
Looks like it has gotten a lot better than "Now for the tricky bit" days.

See https://youtu.be/lKXe3HUG2l4?t=640

Same here. As others have indicated, that would be more consistent with the title (and more interesting).

The more general idea is a content-addressable function repository, where, as you point out, code would have to be in some kind of normal form. Joe Armstrong toys with this idea in his talk "The mess we're in," one of my favorites. [0]

[0] https://www.youtube.com/watch?v=lKXe3HUG2l4&t=33m10s

jplur
Strongly agree! One of the reasons I'm excited about IPFS is content-addressable code linking. For example, running single JS functions through Google's Closure Compiler to normalize the symbols and using a package manager that would recursively replace IPFS links with the code.
genericpseudo
My semantic-web-loving cold dead heart disagrees with you on the "more interesting".

The one really good idea in the whole Semantic Web train-wreck (and at this distance, I think it's fair to call it that) was that everything should have a negotiable, dereferencable URL. REST includes that core principle.

A lot of single-page web apps make me sad because they've been designed without reference to that. If what you're building is genuinely an application, then I get it; but most of the time what you're building is a catalogue, and everything in that can have an address, so it should.

gavinpc
I agree 100% about catalogues and addressability. I'm also a diehard in that respect, and I've come to distinguish "apps" in a similar manner, as sites where either there's no "business case" for an ontology, or the content is too transient for it to matter.

Naming is hard... I think that's what makes the "semantic web" a kind of chimera even for those who endorse it. Can a URL really capture the worldwide identity of a thing? Over all time? And who's going to maintain all those names?

Suppose that naming everything is intractable as a human effort, but we still want addressability. Alan Kay takes this to the extreme, saying, why not give every object on the internet an IP address? [0] Not every resource, every object, in every program. It sounds facetious, but it's consistent with his general objects-as-computers-all-the-way-down view. His system designs (including those from VPRI) express the belief that hard barriers between the layers of a system (usage and application, application and framework, framework and OS) account for much of today's uncontrollable code bloat and the limits on how much scale systems can tolerate. The "everything gets an IP address" idea is just a recognition that network boundaries will eventually be seen the same way. From this perspective, it might be fruitful to think about how we'd identify things on the internet if they were homogenous with the objects in our applications.

[0] It's in one of his talks but I don't remember which.

Apr 28, 2016 · killercup on Introducing MIR
> The reason that this could be really cool is that you could hypothetically hash these MIRs and store these hashes in a public database of open-source code.

Sounds similar to what Joe Armstrong described at the end of "The Mess We're In" (StrangeLoop 2014): https://www.youtube.com/watch?v=lKXe3HUG2l4

the mess we're in with joe armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4
I discovered his Strange Loop talk a few months ago. Funny guy! (He is one of the creators of Erlang.)

The part where he talks about "the entropy reverser" is ~32 minutes in [1], related to the article linked by parent.

[1] "The Mess We're In" by Joe Armstrong (September, 2014) https://youtu.be/lKXe3HUG2l4?t=32m5s

Jan 12, 2016 · 1 points, 0 comments · submitted by joeblau
Oct 11, 2015 · 4 points, 0 comments · submitted by grey-area
Jul 20, 2015 · kylebrown on Do we need browsers?
JavaScript package managers like npm tend to work much more seamlessly than the old Unix ones. Wouldn't mind scrapping them all for https://node-os.com/

Also, the reason package managers suck is because.. entropy. See Joe Armstrong, The Mess We're In http://www.youtube.com/watch?v=lKXe3HUG2l4

mbrock
I find that the Nix package manager is much more principled, general, and promising as a model for robust and easy package managing. And with NixOS, it's a huge step towards a declarative and reversible way of managing a whole computer system.
BTW the author is Joe Armstrong -- Erlang's inventor.

The link to "The Mess We Are In" he refers to ( https://www.youtube.com/watch?v=lKXe3HUG2l4 ) is a fun and accessible talk he gave at the Strange Loop conference last year.

Let's not forget that the usage of JavaScript correlates with the need to target the omnipresent browser, not with the merits of JavaScript's idioms or design patterns.

JavaScript is a great hacker language. All the improvements stated above will extend the capacity of JS to quickly hack up MVP apps. With that said, I think the argument can be made that JavaScript was never designed from the ground up to support this new omnipresent, realtime, persistent application platform.

For me, it remains to be seen whether a server that was designed from the ground up to be a server, such as Netty, offers a more stable solution than Node.js.

I think some might say that ES6 is like putting lipstick on a pig: patching up JS so it keeps evolving as a more expressive language.

Every time I run NPM install I wonder what kinds of incidental complexity I am getting myself into, and then I dread having to learn yet another build system. Every time I run NPM I think of an old Jim Breuer joke [1] -- NPM is the "tequila" in the JS party of complexity. ASM.js is amazing, but are we getting a little too drunk on our technologies? ASM.js just might be amazing kool-aid when it comes to the sobering decision to build something from "the feet up" to serve one purpose and serve it well. JS can be the jack of all trades and master of none, and that can be a bad thing. One thing is for sure: JS is really fun, but I would not go building a stock exchange out of a JavaScript codebase.

Speed is not the only performance metric out there. I think stability is often overlooked in favor of having more power and more control. The influenza in the JS world is not spaghetti code but the harder-to-grok house of cards of library dependencies. This is just my amateur opinion, based on observations as I go further down the rabbit hole of my learning-to-code journey.

TL;DR JS is a great hacker language, but drink responsibly when it comes to importing the incidental complexity of additional libraries.

[1] Jim Breuer Just For Laughs http://www.youtube.com/watch?v=z8dvpsVEJEQ [2] "The Mess We're In" by Joe Armstrong http://www.youtube.com/watch?v=lKXe3HUG2l4

I saw Joe's strange loop talk [1] a while ago and I get the same vibe reading his post as I did when watching the video. It sounds very cool, but I can't shake the feeling that it only works for 85% of the code. That is to say if you program in exactly the right way, you will be able to do everything you want and it will work with this system, but there are ways of programming that won't work with this system.

More specifically I feel like there are two problems. 1) It feels suspiciously like there's a combination of halting problem and diagonalisation that shows there are an uncountably infinite number of functions that we want to write that can't be named (although I would want to have a better idea of how this is supposed to work before I try to hammer out a proof). 2) I don't understand how it's possible for any hashing scheme to encode necessary properties of a function such that the function with necessary properties has a different hash than an otherwise identical function without these properties. For example can we hash these functions such that stable sort looks different than unstable sort? Wouldn't we need dependent typing to encode all required properties? And if that's the case couldn't I pull a Gödel and show that there's always one more property not encodable in your system?

[1] - https://www.youtube.com/watch?v=lKXe3HUG2l4 [2]

[2] - https://news.ycombinator.com/item?id=8572920 (thanks for the link)

nowne
There are a countably infinite number of functions. A simple proof is that each function can be represented as a string, and there are a countably infinite number of strings over a finite alphabet. You could also argue that functions are equivalent to Turing machines, and there are only countably many Turing machines.
I think Mr. Armstrong would approve, given his comments near the end of https://www.youtube.com/watch?v=lKXe3HUG2l4, where he opines that the web would be great if, instead of URLs, every published document were just named with a hash of its content.
esfandia
We have a P2P file-sharing program that does this, called U-P2P (http://u-p2p.sf.net). Content is hashed, and you use a Gnutella search on the hash to retrieve it. Documents are organized by what we call "communities", which themselves are represented by a document and its corresponding hash. So the document name is really made up of two hashes: the one of the community it belongs to, and its own hash. You can use these hashes as hyperlinks, and U-P2P resolves them via search, as previously mentioned.

What we think is great about it is that the hash is location-independent. There could be multiple copies of the document at various locations at any given point. As long as there is at least one copy and it is reachable via search, it will be retrieved.

We also built a distributed Wiki based on that idea and platform, called P2Pedia (http://p2pedia.sf.net).

It's all very much an academic research project, so don't expect a beautiful interface or easy-to-install packaging or anything, but I think it's a good proof of concept.

(note to self: we should really move these to GitHub).

chenglou
In React.js, you can serialize your whole app state through a simple `JSON.stringify` and base64 encode that into the URL. The nice property of that is that you get to pass that URL around to friends, and when they click on it they'll go to the page, which decodes and deserializes the URL and reproduces the exact app state, down to the letters in the input boxes.

Effectively, this gives you "program as a value" where the same url means the same program. Immutable programs basically.

I've tried this and the current downside is that it looks extremely ugly when you try to share a link, lol. But this should be circumventable. The other downside is that this is a bit theoretical still. You'll have to exclude sensitive information such as passwords. Sometimes stuff is in a closure rather than in your `state`.
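
A bare-bones sketch of the encode/decode step, in plain TypeScript with a hypothetical state shape (no React specifics):

  interface AppState { query: string; page: number; }

  // Pack the state into a shareable URL fragment.
  function encodeState(state: AppState): string {
    return "#s=" + encodeURIComponent(btoa(JSON.stringify(state)));
  }

  // Unpack it again on page load.
  function decodeState(hash: string): AppState | null {
    const m = hash.match(/^#s=(.+)$/);
    return m ? JSON.parse(atob(decodeURIComponent(m[1]))) : null;
  }

  const url = "https://example.com/app" + encodeState({ query: "erlang", page: 2 });
  console.log(decodeState(new URL(url).hash)); // { query: "erlang", page: 2 }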

dyadic
And the other other downside would be when your app becomes big enough to not fit in a URL.
rictic
Emerging standard in that area, subresource integrity: http://w3c.github.io/webappsec/specs/subresourceintegrity/

It's initially just doing the simplest possible thing (making the resource unavailable unless its hash is valid) but semantically it will probably be allowed for the browser to resolve the resource using other methods (e.g. if it already has that resource cached from another URL) so long as the hash matches.

jacquesm
So we could simply set up a URL-shortening service that published such hashes. Unfortunately, with the 'dynamic' nature of web pages these days, that's going to be hard to go back to. It may be an interesting way to reboot the web though. 'Regular' URLs are then merely a DNS-like layer on top of a content-hashing scheme.
pyre
> every published document were just named with a hash of its content.

I see too many issues with this (for example):

- I publish a news article. I publish a retraction/update to said article. Now the article has a new hash. Does the old hash give you the old version of the article, or redirect you to the new version?

- How do we define 'document?' If we define it as the complete HTML page served up to the browser, then changes to the design of the site would invalidate all previous hashes. Pointing old hashes to new hashes is work, which will not always be done (leading to the same situation we have with site redesigns breaking old URLs).

endergen
Exactly, you should still be able to have references to persistent identities. Much like the semantics of Clojure, which has a distinction between values and references to identities like vars/agents, etc.

These URLs would be clearly marked of course.

andrewflnr
Why not just have all URLs be mutable aliases for hashes?
esfandia
Why not keep the old document and let the new version (child) refer to the old one (parent)? You then "just" need a refresh feature that can retrieve newer versions of the document for you. In our P2Pedia system (I referred to it in a sibling post earlier) you can go from the parent to its children via search.
rictic
http://ipfs.io lets you reference content in one of two ways: either an immutable hash of the content, or a reference to the public key that's allowed to publish/update an immutable hash of the content. Seems like a pretty good compromise.
Oct 22, 2014 · 6 points, 0 comments · submitted by dkarapetyan
Sep 19, 2014 · 157 points, 77 comments · submitted by sgrove
zaroth
"In the middle of the pattern matching algorithm, there was this single comment that read # and now for the tricky bit." (~10:40)

At ~28:00, is he saying that the optimal computing device could operate 27 orders of magnitude more efficiently than what we use today?

At ~36:00, finally something to make me sit up. "content addressable store".

everettForth
Thermodynamically speaking, a perfectly reversible computer which erases no bits (and creates no entropy) approaches zero energy per operation.

Kurzweil has pointed out that one can think of the 10^15 state changes per second going on inside a 1Kg rock with no outside energy input as a computation device.

Ok, so at 28 minutes, he's referring to some quantum mechanical lower bound to change a bit. I imagine the rock is using that really tiny amount of energy from the outside environment.
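
For a rough sense of scale, Landauer's principle puts the minimum energy to erase one bit at kT ln 2, roughly 3x10^-21 J at room temperature (quick back-of-the-envelope in TypeScript):

  const k = 1.380649e-23;                 // Boltzmann constant, J/K
  const T = 300;                          // room temperature, K
  const landauerLimit = k * T * Math.LN2; // minimum energy to erase one bit
  console.log(landauerLimit);             // ~2.9e-21 J, far below what today's hardware spends per operation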

signa11
> At ~36:00, finally something to make me sit up. "content addressable store".

"content addressable store" or cam (content-addressable-memory) is pretty well-known in cpu-arch, n/w equipment etc. domains. hashtables are s/w counterparts :)

xpe
He mentions these topics with regards to distributed hash tables:

  * https://en.wikipedia.org/wiki/Chord_(peer-to-peer)
  * https://en.wikipedia.org/wiki/Kademlia
rdtsc
Great talk. Very light-hearted and entertaining.

Always impressed by Joe. Programming since the 60s and still programming, still writing, giving talks. He is a great role model. I hope I am still programming and just as excited about it when I am his age.

jacquesm
I wish I were half that excited about it at my present age!
dmourati
I like his speaking style and appreciate the intro to Kademlia and Chord. This page has some visual representations: http://tutorials.jenkov.com/p2p/peer-routing-table.html
lifeisstillgood
I am lucky enough to work in an "internal open source environment" - I can and do search the whole code base of a major Fortune 500 company daily for pieces that fit my needs. And I often find them - but the process of getting them refactored to fit my exact needs (and so improving their code and reducing the overall entropy) is mostly impossible - because of humans. No one is really willing to change someone else's code without talking to them, agreeing, and getting past their "yes, I have tests, but if you change it then I don't really know ..."

It's a fundamental problem - good, well-maintained tests help, but this is a cultural, not a technical, problem.

_prometheus
Joe and I are thinking similarly :D I'm going to dump some ideas here.

---

# JRFC 27 - Hyper Modular Programming System

(moving it over to https://github.com/jbenet/random-ideas/issues/27)

Over the last six months I've crossed that dark threshold where building a programming language has become an appealing idea. Terrifyingly, I _might_ actually build this some day. Well, not a language, a programming _system_. The arguments behind its design are long and will be written up some day, but for now I'll just dump the central tenet and core ideas here.

## > Hyper Modularity - write symbols once

### An Illustration

You open your editor and begin to write a function. The first thing you do is write the [mandatory](https://code.google.com/p/go-wiki/wiki/CodeReviewComments#Do...) [doc comment](http://golang.org/doc/effective_go.html#commentary) describing what it does, and the type signature (yes, static typing). As you write, [your editor suggests](http://npmsearch.com/?q=factorial) lists of functions [published to the web](npmjs.org) (public or private) that match what you're typing. One appears promising, so you inspect it. The editor loads the code. If it is exactly what you were going to write, you select it, and you're done.

If no result fits what you want, you continue to write the function implementation. You decompose the problem as much as possible, each time attempting to reuse existing functions. When done, you save it. The editor/compiler/system parses the text, analyzes + compresses the resulting ASG to try to find "the one way" to write the function. This representation is then content addressed, and a module consisting of the compressed representation, the source, and function metadata (doc string, author, version, etc) is published to the [(permanent) web](http://ipfs.io/), for everyone else to use.

### Important Ideas

- exporting a symbol (functions, classes, constants, ...) is the unit of modularity (node)

- system centered around writing _functions_ and writing them once (node)

- stress interfaces, decomposition, abstraction, types (haskell)

- use doc string + function signatures to suggest already published implementations (node, Go)

- content address functions based on compressed representations

- track version history of functions + dependents (in case there are bug fixes, etc). (node)

- if a function has a bug, can crawl the code importing it and notify dependents of bugfix. (node, Go)

- use static analysis to infer version numbers: `<interface>.<implementation>` (semver inspired)

- when importing, you always bind to a version, but can choose to bind to `<interface>/<implementation>` or just `<interface>`

- e.g. `factorial = import QmZGhvJiYdp9Q/QmZGhvJiYdp9Q` (though editors can soften the ugly hashes) (node + ipfs)

- all modules (functions) are written and published to the (permanent) web (public or private)

- when importing a function, you import using its content address, and bind it to an explicit local name (`foo = import <function path>` type of thing)

- the registry of all functions is mounted locally and accessible in the filesystem (ipfs style)

- _hyper modular_ means both "very modular" and "modules are linked and on the web"

Note: this system is not about the language, it is about the machinery and process around producing, publishing, finding, reusing, running, testing, maintaining, auditing, bugfixing, republishing, and understanding code. (It's more about _the process of programming_, than _expressing programs_). This means that the system only expresses constraints on language properties, and might work with modified versions of existing languages.
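
A toy sketch of the content-addressing step, using Node's crypto module; the "normalization" here is just comment and whitespace stripping, a crude stand-in for the ASG compression described above:

  import { createHash } from "crypto";

  // Crude normalization: drop line comments and collapse whitespace.
  // A real system would hash a canonical form of the parsed program, not the raw text.
  function normalize(source: string): string {
    return source.replace(/\/\/.*$/gm, "").replace(/\s+/g, " ").trim();
  }

  function contentAddress(source: string): string {
    return createHash("sha256").update(normalize(source)).digest("hex");
  }

  const factorialSrc = "function factorial(n: number): number { return n <= 1 ? 1 : n * factorial(n - 1); }";
  console.log("import factorial from", contentAddress(factorialSrc)); // same normalized source => same name, everywhere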

akkartik
"The first thing you do is write the (mandatory) doc comment describing what it does, and the type signature. As you write, your editor suggests lists of functions published to the web that match what you're typing."

As a concrete example, if I say (picking a syntax at random; underscore denotes where my cursor is):

  add :: int -> int -> int
  def add(a, b)
    """add two numbers"""
    _
and if the permanent web contains two functions:

  proj1_add :: int -> int -> int
  def proj1_add(a, b)
    """returns sum of arguments"""
    a+b

  proj2_sub :: int -> int -> int
  def proj2_sub(a, b)
    """returns difference between a and b"""
    a-b
Would you envision your system "matching" both or just the first? If just the first, on what basis do you imagine figuring this out?
_prometheus
Just the first, on the strings "sum", "add", and potentially on the symbolic operation `args[0]+args[1]`.
lifeisstillgood
Yes. And yes. I doubt that our conception of these things will be in use in twenty years - but I do think that assisting the process of taking an idea and putting it into production (the process of programming) is going to be more and more a feature of our world.

It allows for bringing up the worst ten percent of code without limiting the top ten percent. It is far, far better than the Java idea of making it hard to shoot your foot off with the language.

fizixer
Interesting. I had an idea similar to the first part of your comment, in which 'reinventing the wheel' every time you start to write a new program is avoided by 'autocompleting your code' based on the vast database of open source projects on the internet. If a programmer starts writing code that declares some variables and opens a for loop, the smart editor starts searching the open source projects and lists the code chunks whose beginnings most resemble what you've typed so far, and then you can pick one of those if you like, or keep writing. I haven't thought through what could be done afterwards, though.

I tend to think it also has connections with a recent discussion here: https://news.ycombinator.com/item?id=8308881

thallukrish
It is a very interesting talk, though it might take a while for someone to connect the dots Joe has sprayed all over.

Somewhat related to this, I have been annoyed by the way apps scatter information and have been working to find a way of managing the mess.

http://productionjava.blogspot.in/2014/07/the-broken-web.htm...

and

http://productionjava.blogspot.in/2014/08/coding-can-be-puni...

timClicks
This talk attempts to provide a strategy for reducing complexity within software. The whole talk is really valuable, but if you're short of time.. skip to 36:00. In a real hurry? Start at 44:17.
p1mrx
During the last minute of the talk, he says:

"Computers are becoming a big environmental threat. They're using more energy than air traffic."

Is this actually true? Sure, the average person spends a lot more time on a computer than in a plane, but still it seems crazy that they'd be comparable. Or at least the comparison isn't very relevant, because the Internet can significantly reduce people's need to travel.

Turing_Machine
I don't know whether it's true or not, but the computer you're typing on is far from the only one you likely have running. Think of all the microprocessors in TVs, microwave ovens, thermostats, cars (I think most cars have about 30-50 MPUs nowadays)...

Let's ballpark it (fill in better numbers if you have them). Assume that a person takes an average of one 5,000 km trip by plane per year. ATW, a Boeing 737-900ER uses 2.59 liters of fuel per 100 km per seat, so that would be about 130 liters of fuel. Also ATW, Jet A-1 fuel has an energy density of 34.7 MJ/l, so about 4,500 MJ for the trip. A watt is one joule per second, and there are about 31 million seconds in a year, so the continuous power output equivalent for the plane flight would be about 145 watts. So, if all your computers put together are consuming more than ~150 watts, they're consuming more than the equivalent of a 5,000 km plane flight over the course of a year.

This is a very rough estimate, but it doesn't appear unreasonable on its face that his statement could be true.
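
The same ballpark arithmetic in code, using the figures quoted above:

  const litersForTrip = 5000 * (2.59 / 100);   // ~130 L of Jet A-1 per seat for 5,000 km
  const tripEnergyJ = litersForTrip * 34.7e6;  // 34.7 MJ/l => ~4.5e9 J
  const secondsPerYear = 365 * 24 * 3600;      // ~31.5 million s
  console.log(tripEnergyJ / secondsPerYear);   // ~143 W continuous-power equivalent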

tcopeland
How much energy is used by inefficient government systems that are running on old minicomputers and such? I bet government data centers have hundreds of old clunkers that could all be run on one modern rack of 1U boxes.
rdtsc
Well, modern aircraft are pretty efficient and probably not the top offender.

However, he also mentions that we probably can't do a lot more with computers to reduce their power consumption.

davidw
> Well modern aircraft are pretty efficient and probably not the top offender.

It depends entirely on how much they get used! An SUV that gets driven once a year contributes less carbon than a Prius that is driven all day every day.

Comparing the aggregate numbers is obviously the only way to compare computers and airplanes anyway.

adamnemecek
"Save the environment, code in C."
corysama
You are getting downvotes, but energy efficient computation has been a large driver in the recent upsurge in interest in C++.
adamnemecek
Wait, C++ has been getting more popular? I'm not complaining (I mostly like C++), I guess I just didn't get the memo.
pjmlp
Lets see.

- Symbian was coded in C++

- GCC moved to C++

- Clang and LLVM are done in C++

- Mac OS X drivers are written in Embedded C++ subset

- Windows team is moving the kernel code to be C++ compilable

- JPL is moving to C++

- AAA game studios have dropped C around PS3/XBox 360 timeframe

- It is the only common language available in all major mobile OS SDKs

adamnemecek
I mean I was confused by the word 'recent', which to me means the last 2-3 years.
pjmlp
A few items on my list cover your 'recent'.

Decision to move Windows to C++ was announced by Herb Sutter in one of his Microsoft talks and the Windows 8 DDK was the first to support C++ in kernel code.

JPL going C++ was covered at their CppCon 2014 talk.

GCC switched to C++ as implementation language in 2012.

In any case, I would say, all languages with native code compilers can be used in relation to energy efficient computation.

It is just that C and C++ are the only ones on the radar of developers (and managers) looking for mainstream languages with native compilers.

adamnemecek
True. Facebook has also been doubling down on C++ development it seems.
pjmlp
There is a talk from Andrei Alexandrescu at Going Native 2013 where he mentions that one of Facebook's KPIs is requests per watt.
pjmlp
"Save the environment. Make black hat hackers, anti-virus tool vendors jobless. Code in Ada."
johan_larson
And run it on OS/370.
johan_larson
There's a lot to be said for keeping things as simple as possible. Although what qualifies as simple varies from application to application.

I was doing some preliminary analysis for a small project recently, and considering various frameworks and tools. Eventually I realized I could implement what was needed using four JSPs producing static html, with a bit of styling in CSS. No AOP, no injection framework, no JavaScript. And no explicit differentiation between device types.

The resulting application will start up quickly -- which is important when running in a PaaS environment -- and should work on any browser, including weird old stuff like Lynx. Less butterfly. More rat.

dcre
This was definitely not one of the better talks of the conference (though I was embarrassed to say so given the way people [rightly] idolize Armstrong), so I highly recommend checking out the rest: https://www.youtube.com/channel/UC_QIfHvN9auy2CoOdSfMWDw/vid...
garretraziel
While I agree that his talk is a little bit "disconnected", I think it served its purpose, judging by the "think big" discussion that is going on here. While his talk doesn't have one clear topic, it does deliver huge and interesting thoughts.
grondilu
I totally agree with the "abolish names and places". Why can't I just write:

    $ cp hash://<somehash> .
and have my computer do whatever it takes to retrieve a file with this hash and copy it on my disk?
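
A sketch of what the machinery underneath might look like, with a hypothetical list of mirrors; the point is that any source is acceptable as long as the bytes hash to the requested digest:

  import { createHash } from "crypto";
  import { writeFileSync } from "fs";

  // Fetch content by its SHA-256 digest from any mirror, verify it, then write it locally.
  async function fetchByHash(sha256: string, mirrors: string[], dest: string): Promise<void> {
    for (const base of mirrors) {
      const res = await fetch(`${base}/${sha256}`); // hypothetical "GET /<hash>" mirror layout
      if (!res.ok) continue;
      const bytes = Buffer.from(await res.arrayBuffer());
      const actual = createHash("sha256").update(bytes).digest("hex");
      if (actual === sha256) {                      // where it came from doesn't matter, only the content
        writeFileSync(dest, bytes);
        return;
      }
    }
    throw new Error("no mirror served content matching " + sha256);
  }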
wpietri
Because this blocks the very human needs for error-checking and maintaining awareness of context.

It's not like you're going to type that in. You're going to copy and paste it from somewhere. So it's just as good to use

http://releases.ubuntu.com/14.04.1/ubuntu-14.04.1-server-amd...

as

hash://b4ed952f6693c42133f73936abcf86b8

In either case, your computer can do whatever it takes to get the file. With a useful URL, you'll have a reasonable notion about what's coming down and whether it matches your intentions.

Without that, the very natural question is, "Did I get the thing I wanted?" For example, it would be easy to paste the wrong hash code.

There are other benefits, like real-time binding. A hash is going to point to a particular sequence of bits. But you may not want a particular file, but rather the best current mapping from an idea to a file. E.g., if Ubuntu discovers an issue with their released ISO, they can make a new one and replace what gets served up by the URL.

lomnakkus
How would you remember the hash? I guess one could have some sort of directory-like system for mapping human-memorable names to hashes...
scoot
And of course it would have to be tree structured to avoid naming collisions and bloat. Oh, wait...
grondilu
> How would you remember the hash?

I wouldn't. I'd make a symbolic link.

Basically the current directory/names structure would be an abstract layer above the hash-based system.

justincormack
Plan 9 already did that in its file system...
oh_sigh
You can.

> aria2c magnet:?xt=urn:btih:1e99d95f....

grondilu
Didn't know about this. Thanks :-)
parasubvert
I'm not sure if this is sarcasm or not. The usability of such an approach is terrible: humans like names, and like hierarchy. This is the same reason we use DNS instead of IP addresses.

There was URN https://en.m.wikipedia.org/wiki/Uniform_Resource_Name many moons ago that is still used. A URN resolver is a software library that could convert that identifier to a URL.

URLs aren't much different from URNs but they actually specified a default resolution algorithm that everyone could fall back on. They were more successful because there was less need to separate identifiers and locators than originally thought, though it's still a debatable point whether the results are intuitive (eg. HTTP URLs for XML namespace identifiers which may or may not be dereferenceable).

HTTP URLs took advantage of DNS as an existing globally deployed resolver, coupled with a universally deployed path resolver (the web server) the rest was history. You could create a URL scheme called "hash" but it would be hard to see how you could design a standard resolver unless it was one big centralized hash table in the sky - you still would need to, at the very least, map objects to IP addresses.

grondilu
> humans like names, and like hierarchy.

They do, but that does not mean there should not be other ways to access data. Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.

> You could create a URL scheme called "hash" but it would be hard to see how you could design a standard resolver unless it was one big centralized hash table in the sky - you still would need to, at the very least, map objects to IP addresses.

There would be an underlying P2P protocol that cp would use. On the other hand, cp doesn't even use FTP or HTTP so maybe that's too much to ask.

Maybe with curl or wget, then.

oakwhiz
BitTorrent magnet links already kind of do this.

Theoretically speaking, isn't it possible to create a virtual BitTorrent FUSE filesystem?

wpietri
Why should there be that? You're talking about an enormous, complicated system. What's the use case that justifies the effort?
parasubvert
> Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.

I'm not sure you've thought through the complexity of what you're asking for.

Hashes require (a) a hash function everyone agrees to, (b) a way to resolve them to an IP address.

Unless you synchronized all global hashes across the Internet on everyone's computer (the git hashed project model -- which we know doesn't scale beyond a certain point unless you bucket things into independent hashes you care about), you'd basically have to do something like hash://ip_address/bucket/hash or hash://bucket/hash if you want to give a monopoly to one IP address that manages the giant hash table in the sky.

Which is back to URLs and HTTP, and no different from, say, Amazon S3.

serve_yay
I'm guessing this is about what a disaster it is to use software, still to this day, but I just can't deal with his storytelling style. Takes forever to say "OpenOffice has a shitty bug".
sergiotapia
What is this about?

Edit: Strange Loop is a multi-disciplinary conference that aims to bring together the developers and thinkers building tomorrow's technology in fields such as emerging languages, alternative databases, concurrency, distributed systems, mobile development, and the web.

Strange Loop was created in 2009 by software developer Alex Miller and is now run by a team of St. Louis-based friends and developers under Strange Loop LLC, a for-profit but not particularly profitable venture.

anigbrowl
Bloat.
trenchwarfare
new to coding, and volunteered for a couple shifts in exchange for entry. first conference ever. this was AWESOME! incredibly well run, and obv not about the $$ - but they freaking deserve all the success they get, financial and otherwise!
_asciiker_
this is so true.. we already spend a lot more time fixing and tweaking code than actually creating.
readerrrr
Wow this guy doesn't know what he is talking about. Just a bunch of numbers without any arguments.

I had to stop watching when a laptop was compared to a black hole.

I'm sure the laymen are impressed though.

jacquesm
Your bio bit says:

> Please comment if you downvote. Or even better; just comment.

So here you go: you are utterly clueless, before you write comments like these do your homework or you will get downvoted a lot.

Or even better; just don't comment. Until you've done your homework, that is. It just increases the noise and does not add to the conversation at all.

raspasov
FYI this guy designed Erlang :)
mlvljr
We are.
orbifold
I think you missed the point: that slide was about the theoretical limits of computation; it is a very weak upper bound that won't be achieved.
habitue
So, while in general I've enjoyed other things Joe Armstrong has written, I think this talk is pretty discombobulated and doesn't have a coherent narrative.

Here are some of the problems Joe posed:

  - bugs in software like Open Office, Keynote, grunt
  - code not being commented
  - computers booting slowly
  - computers using too much energy
  - code being written for efficiency rather than readability
He talks about distributed hash tables at the end. An interesting topic, definitely cool, but they have nothing to do with the problems he posed earlier.

This seems more like a disconnected list of gripes, plus a completely unrelated list of things he currently finds neat to think about. Which is totally fine, but I don't think it makes a particularly great talk.
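Since distributed hash tables come up here, a toy sketch of the core key-placement idea (Kademlia-style XOR distance; generic background, not taken from the talk): keys and node IDs live in the same hash space, and a key is stored on the node whose ID is closest to it.

  import hashlib

  # Node names are made up; in a real DHT the IDs are random 160-bit values.
  def sha1_int(s: str) -> int:
      return int(hashlib.sha1(s.encode()).hexdigest(), 16)

  nodes = [sha1_int(f"node-{i}") for i in range(8)]

  def responsible_node(key: str) -> int:
      k = sha1_int(key)
      return min(nodes, key=lambda n: n ^ k)   # smallest XOR distance wins

  print(hex(responsible_node("some-file.txt")))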

3rd3
There are a couple of things that seem incoherent, but maybe I'm missing something:

- Is the only reason he touches on the limits of computation that they secure distributed hash tables, and the only reason he touches on computing efficiency that there is room for improvement in energy consumption?

- It seems contradictory that he advocates biologically inspired systems and lowering entropy at the same time. Aren't biological systems even messier than current computer systems?

- Wouldn't the 'condenser' very likely require AGI to be of any use for us?

XorNot
DNA is the poster child for spaghetti code.

You have a function, it codes for a gene. Only then you have its anti-sense translation, which can also code for a gene. And then you have post-translational processing, which takes that gene product and makes it into any number of other things. And then you have DNA-binding proteins which affect readability, so that a gene can code for a different gene when the normal start/stop codons are made accessible or inaccessible further up the DNA strand. And then the whole program also grinds to a halt if you remove any of the "junk", because the junk is used to control execution (translation) speed and to inhibit the program from self-destructing (cancer).

Zigurd
Who designed that? Unbelievable.
dj-wonk
A wise old graybeard who wrote the book on evolutionary and genetic algorithms.
mattholtom
Came here to see this. I expected a well-reasoned argument for functional programming and how entrenched the OO mentality is. What's the mess we're in again?
contingencies
> What's the mess we're in again?

Capitalism. Nobody gets paid to think about the big problems.

dj-wonk
Complexity, the Destroyer of Simplicity. (Joe's talk is broad, perhaps intentionally so, and hopefully will promote some big-picture thinking.)
jacquesm
I think it goes with the title of the talk. 'The Mess We're In' is not limited to any one of those items in particular but is a total view when you add all of them up. Bugs, badly documented code, slow boot, energy consumption and hard-to-understand code all contribute to the mess we're in. And it's far from an exhaustive list.
pjmlp
Many of those would be fixed if there were a real concern for quality, with corresponding responsibility when things go wrong.
jacquesm
I think the bigger issue is the existence of 'disclaimers'. Software production is the only branch of industry that I'm aware of that is capable of getting out from under manufacturing defect claims in that we state categorically (as an industry) that we have no responsibility, liability or even obligation to fix in case we ship a defective product. That really needs to change.
tormeh
This is why companies that really need good code have internal developers.
RickHull
Disclaiming liability and such is an important F/OSS norm. Of course, proprietary software will improve on this (only) if forced to by competition. The omnipresence of EULAs is a much bigger problem, though. I think the F/OSS norms are better all around.
jacquesm
> Disclaiming liability and such is an important F/OSS norm.

Indeed. So there can be a market for companies that take on liability when serving commercial customers using F/OSS code that they have audited and that they feel exposes no more risks than they can bear. The original authors should definitely not be liable if they label their code as alpha or beta quality and do not wish to be exposed at all. They are doing a service to society. But once you aim your code at being used in production by entities that can suffer vast losses if your code turns out to be defective (in other words, if you sell your stuff to a business or private person) then you should be liable for those damages, or at a minimum you should insure against those damages.

Compare software to, for instance, the engineering profession to see how strange this software anomaly is.

davidw
One way or the other, the problems with software are mostly a matter of economics and incentives.

Computers and software are the way they are because of the set of tradeoffs that the market rewards.

It's certainly possible to write software with fewer bugs, that consumes fewer CPU cycles and less memory, starts faster, and so on: but it does less. So far, most people and businesses prefer software that does more, at the cost of slower boot times, more CPU usage, and a few more bugs.

jacquesm
I think it is mostly a lack of choice. If everybody does it then 'the market' becomes a de-facto monopoly and someone trying to do it right would not stand out in a meaningful way until it is too late. After all, all software is presented as 'bug free' until proven otherwise. Your bug free (really) software looks just as good as my bug free (really not) software on the outside.

Six months down the line, when my not-so-bug-free code eats your data, I will point to that line in my EULA that says I'm not liable. Nobody will care; after all, it is your data that got lost, not theirs. The fact that your EULA does not have that line and that you offer a warranty does not count for anything unless someone is willing to pay a premium. The only people that would like to pay that premium are the ones that already lost their data...

So it's an industry phenomenon. Imagine extrapolating this to buildings. Engineers claim their buildings will stand. Those engineers that talk nonsense will be sued out of business. But if they could disclaim responsibility they would continue to happily practice their borked trade and as a rule people would suffer from this. And so engineer became a word that actually meant something.

But in software 'engineer' is roughly equivalent to 'can hold keyboard without dropping it'.

davidw
> Your bug free (really) software looks just as good as my bug free (really not) software on the outside.

No, actually, it looks a lot worse: given the same time and developers, the bug free software will do way less than the buggier software. That, or at feature parity, the bug-free software takes more time and/or requires more people, so arrives later or costs more.

I don't have any direct experience, but I suspect there are niches here and there where the market and/or regulations put a premium on no bugs. Avionics? Some categories of medical software?

jacquesm
Being able to sell software has precious little to do with the actual product and everything to do with marketing. So my crap software might (on the outside) look even better!

You can only tell good-quality software from bad-quality software by auditing the code, not by observing the software from a user's perspective (unless it refuses even to perform the basics).

davidw
Observing the software from a user's perspective is all that counts, though. Marketing is important, yes, but if you're in a niche where quality counts more because bugs cost your users money, then people will sit up and take notice, eventually.
tjr
Some industries have decided that software really does matter, and go to greater lengths to make sure it works.

It'd be annoying if Things for iOS crashed and lost all of my data. It'd be horrifying if flight control software crashed and all aboard a plane were killed. It stands to reason that some software is and should be held to higher standards than other software. It probably doesn't make sense that all software should be held to the same high standard, as it is extremely time- and resource-consuming to ship avionics software. Do folks really want to dish out a few $thousand for a copy of Things for iOS?

And some companies do already take responsibility for open source software. In aerospace development, we routinely use GNU software that has been thoroughly inspected and certified as good by companies that accept many thousands of dollars to stand behind it. (Of course, if we were to upgrade from their GNU Foo 2.1.0 to the FSF's copy of GNU Foo 2.2.0, then all bets are off.)

jasonlotito
Granted, but then the price of such software would skyrocket. The price people pay for most software reflects the fact that such liability is not covered. Couple that with common software development practices, as well as the time invested.

It's not merely accepting liability, there are a whole slew of changes that need to come before this, and frankly, I doubt most people would pay for that. Indeed, if people want to be covered now, they can be. They just have to pay for it.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.