HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
The Only Unbreakable Law

Molly Rocket · YouTube · 138 HN points · 9 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Molly Rocket's video "The Only Unbreakable Law".
YouTube Summary
There are promising candidates for "laws" governing computer software. But are there any specifically for software architecture? In this lecture, I describe the only viable candidate I've so far seen.

Originally given to the School of Informatics of the Technical University of Madrid.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
desktop environments (from Windows to iOS, including OSX, macOS, KDE, Gnome and everything in between) sort of already do this. they provide services and they provide apps/programs that require those services.

what they don't provide is interchangeability of programs from different environments. probably the closest thing is the freedesktop.org semi-standards, but there's not much compatibility or interoperability between the KDE and Gnome stacks. (KDE's Plasma needs all the usual K-things and Gnome's thing needs, well, I don't know what it needs, but it does almost nothing but needs a bunch of things to do that :D Xfce sort of piggybacks on Gnome, but also uses a lot of their own shit, but at least Xfce is great.)

basically this is due to Conway's law:

https://www.youtube.com/watch?v=5IUj1EZwpJY&t=2153s

because there's no counteracting force that's pushing back to make things use a standard :/

Not even to mention the impact Conway's Law suggests! As a company grows in the number of specialized teams, you'd expect to see magnifying difficulty in maintaining efficient communication, eventually impacting what the product looks like (or at least, how robust the code is and how easy it is to augment). Seems like it could be a bad feedback loop - code got more complex indirectly because of the number of teams, so naturally you hire more people to maintain the same output.

See this video by Casey M. https://youtu.be/5IUj1EZwpJY

> the major foot-gun that got a lot of places in trouble was the premature move to microservices

I sometimes wonder if the move to microservices isn't just a weird consequence of Conway's law in reverse: make a department of each developer, let them have their thing.

(See also this amazing video about Conway's law: https://www.youtube.com/watch?v=5IUj1EZwpJY )

mjdiloreto
This is absolutely what microservices are about. It's arguably their greatest strength, because (at least in theory) I can decouple my team from your team and we can _only_ communicate over a strict interface.
treis
You can do that without introducing an HTTP/RPC boundary.
mejutoco
You can, but it requires discipline and/or tooling. With microservices you are strongly incentivized (I would say forced) to keep the separation.
mjdiloreto
Exactly! To be clear to parent commenter, I'm not endorsing microservices to solve this organizational problem, just pointing out it's part of the reason to choose microservices.
treis
>You can, but it requires discipline and/or tooling.

Pretty much every language comes with a way of exposing a limited API to other parts of the application. Java, as an example, requires you to specifically export the parts of your module that other modules are allowed to consume. If you only export a public API then you've achieved the same benefit as a microservice except now it's type checked and doesn't encounter the pitfalls of a network call.
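
For illustration, a minimal sketch of the Java module mechanism being described; the module and package names here are made up:

  // module-info.java for a hypothetical "billing" module.
  // Only the api package is exported; the implementation package stays
  // invisible to other modules at compile time, even if its classes are public.
  module com.example.billing {
      exports com.example.billing.api;
      // com.example.billing.internal is deliberately not exported
  }

Other modules then declare "requires com.example.billing;" and can only compile against the exported package, which is the type-checked boundary described above.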

mejutoco
I agree with you. There are ways, and they work. If you have different teams stepping on each other's toes, they might be disincentivized to keep the separation. Ideally they will not, but without someone enforcing it a team might end up in this situation. I see it as a potential social question (like Conway’s law).
okay. so. a bit offtopic, but ... how come it's 2022 and Microsoft, with all its glory and industry leading best practices and trillions of microdollars of Azure-colored dollars valuation ... doesn't have a library for this? you know, the company that makes the thing, the suite that is used worldwide, underground, spacestationside, all the 365 days of the year.

I mean it's no wonder the introduction of the computer doesn't show up on the GDP charts when the industry is in fucking shambles and isn't even ashamed of it ... https://www.youtube.com/watch?v=5IUj1EZwpJY&t=35m40s

:|

Interestingly this talk by Casey Muratori [1] makes a reasonable rhetorical case that Amdahl’s law (about process parallelization), Brooks’ law (about team network communication) and Conway’s Law (about how organizational communication shapes software structure) might all be variants of the same basic principle: the trade-off between parallelizable and unparallelizable work…

[1] https://m.youtube.com/watch?v=5IUj1EZwpJY

Great article. I also suggest watching Casey M.'s recent video[0] about Conway's Law, on how the organization of your company determines what you can/will do and how the organization of your code follows the same principles.

Starting with just the handlers and introducing layers once you're sure it's time for them is a great way to go.

[0]https://www.youtube.com/watch?v=5IUj1EZwpJY

Funny, just watched Casey Muratori cover the same topic on his channel: https://youtu.be/5IUj1EZwpJY?t=699

I thought he covered it well. He considered the temporal aspect of orgs, which means that software, over the years, ends up mimicking not only the current org but every org that has ever been there over time.

edit: oh, and unexpectedly, he's still around and on twitter. https://twitter.com/conways_law

cma
Interestingly Twitter's org chart, or something, crept into Conway's work as well, producing this tweet-unroll-as-a-pdf communication monstrosity:

https://melconway.com/Home/pdf/politics-emergence.pdf

A good illustration of The Only Unbreakable Law https://youtu.be/5IUj1EZwpJY (Conway's Law, but do watch the talk for more context)
Mar 18, 2022 · 1 point, 0 comments · submitted by DeathArrow
Mar 18, 2022 · 3 points, 0 comments · submitted by ifree
Mar 17, 2022 · 115 points, 98 comments · submitted by ivank
beebmam
A law that doesn't make predictions isn't a law. A law that is not falsifiable isn't a law. It is an unscientific belief.

It's truly incredible to me that people, like the person in this video, can speak with such confidence about how, for example from this video, "if we look at an org chart for an organization, and we look at the structure of the products that it produces, we would expect them to basically just be collapses of each other [i.e. a homomorphism]". Also known as https://en.wikipedia.org/wiki/Conway%27s_law

A sincere person in search of truth asks questions like the following when they encounter a claim:

- can we think of circumstances where this law is not true?

- can we test this claim to show that it is true?

- can a test be devised which would falsify this claim?

- if this claim is true, what are the mechanisms of action for the claim?

- if this claim is true, are there any contradictions that would arise with other things we know are true?

Using the same metaphor used in this video, if the currently recognized law of gravitation (General Relativity) made predictions which were different than what is observed, then that law is wrong. And a scientist would adjust their law to reality and be more than willing to point out the gaps of explanation in our law of gravitation (which they do).

If we're serious about Computer Science (and Software Engineering) being a field in pursuit of truth, we should be as rigorous and critical as other fields of science and engineering when it comes to making claims.

1970-01-01
He spent a few minutes explaining how this isn't a scientific law. I think you missed it. Go 6 minutes in.
squeegmeister
Doesn't Conway's law make predictions and isn't it falsifiable? You pointed it out yourself "if we look at an org chart for an organization, and we look at the structure of the products that it produces, we would expect them to basically just be collapses of each other [i.e. a homomorphism]"

You could validate these claims by looking at org charts across various companies, comparing them with the corresponding software architectures, and coming up with some measure of how closely they resemble each other.

holyyikes
Speaking with confidence about subjects he doesn't understand is his whole schtick. All he talks about is game development when he's never shipped a game (and no, working on some tiny component of one of Blow's games, which wasn't even a good implementation, by the way, doesn't count as shipping a game).
ukj
Please spare us from truth-seeking and leave that to the philosophers.

In science/engineering we care about instrumentalism, not truth.

All models are wrong. Some are useful.

beebmam
So let me try to understand what you're saying here, and correct me if I'm wrong.

Is this a mischaracterization of a subset of your claim: "Scientists do not care if their models are truthful, but they do care that their models are useful."?

jchw
Well, kind of. Models are inherently not absolute truths. Otherwise, they wouldn’t be models.
hcrean
Or is there just a correlation between truthfulness and usefulness of a model...
ukj
No. “False” models can still make accurate and useful predictions.

The Ptolemaic/geocentric models still work, even if the mathematics are a bit unwieldy.

bombcar
They work well enough that navigation can still be done by them (apparently the tables/math is easier than the “correct” ones).
ukj
Yes. Something to that effect.

Truth is a philosophical notion. It is not the concern of science.

I am aligned with model-dependent realism; or epistemic constructivism or thereabouts.

https://en.wikipedia.org/wiki/Model-dependent_realism

https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...

beebmam
On a scale of 0 to 100, where 0 is no confidence and 100 is 100% certainty, how confident are you in your belief of this philosophy of Constructivism?
ukj
Sorry, I am unable to assign any meaning to your question.

I don’t have confidence IN my beliefs. I have confidence in the usefulness of my mental toolbox.

On a scale of 0 to 100 how confident are you in your belief of this science?

nmaleki
> All models are wrong. Some are useful.

While this may hold true in almost every case, just as truth is not absolute, neither is "wrongness".

Furthermore, there exists some model which is neither right nor wrong.[1]

[1] https://publications.recursion.is "When all contradictions are resolved what will be left is - an unprovable truth"

ukj
Why have you chosen a system with a non-contradiction axiom?

Why didn’t you choose dialetheism?

Seems like an arbitrary choice… Could it be that you are culturally biased in the orthodoxy of Western philosophy dating all the way back to Aristotle?

https://plato.stanford.edu/entries/dialetheism/

nmaleki
https://recursion.is has a running list of quotes, one of those quotes is "True and False" - this is the central idea behind dialetheism and is an integral part to the Recursion Convergence Conjecture. I have already recorded audio for a video on the topic soon to be released on https://recursion.is/youtube
ukj
Let me generalise.

There are systems of reasoning which don’t only concern themselves with Booleans (true and false). There are infinitely many non-Boolean types possible in type theory.

Human constructions/models have many other interesting and desirable qualities/properties beyond “truth” and “falsity”. Mathematicians/Logicians/Computer scientists are deeply interested in understanding those semantic properties.

Recursion is one of those properties studied. It is a foundational concept, not the be all - end all.

Convergence and divergence are another example of interesting semantic properties. They are studied in bifurcation theory.

https://en.wikipedia.org/wiki/Bifurcation_diagram

Whatever your conjecture you are only constructing a lens through which you are interpreting the world.

nmaleki
Yes. I fully agree with all of these points.

Why do you point these specific ideas out? I do not, yet, understand the purpose behind your words

ukj
Well, that is somewhat peculiar…You seem to be agreeing with me disagreeing with you.

From the yardstick of “specificity” my critique of your conjecture is precisely on the grounds that it is too specific; or not general enough.

In particular - the purpose behind my words is to understand why you are constraining thought/expression to merely recursive convergence and to the detriment of recursive divergence.

Why the arbitrary specificity?

In general - I don’t understand the purpose of your conjecture either.

nmaleki
The insight you have given me today amazes me.

My original paper was not titled Recursion "Convergence" Conjecture, it was titled "Quantum Recursion Postulate". I later realized I was incorrectly conflating popular use of the word "quantum" with "superposition". So, I re-titled my paper.

This felt like a good decision. Convergence is one of the central ideas of the paper: I want others to disprove the paper to converge on a solution. The focus of my paper was not quantum mechanics, nor was the paper describing only a small system. An understandable mistake for someone who has no formal physics background, and has only conversed with those who do.

Of course, this re-titling had the now obvious effect that I lost the diverging aspect of the paper. To further illustrate how I managed to damage the original meaning behind the paper, I will show you perhaps the only proof I have that my conjecture also includes divergence: "The more information considered, the more likely the solution"[1] "Once all is explained, there will always be more perspectives you can attempt to explain that which is already explained from. There will always be new layers to explain previous layers with. Seeking these new layers is what is important, as you can use them to help others understand what you already do. These others, which may want explanations, will stem from these new layers and only understand these new layers."[2]

Looks like I need to change one of my scripts (which I already recorded the audio for >.>) and re-title my paper to the "Recursion Conjecture". Thank you!

[2] https://notes.recursion.is/Philosophy/It+is+what+it+is

ukj
Well, I am glad you are finding my commentary helpful but I am still no closer to understanding the reasons for your arbitrary choices.

Why is it a “recursion conjecture”?

Why isn’t it a “Corecursion conjecture”?

nmaleki
https://recursioncorecursion.is does not have the same marketing effect that https://recursion.is does :P

Additionally, I admit that I was not aware of corecursion prior to this conversation. I have been referring to the two as the same concept this entire time. I do not have access to all that is knowable. I did not go to the same universities/* you did that taught this concept. To further illustrate this, "corecursion" has About 18,100 results on Google. "Recursion" has About 20,900,000 results. All YouTube videos on the subject have sub-1k views; since your comment, I have watched many of these videos.

I am prepared to say that corecursion is an integral part to the conjecture; just as important as recursion.

Lastly, it is my understanding that they are, in a way, each other. Ask yourself how you define both and then ask yourself what is the major difference between the two. Are you certain that your definition of both is absolute? Could it be that one could stem from the other in some case? I'll give you a hint, I am being entirely rhetorical. They do stem from one another. Where this stemming-process has *no origin*[3] in an *infinite* system - neither idea can be seen as the "top-level" idea. They are both crucial.

This conclusion may bother you, and I will release a more rigorous argument soon. Corecursion will be a/the topic of one/all of my future videos.

PS: This conversation allowed me to think I understand how type theory integrates with mathematics, to my own personal plague of not internally using concepts I don't think I fully understand

[3] https://notes.recursion.is/Philosophy/Origin

krona
Truth is just that which corresponds to or is a reflection of reality. Anyone who thinks otherwise is thinking too hard.

The true nature of reality might be unknowable, but science is a means to move towards it.

Dudeman112
And that's how you know someone got way too drunk on other people's ruminations without first hand experience.

Of course scientists care about their models being close to/matching reality. Have you ever spent any amount of time in an university's lab?

Approximately everyone I've met in my alma mater cared about their models being close to reality (or as truthy as they can be).

ukj
And that’s how you know somebody has only second-hand experience of science - having never reflected upon, or examined the (very human) limits of the knowledge they have been given by others.

What scientific procedure would you use to establish whether one working scientific model “matches” reality; while another working model doesn’t? Nobody has direct access to reality outside of their own theoretical paradigm of understanding.

It isn’t just me saying such “crazy” things…

https://en.wikipedia.org/wiki/Model-dependent_realism

Dudeman112
>What scientific procedure would you use to establish whether one working scientific model “matches” reality; while another working model doesn’t?

See what model1 gives as a prediction for input X. See what model2 gives as a prediction for input X.

Apply input X in the world and compare with what actually happened.

The models that more closely predict the observations are more likely to reflect reality (aka being closer to the truth).

You don't have direct access to reality. And yet if someone thinks F = ma isn't closer to the truth than F = ma^4 (for the usual symbols and approximations and blah blah, assume I'm aware Einstein existed) then they got way too drunk on other people's philosophies.

Most scientists I met and worked with care if their models are close to reality/truth. Epistemological uncertainty does not mean every model is equally untruthful.

ukj
You haven’t really tackled the strongest interpretation of my question head on.

If two different models/theories are observationally equivalent; if two (or more) different curves can be fitted to the same dataset which model is “more true”?

https://en.wikipedia.org/wiki/Observational_equivalence

https://plato.stanford.edu/entries/scientific-underdetermina...

And how do you go from “F=ma for the usual symbols…” to solving the symbol-grounding problem? Symbols are epistemic entities, not ontological entities.

https://en.wikipedia.org/wiki/Symbol_grounding_problem

The model works. That is all there is to be said about it.

If you begin asking questions such as “is the model true or not? Do the terms in the equation correspond to real world entities?” you are no longer doing science, you are doing philosophy.

Dudeman112
>If two different models/theories are observationally equivalent; if two (or more) different curves can be fitted to the same dataset which model is “more true”?

You don't know which one is more likely to be true (all other things being equal). However, if all else isn't equal and you are well acquainted with a Mr Bayes you can probably do all sorts of fancy Mathematics to estimate which model is more likely to be true. Or just go with your hunch. Or find yourself a dataset that will invalidate one or both of your models.

Mind you, I don't think this is an issue for most subjects. How often do you get different functions that spew the same output in all observable and theoretical instances?

>If you begin asking questions such as “is the model true or not? Do the terms in the equation correspond to real world entities?” you are no longer doing science, you are doing philosophy

Yes, people often do that when they are engrossed in a subject. Most scientists I met care about the subject they are working on and philosophise a lot about it. Some don't anymore, possibly because working in the academy should be considered a carcinogenic hazard by the World Health Organization.

When talking about scientists people sometimes forget that they are human, often driven towards the career by just wanting to know. They care if their models match / are close to reality to the best of their knowledge.

Do remember that we are talking about scientists, not science. Almost everyone I met in my short time in the academy fundamentally cared if their models closely matched reality. Most of them didn't get where they ended up at for utilitarian reasons, and lots of them are on a permanent state of annoyance at not being able to fill some forms with "I'm just curious" to get funds.

ukj
>However, if all else isn't equal and you are well acquainted with a Mr Bayes you can probably do all sorts of fancy Mathematics to estimate which model is more likely to be true.

But you still need to tell Mr Bayes what your criteria are for model-selection.

So the output of your calculations is some scalar "truth-value" (higher is better) - what's your input?

In simpler terms: the truthfulness of your model is a function of... what ?

>Mind you, I don't think this is an issue for most subjects. How often do you get different functions that spew the same output in the all observable and theoretical instances?

In curve fitting? Practically all the time! It is a mathematical fact that infinitely many curves fit a finite dataset. So we pick curves that fit approximately, not exactly, with the obvious underfitting/overfitting optimisation problem that needs to be solved in conjunction.
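
To make that concrete, here's a toy illustration (the two models are invented for the example): two different polynomials agree exactly on a three-point dataset and only disagree outside it.

  // Two models that both fit {(0,0), (1,1), (2,2)} exactly but diverge elsewhere.
  public class Underdetermination {
      static double f1(double x) { return x; }                         // "simple" model
      static double f2(double x) { return x + x * (x - 1) * (x - 2); } // same fit, extra wiggle

      public static void main(String[] args) {
          for (double x : new double[] {0, 1, 2, 3}) {
              System.out.printf("x=%.0f  f1=%.1f  f2=%.1f%n", x, f1(x), f2(x));
          }
          // At x=3 the models finally disagree: f1=3.0, f2=9.0.
      }
  }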

Ygg2
> Please spare us from truth-seeking and leave that to the philosophers.

What? Parent is literally saying don't trust a model that has no predictive power.

You're saying leave model evaluation to philosophers. Last time we did that, we had the theory of four elements and phlogiston.

ukj
No. I am saying leave truth/truthfulness to philosophers.

Every model has predictive power. Even the God model.

It predicts something existing; rather than nothing. Of course, it is a worthless prediction but it is a prediction.

Ygg2
> No. I am saying leave truth/truthfulness to philosophers.

I think you mean scientists. I.e., construct an experiment, validate, etc.

The God model has the least possible predictive power imaginable.

Also, according to philosophers, God must exist because it's a perfect being. And a perfect being must exist because it is perfect. They are like people left in a sensory deprivation tank hallucinating random patterns into a sensible picture.

ukj
No. I mean precisely what I said.

Science does not pursue truth. Philosophy does.

https://en.wikipedia.org/wiki/Instrumentalism

Ygg2
Philosophy divorced from experiment just devolves into navel gazing and batshit insanity.
ukj
And empiricism divorced from philosophical reflection fails to understand its own limits/biases.

It is still navel gazing batshit insanity, but if it works well enough nobody questions why it doesn’t work better.

jchw
The more I think about this comment, the more I feel it is missing the point. They do expound a fair bit on the term “law” but even admit both that the title is a bit flippant (and not the one they originally wanted to go with) and that their formulation of the concept is not yet meeting criteria necessary to consider it a “law,” only that they believe there probably is an underlying law.

Like many have said in the past, computer science is not really “science” or even engineering. And if that’s true, then software architecture really isn’t science. It’s closer to a soft science if anything. There may not really be a meaningful definition of “law” and that degree of rigor may not be very easy to accomplish. After all, there’s hardly anything objective about it.

However, that doesn’t mean that observations about it are not interesting. I certainly think Conway’s law is interesting despite that it may not meet the criteria to be called a “law” in a harder science.

lliamander
I agree with you in general about the point of the article and the validity of Conway's Law, but I have a quibble:

> Like many have said in the past, computer science is not really “science” or even engineering.

Computer Science is absolutely science, as much as physics is. Science is the ability to make precise predictive models of the world using math. Computer Science provides plenty of those. For example, by knowing the binary search algorithm we know that the time it takes to do a git-bisect is bounded by the log of the number of commits in a repo. We can also know what kinds of encryption keys could be discovered through brute force guessing given presently available hardware. We can precisely describe the kinds of coordination problems that can afflict concurrent processes (not just in computers) and the kind of mechanisms that can be used to avoid those problems. All of these add knowledge to the world of the same type as the laws of magnetism or gravitation.
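
As a quick worked check of the bisect claim (the repo sizes below are arbitrary), the worst case is ceil(log2(n)) steps for n candidate commits:

  // Worst-case bisect steps: ceil(log2(n)) for n candidate commits.
  public class BisectBound {
      public static void main(String[] args) {
          for (int commits : new int[] {10, 1_000, 1_000_000}) {
              int steps = (int) Math.ceil(Math.log(commits) / Math.log(2));
              System.out.println(commits + " commits -> at most " + steps + " bisect steps");
          }
      }
  }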

Computer Science is the science of process. Software engineering is the application of that science to messy human problems, and is no less an engineering discipline than any other. Like other engineering disciplines there will always be room for heuristics and human judgment. Conway's Law absolutely falls into that.

The precision of any engineering discipline will always be limited by the precision of the softer sciences, as the phenomenon they describe are an ineradicable aspect of the engineering process.

erikerikson
Outside of the U.S.A. (notably in the U.K.) the computer science department is known as the informatics department. This makes the larger claim that the field's interest is information theory. Consider the information theoretic guarantees of total order broadcast and its equivalence to consensus. Consider also the relationships between automata and languages, computational complexity theory, and the relational calculus. Many proofs of correctness have been made over many matters in the field and these are expected to remain as true as the laws of physics.
lliamander
You're absolutely right about information being a key part of computer science. I don't really know how to articulate the relationship between "process" and "information" but it makes sense to me to group them together, and I do like the name "informatics" to describe the science.
belugacat
This is kind of getting into weird semantics.

For what it’s worth I hold a BSc from a large, reputable UK university in computer science, not informatics, so it’s not as universal as you suggest.

I do also hold an undergraduate degree from a French university in informatique, a contraction of “information automatique”. Both words are equally important in the name of that discipline, and the “automatique” part is very much about process.

But we are debating the map here, not the territory.

erikerikson
Thank you for the correction, I should have used softer language. My sample size is small but my edit link is gone. To clarify my meaning, I don't mean to contradict process being core to the field but rather that process is within what I consider a more broad scope for the theory of information. While we could talk about the process involved in proving the equivalence of an automata to a language it feels like a stretch to call the fact of their equivalence a part of the study of process itself.

If you're game, I'm curious what seems weird about the semantics?

Not just the map but the terms written on it. :D

brown
Nice video. Conway's 1968 paper is a good find.

The conclusion is slightly defeatist, but ultimately correct. At time 49:23, Casey says "But we have to do them right now, because we haven't figured out how to do it better."

[excessive modularity] is the worst form of [software development] – except for all the others that have been tried.

andrelaszlo
Perhaps it's a desperate attempt to defer the problem of what the organization should look like. The more nodes you have in your graph, the easier it will be to collapse it into something that maps to the organization you want?
galangalalgol
Sometimes it feels like a desperate attempt to defer working on some part of the problem a team doesn't know how to solve. Usually some domain specific thing no one wants to think about. So onion layers get put in wrapped around that bit without actually solving it, just creating an abstraction by which the rest of the system will use the "magic". But often the abstraction overconstrains the domain specific magic through ignorance, so it all has to change when someone gets around to adding it. Seen this over and over.
0xbadc0de5
Having watched Casey's Handmade Hero series since day-1, I've always found him to be highly skilled and insightful. While not a game developer myself, learning from his approach to first-principles software development and code optimization has paid dividends in my day to day work nonetheless.
Ygg2
I've found him passionate and smart. But rarely right.

Like him bashing SOLID principles. It read like a man arguing against hammers and instead suggesting drills (which is fine if you need to drill a hole but bad advice if you want to hammer a nail). Like yeah, SOLID is overused and overstated, but the principles were invented to stop a certain set of problems.

jdougan
With both Casey and Jon Blow I find that if I mentally prefix what they say with "When developing AAA games..." then they are almost always right. Much of it doesn't transfer far outside that domain.
holyyikes
What are you talking about? Neither one of them has ever shipped a AAA game.
Ygg2
I think it's charitable to read that as shipped a relatively performant game.
holyyikes
Actually the funny thing is that Braid's CPU usage was always kind of high for what it was doing. Not sure about The Witness.

The emperor definitely doesn't have any clothes though when it comes to either one of them.

ImprobableTruth
The most bizarre part is that he took issue with the Liskov substitution principle, which is just the intuitive definition of what semantic subtyping means. This isn't some crazy clean-code "functions should have less than 20 lines" dogma; subtyping without semantic subtyping is just nonsensical.
miltondts
> SOLID is over used and over-stated, but they were invented to stop certain set of problems.

The problem is, when was it ever shown that they solve any problem? Where are the measurements showing less dev time, fewer bugs, or better performance? Where even is an algorithm to show your software is SOLID? People can't even agree on what the principles mean.

teddyh
SOLID is for software under long-term development and use. Games are developed during a relatively short time, then released, and then not changed very much. The next game is developed independently, as a separate project and effort. SOLID is meant to support continuous development where changes and new features are constantly needed. Game development does not easily fit this mold.
holyyikes
Tell that to World of Warcraft. Really is hilarious just how goddamned clueless most of the comments in this thread are.
teddyh
Please don’t misunderstand me; I really think that SOLID is good to follow. My comment was more meant to express understanding for people having doubts about SOLID in situations where the SOLID principles don’t give as much of an obvious benefit.
hahamrfunnyguy
I would say it's more like a builder arguing against building codes. A high-quality building can certainly be built without them, but sticking to some known formulas makes sure that everyone working on it is able to anticipate what others are going to do. It also makes sure that the building won't collapse, though this aspect doesn't really apply to software.

Software has gotten a lot more complicated since the 1990's and from my experience, projects where design patterns are used effectively run a lot more smoothly than those where they're not used or not used effectively. It's great when you can open a project from 15 years ago and say "Actually, the code isn't too bad!" because the developers followed some rules of thumb.

I agree with Casey that these rules of thumb aren't going to lead to higher-quality software from the end user's perspective. I doubt that they're going to make it worse though, unless they're blindly followed.

krapp
On the other hand, Casey's been working on Handmade Hero for almost a decade, and has what amounts to an overengineered debug room with less actual gameplay complexity than you'd find in any framework tutorial. Meanwhile actual game developers would pick up a framework, use all those abstractions and libraries Casey finds unnecessary and surpass Handmade Hero in an afternoon.

There's something to be said for not wasting time on things that don't matter to anyone but yourself, and code aesthetics is often one of those things.

leoncaet
He works on HH for about 2 hours per week.
holyyikes
That's not true. He works on it offline, and honestly, even if he did only work on it 2 hours per week, it's STILL not as far along as it should be by now for a supposed game development "guru." You got hoodwinked.
krapp
That's still far too much time for far too little progress. Six hundred videos and counting, each over an hour in length. What is there to learn except how not to develop a game, much less (as I recall him claiming) a AAA quality game?
holyyikes
What's hilarious is that if you actually look at the code he's written for it, it's a horrible ball of spaghetti. The whole thing is a scam. Just because HE says "this is bad code" doesn't mean it's bad, and just because he wrote it, it doesn't mean it's good. Most people would find the clusterfuck he wrote unmaintainable, and the funny thing is that it doesn't even really DO ANYTHING YET.
Jadinette
The goal of Handmade Hero is to show how to do everything from scratch, not how to make a game as fast as possible using existing technology. I agree that the game is currently light on the gameplay side of things, but he has said countless times that he doesn't like gameplay code and that he's an engine programmer. I don't know if you have watched a lot of episodes, but each hour of him programming is full of complicated stuff which is hard to follow, yet he does it in the blink of an eye. Each time you think you have understood something, he starts doing something way more complicated, and he always nails it. Look at how many programming streamers are able to speak 2+ hours continuously without any cut. Most YouTubers have to look at source code they wrote before turning the camera on, just so they can retype it during the video! Even for a simple Tetris or something!

I'm curious to know what you think is unmaintainable in his codebase. Sure he uses alternative little-known techniques for say, memory management, but once you know what is going on, the code is pretty clear I think.

He posted an issue about the Windows terminal being slow, and proposed simple things to speed things up. What happened? The Windows Terminal team declared that what he proposed is an entire doctoral research project that would be a massive investment. He did the freaking thing in 2 days and said that it's "nothing" and "very simple". The team then apologized for being dumb and is now working on implementing his idea, that they described as "original" and "very valuable"

3np
I have no opinion on them, as this is the first time I've seen them, but their misrepresentation of Amdahl's law bothered me: their version makes it incorrect to the point of being useless. Here's how they present it:

  T(n) = b + (T(1)-b)/n

  T(n): Time to run task for n parallel threads (workers)
  b: Time it takes to run part of task that can not be parallelized

  Therefore:

  T(inf) = b
This misses the cost of coordinating between workers. It also removes the key part of it being a theoretical limit of the speedup as resources increase.

Their attempt to simplify it makes the new version dangerously wrong if you take their word for it.

Less wrong:

  T(n) >= b + (T(1)-b)/n
Or even (but now it's not really Amdahl's again):

  T(n) >= b + (T(1)-b)/n + C(n)
In fact, for many problems T(n) > T(n-1) for some n, as at some point C(n) > (T(1)-b)/n

This is not really "a more subtle improvement", "new version", or "refinement". It was known in the field in the 70s. That is, Brooks' Law can apply to parallelized computation, not just to teamwork. OP observes this, but it still doesn't make them see the errors in their previous assertion.
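
A quick numeric sketch of that corrected form, with an assumed linear coordination cost C(n) and constants picked purely for illustration:

  // T(n) >= b + (T(1) - b)/n + C(n), here with an assumed C(n) = c*(n - 1).
  // Past some n the coordination term dominates and adding workers slows things down.
  public class AmdahlWithCoordination {
      public static void main(String[] args) {
          double t1 = 100.0; // total time on one worker
          double b  = 10.0;  // unparallelizable part
          double c  = 0.5;   // assumed per-extra-worker coordination cost
          for (int n = 1; n <= 64; n *= 2) {
              double tn = b + (t1 - b) / n + c * (n - 1);
              System.out.printf("n=%2d  T(n)=%.2f%n", n, tn);
          }
          // With these numbers the minimum is around n=16; T(32) and T(64) are worse again.
      }
  }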

Sinidir
Yeah, his claims about good and bad are too unconditional and generic. He always looks at things from his specific vantage point and then declares something categorically bad, which ticks me off because it's so dogmatic and lacking in context.
teddyh
Not to mention that the “S” in SOLID is closely related to what he is talking about here, namely that a software component ought to be related to only one specific source of changes – i.e. a box in an org chart – in order to minimize the risk of it being changed badly and consequently breaking something. This implies that you should have a connection between the org chart and the code, in order for the changes in the code over time to not constantly break the code.
unyttigfjelltol
The law holds for human orgs for a reason beyond "communication". No business unit that is asked to contribute to a project would allow its contribution to take a form different from a discrete, quantifiable module. The 5-person team in the lecture came up with a 5-pass compiler because none of the 5 people were willing to have an unquantifiable contribution indistinguishable from having not shown up to work in the first place.

The case with software classes and components is different. None of those inanimate objects care if their contribution is measurable, so the law does not apply as strongly with respect to them.

phtrivier
Even more depressing than "The 20 million lines problem" - because the gist of the talk is that programming with more than one person is doomed to fail, and programming with less than two is even worse.
holyyikes
What a load of crap. The whole idea that everything is terrible and we're all doomed is so ridiculous when you can clearly see that people have shipped oodles of successful software.

These people are all clickbait drama queens. The truth is that Things Are Actually Pretty Decent™ but that doesn't make for a clickbait video.

justsomeuser
I disagree with his conclusion at the end (that libraries, package managers, virtual machines all introduce human boundaries that create worse end results).

I do believe we should strive to have as few dependencies as possible. But not zero, as we would end up having to build the entire stack from the hardware up (you would not create your own CPU instruction set).

I also think Conway would like the concept of package managers - a public evolution of the best tools you can leverage with very low personnel time cost.

The only time we can remove human boundaries is when there is only one human brain running the entire economy. In that unlikely case, the "boundaries" will still be there, but will be internally represented.

Certain abstractions are general enough to not hinder the designs you build on top.

"Good enough" solutions are often ok.

I think the driving force behind Conway's law is that of life splitting up its organisms into different species and individual organisms. You see "hierarchy" in the design of a tree's branches, in lungs, and in the way calories and nutrients travel through different levels of a food chain. The end nodes collect, process, and send to a central node.

If a different mechanism had evolved, we would have a different "Conway's law".

mellosouls
An hour long video with comments turned off? A summary would be useful...
bentheklutz
Towards the bottom of the thread there is a pretty reasonable summary.

https://news.ycombinator.com/item?id=30738878

worewood
Yeah... I can understand blocking comments on political videos or other sensitive topics, but on this? Well, if you can't take the criticism perhaps don't post it on YouTube.
Etherlord87
It's sad how bad YouTube comments are, but usually some good comments float to the top, and while neither the high confidence of a comment's author nor a high number of likes gives you a guarantee that it's right, often a comment will be a good lead - for example it could link to this HN conversation on the video.

I took a glimpse at what people here say about the video and found too many criticisms to consider it reasonable to spend time watching it...

seanhunter
I would say just read Conway’s actual paper “How do committees invent”.

This is one of those talks where I’m convinced the person is being paid by the second. By the time we had got to the third iteration of “before I tell you the thing I’m going to talk about, let me (define what a law is / critique Harvard Business Review / give an irrelevant sidebar about the language of technical papers and the fact that ‘Datamation’ still exists)”, I had totally lost any and all interest.

clarkdale
I'm curious how to use Conway's Law to be more effective. I can learn from Brooks' and Amdahl's laws to improve software, but how do I apply Conway's?

I think the closest thing is Bezos's API mandate. This is an attempt to flatten communications across a vast organization, with the upfront cost that each team must build and maintain an API.

sbmthakur
The Morning Paper did a summary of the paper mentioned in the video.

https://blog.acolyer.org/2019/12/13/how-do-committees-invent...

kirykl
The lede is buried at the bottom of the Challenger Deep on this one
FpUser
I think the basic ideas and architectures stay more or less the same for decades. Wrapping them in some fancy terminology and calling them new does not mean "breaking the law". It is like RPC vs CORBA vs DCOM vs 10,000 other come-and-go standards which are essentially the same thing.
axiosgunnar
tldw?
0xbadc0de5
You care about this if:

- Software architecture matters to you

- Software performance matters to you

- Team and organizational performance matters to you

teddyh
It’s Conway's law.
smegsicle
conway's law: how it arises, why it's unavoidable, relationship w brooks' and amdahl's laws, how it explains complexity compounding over time in the form of integrating with past organization structure..
holyyikes
I really can't stand this guy. He's just so offensive and pretentious. He's not as smart as he thinks he is. He's not a professor. He can't give a "lecture." He's just some code monkey who worked on Bink Video, a product that 99% of his end users hate, and somehow he thinks he's John Carmack. He actually knows nothing about game development. He's swindled some rubes into giving him money for years on his Handmade Hero con. You could have had an entire career in the video game industry in the amount of time he's managed to produce some pile of crap that doesn't even have a complete gameplay loop.

You can safely ignore this guy. He doesn't know what he's talking about and thinks that if he makes his videos long enough people will just assume that he's an expert and there's got to be some good stuff in there.

HexDecOctBin
> You can safely ignore this guy.

Thanks for telling us this. We might have thought for ourselves based on the content, but now we don't need to.

holyyikes
When he can't make a video shorter than 50 minutes, figuring it out for yourself is a pretty serious time investment, but whatever.
dS0rrow
do you mind sharing why you consider handmade hero a con?
houseinthewoods
created: 35 minutes ago
Laremere
I think it's fair to have opinions, but you are overstating your case here -

> You could have had an entire career in the video game industry in the amount of time he's managed to produce some pile of crap that doesn't even have a complete gameplay loop.

Including taking time to explain concepts, Q+A, and design choices made to educate that are then iterated upon (eg, doing 2d graphics from scratch then moving to 3d), he has accumulated approximately 28 weeks of full time work.[1] That's a very sad "career" in video games.

[1] Math: Using playlist https://www.youtube.com/playlist?list=PLnuhp3Xd9PYTt6svyQPyR... with a calculation tool https://ytplaylist-len.herokuapp.com/ gives an average of 1 hour 40 minutes per video. The tool has a limit of 500 videos, so multiplying average length by true playlist length, then dividing by 40 hours per week gives 27.83 weeks: https://www.wolframalpha.com/input?i=%281+hour+40+minutes%29...

holyyikes
He doesn't just work on it on-stream or on video. I remember seeing some video a long time ago where he literally talked about what he was doing with it offline in between videos, so there you go. So much for your point.
ghostly_s
What is this clickbait garbage?
ladberg
Casey's name should be in the title! I think a lot of HNers respect him.
russellbeattie
Really?? I have no idea who he is, and this video didn't impress me at all.
seanhunter
With you on that. I had no idea who he was and definitely didn’t come out of that talk wanting more of whatever he is selling.

Incredibly trite and wildly overlong is how I would describe this.

aaaaaaaaaaab
Conway's "law" is such a cop-out excuse for shipping shitty software... There's no such law of nature that says you must ship shitty software. Enterprises ship shitty software because the average tenure of their developers is 2 years, and they have no incentive to improve things beyond what's necessary to pay the bills.
Ygg2
I mean at the end of the day it boils down to a few things.

- Entropy

- Capitalism

- Greed

To not be vague: users' greed for features (and less so for performance) causes an increase in complexity. A complex system is by definition more chaotic and harder to optimize.

And capitalism rewards doing just a bit better than the competition. I.e., optimize your time on the quickest things that give the most users satisfaction.

Jensson
> And Capitalism rewards doing just a bit better than competition. I.e. optimize your time on quickest things that gives most users satisfaction.

More specifically, capitalism doesn't reward the worker for doing software architecture better at all, just rewards the company. So the workers will naturally make the politically safe choice for software architecture and not care whether that is right, which means making it look like the org chart.

kortilla
Greed and capitalism don’t explain it. We get shitty software from governments and research labs that have no competition and that get no rewards for shipping extra features.
jchw
I’m fairly conflicted by this, because it’s quite insightful, but it’s also probably overselling itself.

They expound quite hard on the idea that abstraction is inherently bad, and I feel this is a poor choice of words, and perhaps a mistake. Abstractions have costs, in many forms, especially if the abstraction is a poor one.

However… they don’t seem to differentiate between good and bad abstractions. They seem to regard all abstraction as simply unnecessary, only used because our brains cannot deal with the entire problem at once.

I think you could make this argument to some degree but it breaks down when you start to see where abstraction is worth the cost. As an example, let’s say I’m writing a service that needs a key value store. If I make a simple abstraction for it, with well-defined properties for exactly how it should behave, how data consistency should work, etc., then implement multiple backends, this is a good abstraction. The reason is that software doesn’t have fixed requirements. Some users may be running a small instance of something on their desktop or a NAS or what have you, whereas others may be running software on gigantic clusters and would benefit from using clustered key value stores that are much more difficult to set up, for valid, unchangeable reasons, even if we were to get rid of the abstraction and fully integrate a distributed key-value store right into our program.
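
For concreteness, a bare-bones sketch of that kind of abstraction (the names and backends here are hypothetical, not from the talk):

  import java.util.Map;
  import java.util.Optional;
  import java.util.concurrent.ConcurrentHashMap;

  // A deliberately small key-value contract: the point is the well-defined
  // behaviour, so callers don't care which backend sits underneath.
  interface KeyValueStore {
      Optional<String> get(String key);
      void put(String key, String value);
  }

  // Backend for the small desktop/NAS case.
  final class InMemoryStore implements KeyValueStore {
      private final Map<String, String> data = new ConcurrentHashMap<>();
      public Optional<String> get(String key) { return Optional.ofNullable(data.get(key)); }
      public void put(String key, String value) { data.put(key, value); }
  }

  // A ClusteredStore for the gigantic-cluster case would implement the same
  // interface and delegate to whatever distributed store the deployment uses.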

Also, requirements change temporally. Clang could’ve implemented everything with no abstractions, but when Clang was created it targeted older and fewer versions of programming languages. The abstractions have cost, particularly when they are bad; but not having abstractions would’ve cost far more, IMO. Extending and reusing software that has little abstraction is very difficult because there are very few reliable boundaries you can work off of. Adding a new operator in Clang is probably still hard, but I’m sure it would be harder if you carried forward all of the abstraction and folded it down instead. You need some kind of abstraction if you want cheap extensibility.

So my conclusion is basically, abstraction is not bad. Libraries are not bad. Engines are not bad. They simply have costs that are not accounted for properly, and may cost more than the value they provide in many cases. Intuitively, we know this; It’s basically the knee-jerk software engineers get when they get into a build-vs-buy discussion. You feel the jolt. The library has an amazing feature list, but something tells you it won’t be so easy. That’s the hidden cost right there.

chii
> You need some kind of abstraction if you want cheap extensibility.

so does the need for cheap extensibility come first, in which case you build up the abstraction to enable it? Or does the abstraction get built up first, which gives you cheap extensibility, and then users start needing it afterwards?

What if clang didn't end up being as popular as it did, and the effort it took for the "cheap" extensibility was never needed as no one actually extended it?

jchw
It’s usually non-trivial to add later, so you have no choice but to essentially make a guess and hope it’s right. There’s basically no real alternative.

Of course, if it is trivial to defer an abstraction until later, then it’s better to do that. But it is almost definitely not for Clang operators, which cross cut almost every distinct unit of the program (lexing, parsing, etc.)

(This is also referenced at some point in the talk itself, though I can’t remember exactly where, but my take is that you can’t really fully understand the ideal architecture to solve a problem before trying to solve it, and in trying to solve it you must make architectural decisions.)

parksy
It is thought provoking at the very least. The perspective I saw his views on abstraction from is the perspective of the product itself. Imagine there is some perfect solution for the ultimate technology. It's a perfect whole, every operation has meaning and purpose, is as optimised as it could possibly be, the hardware and software blend seamlessly with no unnecessary redundancy and so on. Maybe such a thing is so advanced we'd barely recognise it woven into the very fabric of our biology and culture, who knows.

Our human limitations and abstractions limit our ability to approach this goal. It's not so much that the idea of abstraction is bad (it's useful for actually getting stuff done), but more that we should always keep in mind that abstraction itself is not the goal; it's just a tool we have to use to get there, because of human limits and, I would also say, natural limits (like computational efficiency etc).

Obviously such a thing of beauty is a hypothetical extreme. I can see this idea being useful on a smaller scale, like in organising a team, designing a product, or a piece of software, we should pay a lot more attention to where we draw boxes around our design space.

But I am kind of with you when it comes to the broader-sweeping implications. How far does this concept go, and what power do individuals have without some social techno-revolution to make any broader architectural changes to basically anything? I have a mobile phone that communicates via radio with centralised systems. These centralised systems exist to govern access and enforce payment for services, because that's how society itself is structured. Perhaps there's key value stores at various layers of this architecture. There is a store (again because money) where I can choose my own apps (because of human desire for freedom of choice) or view ads (money), all human constructs. (It's all becoming quite philosophical, another human trait.)

But is that the best possible solution for the problem of networked personal computer devices? Is mobile networked computing even an optimal solution for biological lifeforms, is it a technological evolutionary dead end or a precursor to some broader construct we're yet to discover, and if so then what do those superior structures look like from a design and organisational perspective? Perhaps on some alien world they have figured it out, and maybe their solution doesn't include radio communication or key value stores, or maybe it's something we wouldn't even recognise as technology, who knows.

What I take away from it is a call to think about why we're designing things the way that we do. Is there a way we can draw the design space differently and organise teams to consolidate or reimagine the problems they're solving in order to rule out truly unnecessary abstraction and keep only that which is necessary?

I can't imagine how it might look, but I am sure if someone figured out a more optimal way of organising hardware and software that served the infinitely variable needs of humanity securely and efficiently with less abstraction for the same or greater flexibility and that would continue to do so in perpetuity, that person would become rather rich (unless that solution made money obsolete, here we go again).

xelxebar
> They seem to regard all abstraction as simply unnecessary...

I would be surprised if the video author agreed with this statement. My understanding of his point is something more general and almost trivially true, i.e. a given set of abstractions can't solve problems the same way as a different set of abstractions. The strong version is a Venn diagram:

         ┌──────────────────────────────┐
         │                              │
         │ Implementations expressable  │
         │ by abstraction set A         │
         │                              │
         │       ┌──────────────────────┼───────┐
         │       │                      │       │
         │       │ Implementations      │       │
         │       │ expressable by both  │       │
         │       │                      │       │
         └───────┼──────────────────────┘       │
                 │ Implementations expressable  │
                 │ by abstraction set B         │
                 │                              │
                 └──────────────────────────────┘
In particular, if you are still in the "this problem isn't completely, rigorously nailed down" phase, then building abstraction-hierarchy A implicitly means that you cannot explore some of the solutions available to abstraction-hierarchy B.

Said another way, abstractions cut down the space of possible solutions.

Cutting down the space definitely has large upsides. If you're trying to build a nuclear plant, we probably want to weed out the cake-baking solutions. For especially large and complex problems with horrendously large solutions spaces, abstractions function as a way of compartmentalizing some of that solution space into manageable sub-problems. However, maybe there is a better set of sub-problems?

After a year hammering on some software development project, you probably have a much clearer idea of the problem's nuances than when first starting. Wouldn't it be great if we could perform low-cost rewrites? If you can hold the entire source code in your head and cogitate about it, that's probably even possible. Could any one human hold all of Firefox in their head?

One point from the video stands out to me: abstractions might just be a necessary evil. They are effective tools for helping humans cogitate about complex problems, which involves limiting our ability to cogitate about potential solutions.

Anyway, I am reminded of the surprising solutions found by genetic algorithms and their ilk.

holyyikes
He wouldn't agree with any statement that demonstrates how stupid his original diatribe was. That's his go-to defense. "You didn't understand my point because anyone who does will of course agree with me as I'm perfect."
Gravyness
I just wanted to say what great work you did on that diagram. Thanks for that
Mar 16, 2022 · 5 points, 1 comment · submitted by bestinterest
blakehaswell
I really enjoyed this. I think looking at the organisational structure through time is a good take that I haven't really seen addressed so clearly before.

I would have liked to see an exploration of the "through time" lens on some of the more micro code-organisation structures he talked about at the end like class hierarchies. It's definitely a common problem in legacy code—there's some idea you want to express but the existing structures make that very difficult and so you end up twisting your idea to fit, further ossifying the existing structures.

I've also seen cases where the organisation structure was changed to effect some change, but the existing code structure makes that so difficult that the software doesn't actually change to reflect the new structure at all, and instead the new organisation is just slowed down by coordination costs at the organisational level as well as different coordination costs at a technical level.

Mar 16, 2022 · 6 points, 0 comments · submitted by miltondts
Mar 16, 2022 · 8 points, 0 comments · submitted by nipung271
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.