HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Simple Made Easy

Rich Hickey · InfoQ · 856 HN points · 240 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Rich Hickey's video "Simple Made Easy".
Watch on InfoQ
InfoQ Summary
Rich Hickey emphasizes the virtues of simplicity over those of easiness, showing that while many choose what is easy they may end up with complexity, and that the better way is to find ease along the simplicity path.
HN Theater Rankings
  • Ranked #22 all time · view

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
> "Who Says C is Simple?"

People who don't know what "simple" means and confuse it with "easy".

"Easy" things almost always lead to astonishing complexity.

Also it's easy to see just how complex C is: Have a look at a formal description of it! (And compare to a truly simple language like e.g. LISP).

In contrast, the semantics of a basic lambda-calculus language fit on half a page in K.
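To make the claim concrete, here is a hedged sketch of how small a lambda-calculus core really is: the entire semantics is three cases. This is an illustrative Python toy, not the K definition the comment refers to.

```python
# Minimal call-by-value lambda calculus evaluator. Terms are tuples:
# ('var', name) | ('lam', name, body) | ('app', func, arg).

def evaluate(term, env=None):
    env = env or {}
    kind = term[0]
    if kind == 'var':                      # look the name up in the environment
        return env[term[1]]
    if kind == 'lam':                      # abstraction: close over the environment
        _, name, body = term
        return lambda arg: evaluate(body, {**env, name: arg})
    if kind == 'app':                      # application: evaluate both, then call
        _, func, arg = term
        return evaluate(func, env)(evaluate(arg, env))
    raise ValueError(f"unknown term: {term!r}")

# (\x. x) applied to (\y. y) returns an identity function
identity = ('lam', 'x', ('var', 'x'))
result = evaluate(('app', identity, identity))
```

Compare that with any formal description of C, which runs to hundreds of pages.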

+1 for simple is not easy, yet with enough thinking and ingenious ideas it is achievable. Thanks for the links.

"simplicity is the ultimate sophistication." -- Leonardo da Vinci

If I wasn't clear, I think that's fair! And I certainly appreciate your blog entry. I think if anything my quibble is with our (or some people's) obsession with "minimalism". Like, in one sense, it's less complicated to live in the woods; in another, it's tremendously difficult for some. And I'm not certain we should glorify it. It's just a different way of living.

For example, OpenBSD won't adopt ZFS. Won't adopt Rust within the OS. Won't use hyperthreading. None of these are even up for debate. They have their reasons, but I do enjoy my creature comforts, because at a certain level additional ease does have its benefits.

It feels like the other side of the Simple Made Easy talk by Rich Hickey[0]. Yes, we shouldn't aggravate complexity, but also we need not make things unnecessarily hard on ourselves either for the sake of "simplicity" or "minimalism". It's a balance for the rest of us. I think the goal should be to strive for both easy and simple, and an OpenBSD desktop falls short re: easy for me. And, if the point is "It's simple/minimal!", I think that simplicity should have benefits (it's more composable..., it fits on a very small flash device,...). We shouldn't simply worship simplicity for its own sake.


> Won't use hyperthreading.

That's a switch.

> Won't adopt Rust within the OS.

It's in packages. Get it.

> OpenBSD won't adopt ZFS

Hammer2 would be preferable.

> That's a switch.

That's fair. I suppose what I was getting at is: OpenBSD seems... mono-maniacal(?), and that's one reason it remains niche.

> It's in packages. Get it.

Yeah, but won't adopt inside the base OS.

> Hammer2 would be preferable.

"...from a licensing standpoint..." Otherwise, ZFS is still obviously the state of the art. Most of us shrug and say "Whatever?" re: the licensing noise and run the stuff that works?

ZFS has two flaws: the license and that it's too intrusive.
Aug 31, 2022 · ABS on Cognitive loads in programming
It's going to take quite some time to read it all, since it's long and deserves the time, but since it's soliciting early feedback, here it is: research and quote all the work done over the last 10 or so years by others in this space!!

The topic of cognitive load in software development is far from rarely considered; in fact it's been somewhat "popular" for several years, depending on what communities and circles you participate in, on- and off-line.

I'm surprised not to find any mention of things like:

- the Team Topologies book by Skelton and Pais, published in 2019 where they cover the topic. Particularly of note here is the fact that Skelton has a Computer Science BSc and a Neuroscience MSc

- the many, many, many articles, posts, discussions and conference sessions on cognitive load from the same authors and connected people in subsequent years (I'd say 2021 was a particularly rich year for the topic)

- Dan North's sessions, articles and posts from around 2013/2014 in which he talks about code that fits in your head but no more, referencing James Lewis's original... insight. E.g. his GOTO 2014 session "Kicking the Complexity Habit"; a quick search returns references to it even in articles from 2020

- Rich Hickey's famous 2011 Simple Made Easy talk

About "Cognitive load being rarely considered", I meant it in actual project work, not in the sense that the idea of applying cognitive psychology to programming is new.

I am sure the topic has been considered in an academic setting. I would not feel qualified to provide a good reading list on the topic.

This is also related to code quality, thus it will have a ton of relevant work.

Thank you for the links, in particular to Skelton and Pais; I will have a look!

>research and quote all the works done over the last 10 years or so by researchers in this space!!

I totally understand your point and appreciate you linking those resources, however I think it's important to remember that the author's post is from a personal blog, not from a scientific journal or arxiv.

Perhaps OP would've never posted this if he felt that his "contribution" wasn't novel enough. Additionally, there's a chance that the wording and tone the author used might speak to people who found the articles you mentioned opaque (and vice versa, obviously).

If the author, feeling the urge to write something up, had looked very hard for "prior work" instead of following the flow of their insights gained through experience, perhaps they would've felt compelled to use the same vocabulary as the source, which has its pros (forwarding instead of reinventing knowledge) and cons (propagating opaque terms, self-censoring because of a feeling of incompetence in the face of the almighty researchers).

That's one of the great things about blog posts: being able to write freely without being blamed for incompleteness or omission of prior art.

On a different note, I think this may also highlight the fact that the prior work you mentioned isn't easy enough to find. Perhaps knowledge isn't circulating well enough outside of particular circles.

Look, of course there's lots of unexplored territory in software engineering, and we absolutely should continue to strive for better programming languages and abstractions. And we are! But from reading this article, this author is looking in entirely the wrong direction for such improvements. It's not going to be some magic visual model that

One thing we should not expect is that new developments will be easy for us to learn, because we are already steeped in the current way of doing things. Supposedly, lexical scoping (what we're all familiar with) was extremely difficult to understand for early waves of programmers who were used to dynamic scoping (an insane way of doing it). They could have easily complained that this was just some new over-complicated abstraction and language construct that we don't need, but once you get over that hurdle and understand it, life actually becomes much simpler. New breakthroughs will hopefully be simple, but probably not very easy for us [1].

Many of this author's complaints about the current state of programming sound like they just haven't really achieved fluency in their programming language yet, and that they've been burned out on bad abstractions and have stopped trying to create (or just can't recognize) good ones. That's OK, this is all really hard to do! But it doesn't mean that everyone else is doing it wrong.


> It's not going to be some magic visual model

I wasn't talking about a visual model at all. In fact, the word `visual` doesn't even appear in the article once.

A fair point, and I can't remember what my point was going to be (perhaps I meant to delete this sentence). But on the other hand, you never talk about the alternative to plain text, and I think every alternative I've ever heard of has been "visual" in some sense. In fact I'm not sure what other options there are other than "visual" and "plain text".

Perhaps what you were saying is that each developer should be able to choose whether they're working visually or in plain-text, with the underlying model being neither (binary? XML?). If you chose to work in LISP for the day, the computer would transpile the underlying model to LISP, and then transpile it back when you're done? I think this is the "magic" part, where some, what, AI does this for you? We're so far away from that being effective, and the benefits are just not there when you're truly fluent in the programming language. Every single instance I've ever seen of "Each developer can pick how it appears on their machine!" has made communication and synchronization between developers worse, not better.

Aug 01, 2022 · 1 points, 0 comments · submitted by tuxie_
Jul 30, 2022 · waffletower on Clojure needs a Rails
Clojure libraries target microservices with a precision that no other language ecosystem has. In essence, Clojure web services developers rail against Rails and other bloated, unnecessarily complected frameworks of the 90s. As a Clojurist I too rail against Rails. I don't think that expansive model fits the problem space. I have had painful experiences in the past maintaining Rails projects, wondering why they didn't know of DRY. If there is an essence within Rails that you feel could be distilled into a lean Clojure model, build it out in a library and share it.
My central thesis seems to be getting lost here.

I'm not advocating for a Clojure Rails because I love Rails. I've never used Rails to be honest. I'm arguing open-source efforts are repeatedly being spent on trying to build the next web framework/library/toolkit (Rails) for Clojure and not much else, so it would be great if there was one, so we can get on with filling the gaps in the Clojure ecosystem.

"open-source efforts are repeatedly being spent on trying to build the next web framework/library/toolkit (Rails) for Clojure and not much else", really?

That's contrary to what most of us know. There are some efforts in the Web front, but not much. At least, the community is not paying much attention to these efforts.

Let's look at the list of community funded projects, e.g. those in Clojure Together: the only Web related projects funded were clj-http in 2018, ring, re-frame and reagent in 2020. None of these are Web frameworks, and the rest of the funded projects are not Web related at all.

Keeping things DRY is about the discipline of the developer.

It’s nothing specific to the language or framework.

> Clojure libraries target microservices with a precision that ..

I’m not sure what that means exactly, but no one is using Rails only for building out microservices. They’re using Rails to go from nothing to a production-ready web application, with all the incidental complexity taken care of, in a very short time.

It’s been a few years since I was involved with the Clojure ecosystem - what is the Clojure experience equivalent to, say, the original Rails demo from back in 2005? The last time I tried, all the parts seemed to be there, but much painful assembly/“composing” was required and not all the parts fitted, which ended up producing a lot of awkward complexity needing desperate decomplecting.

I understand your worry, but I've had a quite opposite take on this.

I think we can agree that it's not that hard to find ANY job as an experienced developer. However, it's much more difficult to find a great, satisfying job. For that you need to navigate around a lot of corpo-bullshit type of projects, and Clojure has served me well as a useful filter in doing that. My reasoning is that Clojure is niche enough that when a company is using it, you can assume that it's due to a deliberate technical choice, and not just because of its popularity. That tells me two things that are symptomatic, in my opinion, of a healthy tech company culture:

- tech decisions are made by engineers, not by top-level executives,

- their conclusions and bets align with mine because we all see and agree on Clojure's edge over more popular solutions.

Admittedly, there's always a risk that someone just followed the hype and got out of their depth, but I think this risk is relatively small, because Clojure's no longer the new kid on the block, and choosing a tech stack is a major decision usually made by senior tech leadership, hopefully less hype-driven.

Of course, Clojure is no silver bullet; it's just a tool that gives you enough rope to hang yourself. Messy codebases are just as possible as in other languages, especially when the team is new to Lisps, which are very different from mainstream languages, but that's the nature of software development - you learn with experience. I do cringe when I look at the Clojure code I wrote when I was just starting and wasn't fully grasping Clojure's way of thinking, but the more I use it, the more I come to appreciate how powerful it is.

Great intro that made it click for me: (Solving Problems the Clojure Way - Rafal Dittwald, 2019)

Having said that, no software project is ever complete, and neither is Clojure as an ecosystem. The tooling is constantly evolving and new patterns are emerging. What's great about the Clojure open-source community is that everyone seems to share the desire to harness complexity, and Rich Hickey has convinced each one of us at some point that the way to do it is through simplicity.

Even within Clojure's community there's a diversity of approaches, and I think that's necessary to improve and evolve. The more recent trend I've noticed is that the community is converging on Data-Oriented Programming, which is applicable in other languages as well but has always been at the core of Clojure's mindset; Clojure is especially well suited for it.

Dropping some relevant links about DOP: (Rafal Dittwald, “Data Oriented Programming”, 2022 - the whole talk is valuable, but long, so I'm linking to the juiciest snippets)

Moreover, Clojure has already grown past the threshold of being just a niche toy and has a market big enough that it won't die off anytime soon. When you study the history of programming languages, you'll notice that this is an enormously difficult thing to do for an emerging player, especially without big corporate backing. And Clojure is as grassroots as it gets:

It's been hinted at in this subthread a few times, but Rich Hickey's keynote about simple and easy is worth a listen/watch.
In case you have not seen it, there is an excellent talk by Rich Hickey on what "simple" is and how it differs from "familiar" or "easy": He proposes that "simple" is more objective than subjective.
Like everything else, there’s a tendency to lean towards categorizing it into one binary category or another. I think this article makes some great points; however, on the topic of simplicity, I think Rich Hickey’s talk “Simple Made Easy” is really informative for thinking about design and building systems (it also presents an interesting definition of the two categories).

This is the kind of question that Rich Hickey (inventor of Clojure) dealt with here:
Mar 21, 2022 · cellularmitosis on Go 1.18
> As a general rule, if you're referencing the dictionary definition of a word to make your point, you're just playing semantic games.

Before dismissing this as silly semantic games, you should watch the talk which they were very likely referencing:

Feb 19, 2022 · 1 points, 0 comments · submitted by VHRanger
It definitely can be. I'm constantly trying to push our stack away from anti-patterns and towards patterns that work well, are robust, and reduce cognitive load.

It starts by watching Simple Made Easy by Rich Hickey. And then making every member of your team watch it. Seriously, it is the most important talk in software engineering.

Exhausting patterns:

- Mutable shared state

- distributed state

- distributed, mutable, shared state ;)

- opaque state

- nebulosity, soft boundaries

- dynamicism

- deep inheritance, big objects, wide interfaces

- objects/functions which mix IO/state with complex logic

- code that needs creds/secrets/config/state/AWS just to run tests

- CI/CD deploy systems that don't actually tell you if they successfully deployed or not. I've had AWS task deploys that time out but actually worked, and ones that seemingly take, but destabilize the system.


Things that help me stay sane(r):

- pure functions

- declarative APIs/datatypes

- "hexagonal architecture" - stateful shell, functional core

- type systems, linting, autoformatting, autocomplete, a good IDE

- code does primarily either IO, state management, or logic, with minimal mixing of the others

- push for unit tests over integration/system tests wherever possible

- dependency injection

- ability to run as much of the stack locally (in docker-compose) as possible

- infrastructure-as-code (terraform as much as possible)

- observability, telemetry, tracing, metrics, structured logs

- immutable event streams and reducers (vs mutable tables)

- make sure your team takes time periodically to refactor, design deliberately, and pay down tech debt.
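As one illustration of the "hexagonal architecture" item above, here is a minimal sketch of a stateful shell around a functional core; the account example and all of its names are invented for illustration.

```python
# Functional core: pure logic, no IO, no mutation - trivially unit-testable.
def apply_deposit(balance, amount):
    if amount <= 0:
        raise ValueError("deposit must be positive")
    return balance + amount

# Stateful shell: the only place that owns mutable state and does IO.
class AccountShell:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance = apply_deposit(self.balance, amount)  # delegate logic to the core
        print(f"balance is now {self.balance}")             # side effects stay at the edge
```

The payoff is that unit tests target `apply_deposit` directly, while integration tests for the shell stay thin.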

I agree with most of your points, but the one that stands out is "push for unit tests over integration/system tests wherever possible".

By integration/system tests, do you mean tests that you cannot run locally?

Most of that I agree with, I'm curious why you'd recommend unit tests over integration tests? It seems at odds with the direction of overall software engineering best practices.
Only read the transcript but I'm not getting most of it. I mean it starts with a bunch of aphorisms we all agree with but when it should be getting more concrete it goes on with statements that are kind of vague.

E.g. what exactly does it mean to:

> Don’t use an object to handle information. That’s not what objects were meant for. We need to create generic constructs that manipulate information. You build them once and reuse them. Objects raise complexity in that area.

What kind of generic constructs?

There's a really wonderful talk that I've recommended to almost everyone I've ever worked with called Simple Made Easy[1] by Rich Hickey. I also struggled to explain why I hated state so much. You can talk about races with shared mutable state but even single threaded code I found I couldn't stand it, that it made things harder to reason about and change. It's because state is complex, in the sense Rich discusses in the talk: State intertwines "value" and "time", so that to reason about the value of a piece of state you have to reason about time (like the interleaving of operations that could mutate the state).

I don't know if it's just me but I watched that talk a couple years into my career and it was like something clicked into place in my brain. It changed the way I think about software.


That time part is what you are wrestling with when you are battling with state. So it's natural to think about it that way. But there's also this somewhat dumbed down version of the argument: every piece of state a method reads is like an additional function argument and every state it writes an additional return value. What a mess.
This made me think: if we wrote object oriented code methods where all the members that we access are passed explicitly as parameters, as well as all the members that we modify (as out references), then we at least would immediately identify the real complexity of some methods! I'll try to do this, I'm curious to see how that would look like.
At some point you get too many parameters, so you pass a struct, which basically means that struct has turned into an object. (One interesting difference is that you can pass more than one different struct to that function, which is the equivalent of subclassing, but with more permutations possible. That's actually interesting.)
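The experiment described above might look something like the following sketch; the `Cart` class and its fields are invented for illustration.

```python
# A method that implicitly reads and writes fields...
class Cart:
    def __init__(self):
        self.items = []
        self.total = 0.0
        self.discount = 0.1

    def add_item(self, price):
        # hidden inputs: self.discount; hidden outputs: self.items, self.total
        self.items.append(price)
        self.total += price * (1 - self.discount)

# ...rewritten so every field it reads is a parameter and every field
# it writes is a return value. This is the method's "true" signature.
def add_item_explicit(items, total, discount, price):
    new_items = items + [price]
    new_total = total + price * (1 - discount)
    return new_items, new_total
```

The explicit version makes the real complexity of the method visible at the call site, which is exactly the point of the exercise.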
> I'll try to do this, I'm curious to see how that would look like.

That looks like a terrible mess.

The problem is not state, but messy access to it.

Everybody agrees that OOP was killed by getters and setters. But I don't think that there is much consensus about how long it would have survived without.

(I'm not saying that OOP doesn't have its place, but it has clearly turned from a way of structuring code to universally strive for into something to avoid if possible)

That's not a bad way of putting it. It reminds me of "It is the user who should parameterize procedures, not their creators."
This is insightful.

In some sense, the only distinction a "pure" function has over "non-pure" is that it declares all its inputs/outputs (as function parameters and result). We say that a non-pure function has "side effects", but all that actually means is that we don't readily see all its inputs/outputs.

Even a function that depends on time could be converted to a pure function which accepts a time parameter - this is conceptually the same as a function which accepts a file, or an HTTP request or anything else from the "outside world".
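A minimal sketch of that idea: the same logic is impure when it reads the clock itself, and pure when the time comes in as an ordinary argument.

```python
import datetime

def greeting_impure():
    hour = datetime.datetime.now().hour   # hidden input: the wall clock
    return "good morning" if hour < 12 else "good afternoon"

def greeting_pure(hour):
    # Same logic, but the "outside world" input is now an explicit parameter,
    # so the function is referentially transparent and trivially testable.
    return "good morning" if hour < 12 else "good afternoon"
```

`greeting_pure` can be tested at any time of day; `greeting_impure` cannot, without mocking the clock.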

The trouble, of course, comes from the tendency of the outside world to change outside of our program's control. What do we do when time changes (which is all the time!) or file, or when the HTTP request comes and goes never to be seen again?

Or when the user clicks on something in the UI? Can we politely ask the outside world for the history of all past clicks and then "replay" the UI from scratch? Of course not. We cache the result of all these clicks (and file reads and network communications and database queries...) and call it "state". When the new click comes, we calculate new state based on the previous state and the characteristics of the click itself (e.g. which button was clicked on). This is a form of caching and keeping a cache consistent is hard, no matter what paradigm we choose to implement on top of it.

The real-world example of this would be React. It helps us implement the `UI = f(state)` paradigm beautifully, but doesn't do all that much for the `state` part of that equation which is where the real complexity lies.

There's no such thing as UI = f(state) in React. You may know that already, but it's UI = f(allStatesStartingFromInitialState). That way all state transitions are captured and all state changes are handled accordingly inside components taking into account component's internal state.
> State intertwines "value" and "time", so that to reason about the value of a piece of state you have to reason about time (like the interleaving of operations that could mutate the state)

Chapter 3 of SICP deals with this topic in great detail.

SICP being
I think I was at that talk. If I remember right the Sussmans were there as well and Gerry was the first to his feet giving Rich a standing ovation after that talk.
This is one of my favorite talks. It also helped things click for me regarding state. I try to use immutability wherever I can now and when there are unavoidable state changes, I try to understand and constrain the factors that could lead to such a state change. It's simplified things so much for me.
I enjoyed the talk and agree with it in many ways, but perhaps a contrarian stance will stimulate some interesting discussion. Here's the steelman I can think of against that talk.

Hickey's fundamental contention is that whether something is easy is an extrinsic property whereas whether something is simple is an intrinsic property. Whether something is easy is dictated often by whether it is familiar, whereas simplicity lends us the more ultimately useful property of being understandable.

To which I'll counter with Von Neumann's famous quote about mathematics : "You don't understand things [simple]. You just get used to them [easy]."

There is no fundamental difference between ease and simplicity. Simplicity (of finite systems) is ultimately a function of familiarity. There's a formal version of this argument (which is effectively that most properties of Kolmogorov complexity when applied to finite strings are defined by your choice of complexity function, even in the presence of an asymptotically optimal universal language. In particular there is not a unique asymptotically optimal universal language, that is the Invariance Theorem is overhyped), but the informal version is that both simplicity and easiness arise from familiarity.

Indeed the fact that there is "ramp-up" speed for simplicity suggests that in fact what is going on is familiarity. E.g. splitting state into "value" and "time" is one way of thinking about it. But I could easily claim that in fact "time" complects "cause" and "state." Rather state machines where the essential primitives are "cause" and "effect" are the proper foundations from which "value" and "time" then flow (you can think of "effect" nondeterministically, a la infinite universes, and then "value" and "time" fall out as a way of identifying a single path among a set of infinite universes). Likewise Hickey claims that syntax mixes together "meaning" and "order" whereas I could just as easily say that "order" complects syntax and semantics!

What of the idea of "being bogged down?" That "simple" systems allow you to continue composing and building whereas merely "easy" systems collapse and are impossible to make progress on past a certain threshold? I claim that these are not intrinsic properties of a system. They are rather extrinsic properties that demonstrate that the system no longer aligns well with the mental organization of a human programmer. However this is dependent on the human! A different human might have no problem scaling it.

Now hold on, perhaps, while simplicity is perhaps dependent on the human mind and humans all more or less have the same mental faculties. Perhaps we can't find a truly intrinsic property that we call simplicity, but perhaps there's one that's "intrinsic enough" and relies only on the mental faculties common to all humans. That is, returning to the idea of "being bogged down," there are systems whose complexity puts them beyond the reach of all, or at least most, humans. We can then use that as our differentiator between "simple" and "easy."

To which I would reply that this is probably true in broad strokes. There are probably systems which are so arcane as to be un-understandable by any human even after a lifetime of study. But at a more specific level, the way humans think is very varied. The ways we learn, the ways we develop are hugely different from person to person. Hence I find this criterion of "bogging down" far too weak to support Hickey's more concrete theses, e.g. that queues are simpler than loops or folds.

When you're talking about things like love, hate, and fear, sure maybe those are universal enough among humans to be called "objective" or to have associated "intrinsic properties," but when you're talking about whether a programming language should have a built-in switch statement, I don't buy it.

For the purposes of programming languages, simple is not made easy. Simple is easy. Easy is simple. The search for the Platonic ideal of software, one that relies on a notion of intrinsic simplicity, is a false god. Code is an artifact made for consumption by humans and execution by machines and therefore any measure of its quality must be extrinsic to the humans that consume it.

Sometimes X is simple. Sometimes it's not. It all depends on the person.

As empirical evidence of this I leave this final exchange between Alan Kay and Rich Hickey where the two keep talking past each other, no matter how simple their own system is:

> To which I'll counter with Von Neumann's famous quote about mathematics

I’m fairly sure this great quote is about mathematical “objects” in that you will never be able to truly “understand” or have a “real feeling” for more complex ones, like higher dimensions. Yet, by applying some simpler rules we can use and transform them, and after a bit of practice that will make it feel “close to us”, or “real”.

> Simplicity (of finite systems) is ultimately a function of familiarity.

I really don’t believe that’s true. Maybe I’m misunderstanding, but no matter how familiar I am with a given CRUD program vs JIT compiler technology, the latter will always be complex - but as you later refer to, I’m sure you know the difference between essential and accidental complexity. In this view I would rather say that simple things are ones with minimal accidental complexity, while the easy-hard axis is about the essential part, which is irreducible.

>>> the way humans think is very varied

>>> It all depends on the person.

Based on what I've recently learned about neuroscience and optogenetics, I don't think there's much evidence to support this sort of relativism. On the contrary, many processes in mammalian brains have common mechanisms.

To explore more, this is a great podcast

Disclaimer: I am a complete layman on the topic, so please correct me if I'm wrong.

There is more to how we think than the underlying mechanisms, just as varying programs can be run on the same hardware.
This concept of "used to" vs "understand" reminds me of an interview with Feynman where, IIRC, he explains to a layperson how magnetism can work at a distance. He discusses the "why" questions and how you keep getting deeper and deeper each time you ask "why". He concludes that his explanations won’t be satisfying for the other person, saying "I can’t explain this to you in terms you are more familiar with". I thought it was interesting and related. I’ll try to find that video.
It's the Feynman "Fun to Imagine" video / series.

This bit is where he says that about magnets:

I want to add to this that physics aims at this 'simplicity', i.e. being able to derive mathematical models ab initio, with the least amount of assumptions.

While the 'simplest' (in the physics sense) description of something is elegant, it can also be extremely hard to understand and work with. Maxwell's equations are used in engineering for a reason - and not their simpler theoretical physics underpinnings.

I appreciate the thought process here, and I'd want to spend more time thinking it over before a full response - though I think it maybe goes a little bit too into etymology for my taste! My immediate comment is that working memory is a measurable finite resource that developers have to use. The more entities they have to track in order to model the part of the system they're working on, the more usage of working memory.

Every bit of state creates potentially exponentially more possible entity states. So limiting potential changes in state limits the amount of working memory necessary to understand the system. It's starting with "can't" and then building a "can" when necessary, which is a lot better for memory, comprehension, and feeling safe/secure to make changes than starting with a collection of 10^n "can"s and adding in "can't"s.
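The working-memory arithmetic can be made concrete: n independently mutable boolean flags give 2^n combinations to reason about, so freezing even a few of them shrinks the space dramatically. A tiny enumeration, purely to illustrate the count:

```python
from itertools import product

def reachable_states(n_mutable_flags):
    # Every combination of n boolean flags - the space a reader must
    # hold in mind grows as 2**n.
    return list(product([False, True], repeat=n_mutable_flags))

# 10 mutable flags -> 1024 states; make 7 of them immutable -> 8 states.
all_states = len(reachable_states(10))
constrained = len(reachable_states(3))
```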

First off, I don't think this is quite the way Hickey thinks about the issue (though I suspect he would agree about the working memory part), especially with the comment about etymology /s! (It's a meme in Clojureland that every Hickey presentation and library must contain at least one slide on, or mention of, etymology.) In particular, Clojure as a whole embraces an ideology of "open systems" vs "closed systems" where we start with an infinite sea of "can"s and then add "can't"s as needed.

But that's immaterial to your main point, which is that adding state into the mix of things makes things hard. Which I agree with, but again to steelman the point, I could turn around and say that values allow for exponentially more possible values as well! When I see a map passed into a Clojure function I have no idea what could be in that map!

I think the main objection here which you are alluding to is one of "global" vs "local" reasoning. With a value I just need to worry about the body of my function, whereas with (global) state I need to worry about every function everywhere! But what if that's just a problem with our tools rather than an intrinsic issue? What if I had a tool that could automatically present all the mutable state of your system that is publicly accessible as a single screen and automatically link to different procedures that link to different parts of it? At that point I don't see much of a difference between state strewn everywhere and nice orderly values plumbed everywhere. In fact maybe it's nicer to have that implicit state strewn everywhere instead of having to carry around values which are irrelevant for the bulk of a function body and only relevant for a single part of a subfunction. What if it's all just a matter of not having the right IDE?

Working memory is definitely a hard limitation and universal enough among humans, but it's not clear to me it's a specific enough concern to convincingly justify certain programming language features which may just be crutches for inadequate visualizations or different educational backgrounds.

There's a lot to think about in your comments in this thread but I have a nitpick about functional programming style here.

> In fact maybe it's nicer to have that implicit state strewn everywhere instead of having to carry around values which are irrelevant for the bulk of a function body and only relevant for a single part of a subfunction.

I would call this an anti-pattern in FP. It's often a symptom of trying to replicate more imperative styles like OOP in a pure language. Threading mostly-irrelevant state through a bunch of different functions is a sign that your program is under-abstracted. If you think of all the function calls in your functional application as a tree, state should stay as close to the root of the tree as possible, kept in nodes it's relevant to, and the children and especially leaves of these nodes should be decoupled from it to the greatest extent possible.
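A minimal sketch of that shape, in Java for illustration (the `applyDiscount` function and the order-total example are hypothetical, not from the thread): the leaf function is pure and knows nothing about where the total lives, while the only mutable variable sits at the root that calls it.

```java
import java.util.List;

public class RootState {
    // Pure leaf: no knowledge of any surrounding state.
    static double applyDiscount(double price, double rate) {
        return price * (1.0 - rate);
    }

    // State stays at the root of the call tree; leaves remain pure
    // and trivially testable in isolation.
    public static void main(String[] args) {
        List<Double> prices = List.of(100.0, 50.0);
        double total = 0.0; // the only mutable variable, at the root
        for (double p : prices) {
            total += applyDiscount(p, 0.5);
        }
        System.out.println(total); // prints 75.0
    }
}
```

The leaves never need the accumulator threaded through them, which is the decoupling being described.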

> Threading mostly-irrelevant state through a bunch of different functions is a sign that your program is under-abstracted.

The problem is that often you do want fairly complex state in the leaves of the tree, but want very little of it in anything else. Web browsers are a classic example of this. Pure FP solutions such as Elm that completely eschew the idea of local mutable state require a lot more ceremony to implement something like a form (the classic thorn for Elm users). By forcibly moving up the state to the root, you sometimes end up needing to pull some fairly severe contortions.

E.g. the usual answer to move the state back up to the root in the land of statically-typed, pure FP is to express it in a return type (e.g. a reader or state monad, culminating in the famous ReaderT handler strategy in Haskell) or in the limit bolt on an effect system instead. The usual answer in impure FP is to accept some amount of mutable state and just rely on programmers not to "overdo" it.

But from a certain point of view, writing an elaborate effect system whose very elaborateness might cause performance issues and inscrutable error messages sounds suspiciously like trying to work around a problem in visualization with an over-engineered code solution. And from another perspective it feels a bit like a trick. If some function has a lot of state, then I would hope by opening up the definition of the function I'd see how it all works, but with an effect system all of a sudden I've split things up into an interpreter that actually performs the mutation and an interface that merely marks what mutation is to be done. It feels like I've strewn logic around in even more places than if I just had direct stateful, mutable calls there!

I will say plainly that I think there are situations in which mutability offers more elegant solutions than immutability, but I think most languages that offer it do it badly. I’m most experienced programming the Erlang platform via Elixir, and I think it offers a really nice midpoint between locality of state and purity. Within a process everything is immutable, and mutation requires sending a message to a process that will have a function specifying an explicit, pure state transformation from that message. Just about the only thing I don’t love about Elixir is the lack of real types.

I’m also very pragmatic and to the example of a web browser I would say, most applications are not web browsers. The overwhelming majority aren’t, in fact. I’ve chosen at this point in my career to mostly focus on enterprise software development, which I believe was Rich’s original field as well, and I’ve seen an enormous number of solutions with too much state cast about everywhere that benefit massively from centralizing the state high in the tree and really thinking through the data model carefully. So I stand by the principle I advocated originally, but it’s not universally applicable. It’s my belief that one of the core virtues of software development is knowing when to apply which principles.

> to the example of a web browser I would say, most applications are not web browsers.

I should've clarified. I meant developing a web page to run on a web browser, hence the form example.

It’s a good point. UI is a situation where the classic OOP-style frameworks work really well when they’re carefully designed. I think we’re still waiting on a model for doing that with FP that doesn’t rely on passing state deep down into an expression tree like React and its descendants encourage you to do. There’s stuff like Redux but it has its own problems.
You can "solve" global mutable state with an IDE until you bring concurrency plus parallelism into the mix. Then all bets are off for mutable global state.

In the case of Clojure, the map that you pass to a function is a value. It is guaranteed not to change underneath you and it can be freely shared with anybody.

Well to keep my contrarian hat on...

> concurrency plus parallelism into the mix

The hard part of concurrency is writing or writing+reading, not just reading, so an immutable map isn't going to solve everything. Instead the hope is that you confine the mutability to one place with various transactional guarantees (in Clojure's case, this is usually atoms) and then everywhere else you don't have to worry about it.

But then again why couldn't the same analysis be performed on mutable state? How are we sure this isn't just a tooling issue? If we knew exactly what parts of mutable state were being touched by what we could identify what critical sections needed various guards.

Taking my hat off and going back closer to my own views, I actually think Clojure's combo of maps+atoms is an arguable case where Clojure has in fact complected things together in a way that e.g. STM doesn't (and Clojure's implementation and use of STM has its own problems). Namely, it has complected committing a transaction with modifying an element in a transaction.

To illustrate the problem, right now Clojure atoms basically give up parallelism entirely. If you have a map in an atom with two threads modifying different keys, then those threads have to come one after another. It's actually kind of a waste of resources compared to the single thread case because work done in one thread will be thrown away and retried if the other thread wins.

So if you want true parallelism when modifying different keys you can use a ConcurrentHashMap. But that then gives up atomic updates of multiple keys at once! (Or you can have nested atoms, but that has its own problems and doesn't solve the inter-key atomicity issue.)
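The dilemma can be roughly sketched on the JVM, using `AtomicReference` over an immutable map as a stand-in for a Clojure atom (an approximation for illustration only; real atoms add `swap!` retry semantics, watches, and validators):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

public class AtomVsChm {
    public static void main(String[] args) {
        // Atom-style: one AtomicReference around an immutable map.
        // Multi-key updates are atomic, but every writer contends on
        // the single reference; a losing CAS means redoing the work.
        AtomicReference<Map<String, Integer>> atom =
            new AtomicReference<>(Map.of("a", 1, "b", 2));
        atom.updateAndGet(m -> Map.of("a", m.get("a") + 1,
                                      "b", m.get("b") + 1)); // both keys at once

        // ConcurrentHashMap-style: per-key parallelism, but no way to
        // update "a" and "b" together as one atomic step.
        ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();
        chm.put("a", 1);
        chm.put("b", 2);
        chm.compute("a", (k, v) -> v + 1); // atomic for this key only

        System.out.println(atom.get().get("a") + " " + atom.get().get("b")); // prints "2 3"
        System.out.println(chm.get("a") + " " + chm.get("b"));               // prints "2 2"
    }
}
```

Neither structure offers the in-between (atomicity scoped to exactly the keys a transaction touches), which is the gap per-key STM refs are meant to fill.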

It looks like an all or nothing proposition where you either get non-parallel but fully atomic map updates or parallel per-key updates but nothing in-between. These kinds of false dilemmas are a classic symptom of complection.

The way other languages with an STM system deal with this is to build concurrent maps out of STMs refs. That way you get exactly the amount of parallelism you can relative to the amount of atomicity you need. If you have a transaction that touches two keys at once then both of those keys are atomically updated together and those two keys form one unit of parallelism. If you have a transaction that only touches one key then you have per-key parallelism. If you have a transaction that touches all the keys at once then you just collapse to the normal case of a map inside an atom.

As far as I can tell the reason Clojure doesn't do this (but other languages have) is that its STM API is a bit clunky and missing some interesting combinators.

All this is to say that maybe indeed simplicity and ease aren't all that different if from one perspective atoms are simple and from another merely easy.

Those are well reasoned points.

I'm not going to delve into STM because that can be a whole book worth of discussion :). It's a fascinating universe, I've spent many hours (weeks, months?) exploring it, and I don't consider myself even close to an expert.

You are absolutely correct about the trade-off about atoms in Clojure.

Practically speaking, to start seeing retries you'd have to have a large number of updates going on at the same time. You can push a huge number of updates through a single thread. If you do need big throughput, you can explore not-so-idiomatic options like atoms-in-atoms, as you said.

IMO, the biggest unique benefit of combining atoms with immutable persistent data structures comes from the fact that you get an unlimited number of consistent readers virtually for free. Any thread can look at (aka deref) an atom while the state/world keeps moving forward. I don't think any amount of tooling can solve that case for mutable data: a snapshot of a mutable data structure would require copying the whole data structure while using some sort of locking strategy to stop writers while the read takes place.
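That reader guarantee can be sketched in Java, again approximating an atom with `AtomicReference` plus an immutable map (illustrative names; Clojure's persistent structures also make the "new value" cheap via structural sharing, which `Map.of` does not):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class Snapshots {
    public static void main(String[] args) {
        AtomicReference<Map<String, Integer>> world =
            new AtomicReference<>(Map.of("balance", 100));

        // A "deref": grab the current immutable value. No locks,
        // no copying -- this snapshot can never change underneath us.
        Map<String, Integer> snapshot = world.get();

        // A writer moves the world forward by swapping in a new value.
        world.updateAndGet(m -> Map.of("balance", m.get("balance") + 50));

        System.out.println(snapshot.get("balance"));    // old view: prints 100
        System.out.println(world.get().get("balance")); // current:  prints 150
    }
}
```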

In production, I may only want one connection pool to a DB, and in that case global state is pretty much equivalent to passing state as an argument. Development in a Clojure REPL is a different story. I have one connection pool for the dev server, and a separate pool to run tests against. The test db is re-created from a template between each test run, without affecting the dev db at all. I can trivially have multiple test pools if I want to run tests concurrently.

I also have a separate service that the server makes calls to, which doesn't run on this server in production (it has its own production server), but does run in dev and test. Each dev/test system runs a separate instance of this service, which has its own separate connection pool(s), and setting this up was trivial.

Needless to say, failures are reproducible and meaningful. There is no mocking -- we test against real local services with real local DBs. (There are still some remote service calls which I'm slowly replacing, and some flakey, unavoidable remote dependencies in a few browser tests).

I didn't do anything special to make this possible other than naming the config files "service-name-config" instead of just "config". It is just the natural result of passing state in explicit arguments. The same is not true of global state.
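A toy Java sketch of the same idea (all names hypothetical, and plain strings stand in for real pool objects): because the pool arrives as an argument, dev and test wiring are just different call sites, with no global to reset between runs.

```java
public class ExplicitConfig {
    // Hypothetical stand-in for creating a connection pool.
    static String makePool(String dbUrl) {
        return "pool->" + dbUrl;
    }

    // The handler receives its pool explicitly, so any number of
    // independent dev/test instances can coexist in one process.
    static String handleRequest(String pool, String query) {
        return pool + " ran " + query;
    }

    public static void main(String[] args) {
        String devPool  = makePool("jdbc:postgresql://localhost/dev");
        String testPool = makePool("jdbc:postgresql://localhost/test");
        System.out.println(handleRequest(devPool, "select 1"));
        System.out.println(handleRequest(testPool, "select 1"));
    }
}
```

With a global pool instead, the second instance would require swapping the global in and out, which is exactly the REPL-workflow friction being described.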

To continue with my devil's advocacy...

> It is just the natural result of passing state in explicit arguments.

But nothing you've mentioned here is intrinsic to mutable state. It seems like all that's happened is you identified a part of your program that you wanted to be configurable and exposed a configuration knob. If, for example, you wanted a test mode where you prefix "test-" to every string written to the DB, that would also probably involve a new argument somewhere. There's nothing here special about the mutable-state part of it.

> But what if that's just a problem with our tools rather than an intrinsic issue? What if I had a tool that could automatically present all the mutable state of your system that is publicly accessible as a single screen and automatically link to different procedures that link to different parts of it?

The world needs this. I think Pernosco has a workable technical foundation, but the GUI is a debugger and I need a code exploration tool to "find my way" in big unfamiliar codebases. Encouraging developers to pick up and hack around in others' codebases is the only way to get enough eyeballs to make all bugs shallow.

> maybe it's nicer to have that implicit state strewn everywhere instead of having to carry around values which are irrelevant for the bulk of a function body and only relevant for a single part of a subfunction.

I think global state (which is unusually bad) or shared mutable state (which is omnipresent outside of Rust) is a mental overhead (more things to keep in mind). I don't think tooling can eliminate the overhead of worrying about moving parts, only make it faster to look up (and hopefully document) what touches each bit of state.

I personally think "encapsulation" is a misnomer. State is not encapsulated in OOP, it is just hidden. Proper state encapsulation would be to use mutable state internally for efficiency, but for that state to be unobservable externally.

OOP does unfortunately encourage introducing mutable state into the domain model, the canonical example being the bank account, with a mutable bank balance!

If you're going to reference a Rich Hickey take-down of OOP, I think "Are We There Yet?" is the most pertinent:

Of course, Simple Made Easy is excellent too, probably his most influential talk.

Time does not go away from the concept of value when you remove state.

What state takes away is access to a given value at any other time but now.

It's always now; every value is the current value and no other version of that value exists.

Not just you, I had the same experience. I rewatched it several times over the years and understood something new every time.
> State intertwines "value" and "time"

Reminds me of a deterministic finite automaton. Is that what you mean?

Me as well but I was already sold on Clojure by then.
The problem I have with talks like this is that they sound fantastic on the surface. They almost sound self-evident! "Duh! I want to make simple things, not easy things! That was great!"

But where are the examples? Not a single example of something easy versus simple, or how something "easy" would resist change or be harder to debug. All of these concepts sound fantastic until you begin to write code. How do I apply it? It's a great notion to carry around, but I often wonder if this is just someone's experience/opinion boiled down to a really well done talk, and not much else.

If you want functioning, robust, maintainable software (or even better, software that doesn't require maintenance), then spend a long time modeling the problem domain. Build it as a system of types, a protocol, perhaps even a language (or at least an AST with semantics). Prove things about this model, particularly some useful things about soundness, consistency and (in)completeness. Learn all the funky symbols people use in the literature, learn about the strange tools you weren't told about in undergrad like dependent typing or higher-order contracts or CRDTs and lattices. Spend a lot of time doing this. Then, when you have determined the essential shape of the domain and nothing more, implement the software. At that point, the code almost writes itself.

I submit that if we did that, we would have excellent, elegant, simple software, but following the process would be incredibly hard. So hard, in fact, that it couldn't possibly be distilled into a conference talk.

Speaking as someone with experience with many of those things (PL theory/formal verification background), I don't think they're even close to being a silver bullet.

Coming up with the right abstractions and the right domain model is difficult (especially if you just sit down and try to come up with stuff, you're likely to get it wrong the first time around). Knowing about some of those things could help you come up with better abstractions, but it's neither necessary nor sufficient to ensure that you will.

Take dependent types for example. They allow you to express more program invariants or correctness properties in your types. But actually using them requires you to write proofs (at least, if you're using them to their full potential). And I do think that in general System F like type systems hit a nice sweet spot and are generally good enough for the stuff that you might actually want to handle on the type system level.

I've also run into similar "proof-like" situations with much simpler type systems like those of Haskell and Rust, where I was structuring my types to "make illegal states unrepresentable", but in the process ended up complicating my program due to having to match the structure of my program to the expected structure of the types. Sometimes it is nice to _not_ have the type system enforce some of your invariants. (Such things are also doable with dependent types of course, but this is just an example of some of the tradeoffs involved.)
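A small Java sketch of the "make illegal states unrepresentable" move under discussion (the `Email` type is a hypothetical example, not from the thread): the win is that any `Email` value in scope is known-valid; the cost is that every boundary must now convert, which is the structural rigidity being complained about.

```java
public class ValidatedEmail {
    // The private constructor makes parse() the only way to obtain an
    // Email, so an invalid Email cannot exist anywhere in the program
    // ("parse, don't validate").
    static final class Email {
        final String value;
        private Email(String value) { this.value = value; }
        static Email parse(String raw) {
            if (!raw.contains("@")) {
                throw new IllegalArgumentException("not an email: " + raw);
            }
            return new Email(raw);
        }
    }

    public static void main(String[] args) {
        Email ok = Email.parse("rich@example.com");
        System.out.println(ok.value);       // prints rich@example.com
        try {
            Email.parse("nope");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");  // prints rejected
        }
    }
}
```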

You can also still have a shitty domain model even if you use all of those fancy tools. They just allow you to be very formal/precise about the domain model (and do perhaps encourage some more uniformity by making it more annoying to express ugly or complicated things).

Domain knowledge is very important. In the real world, however, by the time you finish this type of process the competition will have had the product out already. It may not be that perfect castle in the sky, but it will work, and if you have revenue you will have time and means to improve.
100% agree. It's a trade-off. Get product-market fit first and learn what you can about the domain. Spend enough time on architecture up front so you can easily pivot. That's all the simplicity you should care about at that point.

Once you get traction, you can start to afford to have the crazy vision. IMO, at that point it's easily worth the risk. A decent research team will probably discover something, potentially extremely valuable knowledge.

If you were James Clerk Maxwell before he published his equations, how much would they be worth to you, especially if you had paying customers?

Our customers don't even want to pay for something that bespoke. They have margins to worry about.

So instead we've had to make a system which makes it less painful when bugs occur.

For us that means making it trivial to run older major and minor versions of our software, and an automated update mechanism which delivers new builds to customers on-premise in less than an hour, updating the DB schema as well.

I don't think this excludes what the GP said, but this is super important as well. I think of it as second-order reliability: design your software not only so that bugs don't occur, but also so that the user can take practical steps to remedy bugs if they do occur.

(Also, as one of my past companies enshrined as an engineering axiom: "write software to be debugged". Most programmers write waaay too few logs. You know the print statements you add to your code when it's buggy, to track down what's going wrong? Well, do that all the time, and if there are too many then fix that problem with adequate tooling. If it's running on your customers' computers - whether servers or PCs or phones - then store them locally for N days / N logs and allow them to be submitted when a bug occurs. Stack traces - even good ones - are not nearly enough.)
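One cheap way to sketch the "store them locally for N logs" idea is a ring buffer of recent lines that gets dumped only when a bug report is filed (Java, hypothetical names, capacity shrunk to 3 for demonstration):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class RecentLogs {
    static final int CAPACITY = 3; // keep only the last N lines
    static final Deque<String> ring = new ArrayDeque<>();

    static void log(String line) {
        if (ring.size() == CAPACITY) {
            ring.removeFirst(); // drop the oldest line
        }
        ring.addLast(line);
    }

    // On a crash or bug report, ship just the recent context.
    static List<String> dump() {
        return List.copyOf(ring);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++) {
            log("step " + i);
        }
        System.out.println(dump()); // prints [step 3, step 4, step 5]
    }
}
```

A real implementation would bound by age and bytes as well and persist across restarts; this only shows the shape of logging everything while capping what is kept.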

By the time you're 20% into that process your competitor has already overtaken the market.
To quote Thiel, "competition is for losers."
Counterpoint is that the Big Design Up-Front utopia didn't win in software, giving rise to Agile (for better or worse).
What sort of domains do you see as sufficiently well-understood and stable where this process is even achievable? A lot of my career has been in domains where we are exploring problems by building and shipping things to see what really works for users and customers. And other times there's domain volatility driven by changes in technology and competitive landscape.

Even for domains that are stable and knowable, I have to wonder what businesses can afford that kind of up-front investment before the first feature ships.

Compilers maybe?
Ooh, interesting! You're right, there's a class of domain where one can just push the real-world change to the edges of the system and ignore it. E.g., there's surely software that's mainly about complying with laws.

But even there, I suspect adaptation has to happen. Python's had how many versions over the years? Indeed, I could argue that it's one of the world's most successful languages precisely because it keeps responding to user need. Or look at tax software, which is going to change at least every year, and more often in emergencies.

So I suspect at best these other domains have a slower iteration clock. Which might be slow enough for the sort of formal modelling that is described. But then I think there's an open question: do other methods also work just as well with slow iteration clocks?

I've had largely the same experience as you, but I have seen some hints that real simplicity could be possible. If the domain is technology itself, there may be no underlying simplicity.

Ultimately, I think we have to make a trade-off between simplicity and easiness. The approach I outlined would be incredibly expensive because the tooling for that approach isn't quite good enough yet, and stakeholders wouldn't even understand it. They wouldn't realize that you were building a pitch for your product not as a PowerPoint deck, but as executable code!

A lot of our complexity today is from constructing software itself over layer upon layer of previous complex software (CSS, I'm looking at you), not due to the intrinsic "business cases" our software is meant to solve. Some of that complexity cannot be avoided, and some of it could be but at significant cost. To use an analogy, it's also cheaper to build a traffic light-controlled intersection, but overpasses are simpler.

Coincidentally, almost all of the tools I've seen that try to make simplicity cheaper come either from the Scheme/Racket/Lisp world that Hickey himself hails from or from Alan Kay and his sphere of influence. (The two groups have quite a bit of overlap, both in terms of ideas and even people.)

Sorry, I'm still not seeing how/when the approach you're hinting toward is practically valuable. So far it seems to me like you're pursuing one dimension of quality to the exclusion of others. Which is an interesting theoretical exercise, so if that's your jam, have at it. But it sounded to me like you were proposing something people could actually do.
Could you please elaborate on Hickey’s and Kay’s key ideas and how to try them hands on?

I know about Smalltalk (Squeak) so I guess that is the playground for Kay’s. Would just playing with Clojure do the same for the Hickey’s?

Easy things work until you have to extend them or do anything the least bit complicated. Think of SQL or most "easy" declarative APIs. Or even worse, ORM engines. Simple things are normally also easy to use, but you may have to write some more boilerplate and there's less "magic".

Steve wrote a simple CRUD API that gets some data and returns it. Bob tried to be clever and wrote a loosely typed declarative clusterfuck that nobody understands, but it's "easy" if you don't do anything interesting or useful with it.

A bit like haiku, wonderful when you read it, extremely hard to maintain conversations in haiku.

Or like an improv exercise where you have to improvise a dialogue, but only by using questions, no affirmations.

Can it be done? Sure, but not by most people, not in real time. Again, wonderful when you see it done right.

Talking in haiku: Wonderful when you read it. Too hard to maintain.

Improvisation. A constrained dialogue. Affirm? No. Question.

Can it be done? Sure. Most people struggle slowly. When right? Wonderful.

It's easy to stop calling a now-unused function when some behaviour is no longer needed.

The system is made more simple if you remove the function, though.

This is more so if only part of the behaviour of a function is no longer desired - the function becomes easier to understand when it's trimmed down, but it's harder to make that change.

The presenter is Rich Hickey. He is the guy who created Clojure. He basically designed the language around this principle (it is a very opinionated language). If you want examples, look at Clojure and its ecosystem, where the ideas of Rich Hickey are held in high regard.
The Clojure language is the example. Basic data structures vs classes/objects, immutable vs mutable, lisp vs other languages, etc.
> They almost sound self-evident!

I think it's hard to provide examples since they would all be implementation dependent.

"Simple", to me, is a stage of the thought process that becomes apparent only after putting in the extra work. It's not just applying "this 1 trick"; making it simple is its own unique challenge. E.g. my first iteration of an idea is always a mess. Then I rework it enough times to make it presentable (a state where it "works" and I can reason about it with others). But on the job nobody pays me to make things simple, because that means spending another 10-30% of the budget on it. Making things "simple" at work is nearly impossible to sell because people quickly throw arguments at you like "perfect is the enemy of good", and few jobs give you a "definition of done" where making things simple is part of it.

Another reason why it's impossible is that the best time to rewrite a greenfield project or an MVP is before you add additional features. But at that point people will not allow it, because the expectation usually is to build on top of what you (they) invested in previously.

The point of simple vs easy is they exist on completely different dimensions. There's simple/complex, and there's easy/hard. Something can be simple+easy, simple+hard, complex+easy, or complex+hard. Obviously there's a sliding scale in each dimension.

Simplicity in a vacuum isn't a good thing. Ideally your solution targets the exact level of simplicity vs complexity required for your problem. Obviously you won't always hit or know the target.

The value in simplicity is greater composability. It's especially important for the building blocks of our systems - of which programming languages make a huge portion. It doesn't sound too controversial to say that it's easier to take multiple simple things and make a more complex thing, than it is to take a complex thing and distill it down to the simpler thing you need. I say this because regardless of what programming paradigm you adhere to, the "kitchen sink" unit of code is universally derided, be it god modules or god classes that do shit you don't need.

It's not that Clojure is all simple, all the time. There is mutable state in Clojure - atoms, refs, etc. They also have interfaces. And multimethods. And so on.

But the simplicity floor is lower in Clojure than most other languages I've used. More than those other languages, you can target the level of simplicity you need. And it provides for more complex elements if you need them. And in my experience, a lot of the time, you don't need those more complex elements.

Nov 25, 2021 · joelittlejohn on Abstract Clojure
If you have an hour spare, probably the best way to understand Clojure's main selling points is to watch this talk:

InfoQ list the Key Takeaways as:

- We should aim for simplicity because simplicity is a prerequisite for reliability.

- Simple is often erroneously mistaken for easy. "Easy" means "to be at hand", "to be approachable". "Simple" is the opposite of "complex" which means "being intertwined", "being tied together". Simple != easy.

- What matters in software is: does the software do what it is supposed to do? Is it of high quality? Can we rely on it? Can problems be fixed along the way? Can requirements change over time? The answers to these questions are what matters in writing software, not the look and feel of the experience of writing the code or the cultural implications of it.

- The benefits of simplicity are: ease of understanding, ease of change, ease of debugging, flexibility.

- Complex constructs: State, Object, Methods, Syntax, Inheritance, Switch/matching, Vars, Imperative loops, Actors, ORM, Conditionals.

- Simple constructs: Values, Functions, Namespaces, Data, Polymorphism, Managed refs, Set functions, Queues, Declarative data manipulation, Rules, Consistency.

- Build simple systems by: Abstracting (design by answering questions related to what, who, when, where, why, and how); Choosing constructs that generate simple artifacts; Simplifying by encapsulation.

So Clojure is a language that embodies these principles in its design. It's a Lisp, which means that all code is constructed from a very regular s-expression syntax that has an inherent simplicity and can be quickly understood. It's a functional programming language that provides exceptional tools for minimising mutating state, and it favours working with a small set of data structures and provides a core api with many useful functions that operate on them.

I'd say the result is getting a lot done with a small amount of code, minimal ceremony, true reuse, and the ability to maintain simplicity even as your system's capabilities grow.

There are also transcripts of this and other Rich Hickey talks available:
The irony of thinking files and folders are too much for a simple app while also praising a feature that is in direct relation to PHP's MO of conflating codebase folder structure with request paths.

Edit: this reminds me, I was like this too at the beginning of my dev career. I was completely in favor of this supposed "simplicity" of PHP; only much later, thanks to Hickey's nice talk, did I realize that I was confusing simplicity with ease.

In a simple application there is nothing wrong with your folder structure being related to the request path. Heck, such an approach is practically mandated for static sites.
Sorry, I don't understand your first point, even after reading it several times. I think I might have inferred what you meant by looking at the second (edited in) point, but I'm not sure.

Are you suggesting that it is bad that PHP applications often have a request path that relates to the folder structure?

In other words, are you suggesting that simplicity means an application should not have a request path that relates to the folder structure?

To give an example, are you saying it's a bad thing that requesting /profile loads /profile/index.php, rather than passing /profile through a single controller function to identify what code should be responsible for handling it?

The first approach actually seems like a pretty straightforward paradigm, and it's what most new programmers would expect. Adopting an MVC/routes method is more complex and arguably overkill for a simple application.

If that is what you are contending, it should be said that PHP does not require this approach. Although it is often a preferred approach, because it doesn't depend on additional web server configuration.

I’m not suggesting, I’m saying that conflation is the mother of confusion. Conflating request path with file path is not a great idea, especially for new developers, who get a mental model of how web apps work that is completely irrelevant for the rest of their careers.
There are plenty of large PHP projects that adopt this paradigm. Is it really fair to say it will be completely irrelevant for the rest of their careers?

Also, let's not lose sight of the fact that this arises in the context of criticism of the model adopted for programming a simple form. This is just a simple one-page form. More complex or abstract paradigms or design patterns are overkill.

Sep 08, 2021 · Jach on Maintain It with Zig
As always it depends. If you're thinking about preprocessor macros and operator overloading from C++, sure, those can be annoying, but it's more to do with C++'s implementation and usage of them than the features themselves. You might want to try Common Lisp sometime; so much of the base language is made up of macros without which programs would be neither pleasant to read nor to write, and the language itself provides facilities to ask "ok but what function(s) are actually going to get called with this data" so that even not-so-local things like e.g. transparent logging of a call's input/output become visible if you need to know. But CL is not a language one can pick up in a couple of days -- although CL shops report success in getting new hires to be productive after a week or two of reading a book and the company code, which is a common onboarding time at many companies with any language.

Programmers notoriously conflate "simple" and "easy" (the classic talk on this is Simple Made Easy), and so I believe languages that are easy for a lot of programmers will also be perceived as simple, whether or not that's accurate.

I need to go to the bathroom. The simplest thing that solves my immediate problem is to urinate in my pants. I ate a bag of chips and now I have an empty bag to dispose of. The simplest thing that solves my immediate problem is to throw it on the floor.

So it's clear that "the simplest thing that solves my immediate problem", like simply adding a new int field to the most convenient table, can compound into an awful mess. But perhaps "simple" is not the right word here.

I like Rich Hickey's talk on simple vs. easy; we're both using the wrong word according to him. "Simple" means not intertwined or tangled; well-organized. "Easy" means "close at hand" or "familiar". We both mean "easy" here.

That being said, your examples of complexity fetish do indeed sound awful. Abstract classes, optional configuration files, environment variables and regular expressions; we can agree those are awful. Those are neither easy nor simple. But the problem is that they're not discussions about the domain; they're truly unnecessary. Maybe that's all you really mean.

>We had to add something to the database the other day. Big argument. Should be one to many? many to many? what if this or that happens? what if requirements change? You know what - for the requirement we actually had it was solvable with a single integer field on an existing table.

Agreed about not inventing requirements, but questions about "how is this likely to change in the future?" are much closer to productive discussion. Discussions about one-to-many vs. many-to-many can also be the exact discussions software developers should be spending most of our time on (although don't get me started on the awful database designs most software has, so these discussions may be inane for that reason alone).

Yeah, except no one says, hey let me piss myself now and in version 2 I'll whip it out and then piss on the floor, and eventually I'll piss in an AbstractReceptacle.

Instead, developers ask themselves, will we want to piss anywhere in the future? Yeah, let's develop an abstraction to piss anywhere, but should we also plan for this urination to be sexual in nature? Better make sure we can use composition to mix in kinks whenever we'll need that, because surely we will, even though our piss implementation is toilet-only and we barely have the time or budget for that.

Maybe we really should offshore all technical labor, too, because if developers had their way, they'd gold plate pissing and never develop the actual toilet, forgetting they had to eventually get around to that, too.

> I ate a bag of chips and now I have an empty bag to dispose of. The simplest thing that solves my immediate problem is to throw it on the floor.

Maybe that's the best solution for the long run, instead of designing and implementing a whole garbage disposal system from the ground up for only one piece of trash.

My problem is that a lot of software developers are trying to solve problems they don't have and never will. This consumes time and adds unnecessary complexity to their projects.

Prematurely designing for scale when I just needed to finish the beta version has been my engineering vice.
Haha, on the other hand, a lot of developers never grow out of always developing betas; their only concept of programming is developing betas and then dealing with fires all the time.
For me too, this took years and years to learn. It's a hard lesson that, it seems, can only be learned by walking the road and working on a particular piece of software for a long time; at least that's what it took for me.

I guess it's called experience to know when to design and when to just implement. Somebody wrote somewhere for example that if you're not going to need a particular piece of code in more than 3 different places, don't write a function for it.

As a newbie you would totally want to write a function for it, thus also making it harder to read the code as you would have to understand the function in order to see what it does in that context.

Also, thinking in terms of "Do I really need this feature in future use cases?" is something I don't feel you can assess without the experience of having already peeked into those future use cases, where in many cases you will never need that particular function in more than this one place.

But can you learn how to design a reusable system without first doing it in the wrong places? That's something that is hard to say; I don't know.

Could you teach somebody who wants to build complex, reusable components not to do it and just stick to simplicity? How would one then know how to build those reusable systems where you need them?

Maybe we should focus more on training both simplicity and complex design, but you can rarely do that when you are under pressure and working on real-life software.

Haha, touché; I thought I had come up with completely unassailable examples of obviously bad choices but you've made a good point that a single piece of trash on the floor may occasionally be the best option. Engineering is all about tradeoffs, even in the extreme.
People are bad at predicting the future, especially when the predicting is done in the 5 minutes before implementing something, rather than as a dedicated activity that includes interviewing users and domain experts.

I've seen this many times: a programmer is asked to solve a small and well-defined problem. Instead, the programmer generalizes it and makes something more universal, with the requested feature as a special case. More often than not, nothing except this special case is ever used.

Or, working on some new project, they add a feature that looks useful in theory but ends up being rarely or never used. It may look easy to implement initially, but over the years the maintenance cost can be much higher.

People are amazing at predicting the future, and in some ways we are better at it than at remembering the past. That's because we use the same machinery to do both: we partly remember the future and predict the past. This ability breaks down with complexity and abstractness, as well as with novelty, all of which are involved in software. (I can tell you that the sun will come up tomorrow, and where I should move my hand to catch a ball, but I can't predict all of the defects my software will have -- though if it involves X.509 certificates, I can tell you exactly when a particular sort of outage will occur.)
> In fact easy, in terms of getting something working, trumps difficult simplicity any day of the week. Easy is part of what makes an employer want to use your technology.

The whole point of the talk is that choosing "easy" (as in easy to get started) solutions while ignoring the complexity will be harder to maintain in the long run over choosing the "simple" solution that takes a bit more time to set up. Personally, this tracks pretty closely with my experience.

(Relevant slide from the talk.)

Some clarity:

1) Experts don’t say fuck you user by intentionally doing hostile things for their own convenience or emotional satisfaction.

2) Elegance directly suggests simplicity and polish where simple means few:

That means less shit in the code, which eliminates the decoration and unnecessary conventions many developers cannot live without.

3) Beginners and weak insecure developers focus on composition. Experts focus on the end state.

Oh, of course, that's all true.

The difference between elegance and simplicity is when you start to talk to developers who learned just enough to be able to put patterns in their projects but not enough to know when to do so.

> 3) Beginners and weak insecure developers focus on composition. Experts focus on the end state.

That is one more way of saying that experts use programming as a tool: programming is not a problem for them, and so the biggest issue they watch is that the end result is the right one.

It is easier to focus on the end goal when you trust you have some kind of solution for any problem that can happen on the way.

On the other hand, novice and intermediate developers focus on the technical side, because that is the challenge they are facing, and they don't yet feel they can solve every problem they will meet. You can't tell them not to focus on the technical; it is useless advice. They need to master the technical side first before they can become experts and focus fully on the end goal.

The best you can do is remind them that the end goal is important and keep it in the back of their minds even when they are immersed in technical challenges.


As to "insecure" developers, I think there is something to it. Moving from purely technical problems to other kinds of problems (looking at the big picture of the product, the client, and the development team) requires a little bit of courage (don't laugh). It is easy to keep working within the same types of problems that you are comfortable with, and to create an illusion of progress by changing technologies, working with larger applications, and so on.

I had a prospective client some time ago. They wanted me to help with their application. They had trouble delivering and additionally their application was unreliable.

So in the meeting with the director, architect, and tech lead, they asked me to start by upgrading Java from 6 to 11.

Mind you, this is a discussion with a director who had some 40 devs and who reached out to me personally to get help.

So I asked "Guys, do you really want to say that people were not able to deliver reliable application with Java 6? Or maybe the problems are somewhere else?"

The chart comes from Rich Hickey’s Simple Made Easy presentation. Watching that will provide missing context that may answer some of your questions.

Indeed. Thank you for stating this so clearly.

The "ease of use" and "familiarity" distinction reminds me of talks by people such as Rich Hickey who distinguish "simple" and "easy":

> Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.

This sounds like elitism in disguise to me.

Sure, there are some cases when you want quick and dirty and just glue some system together, but most production code out there has some more important business requirement than "understandable by the cheapest engineers out there".

For instance, if you're writing an account management system for a regional bank, you'll care most about the accuracy and longevity (including easy maintainability) of the system.

If you're writing a microservice for a fancy web app with global distribution you might care about latency (high latency drives down CVR), reliability (errors drive down sales and ads too) and sustained agility (you need to develop features fast to keep ahead of competition).

I think the second example covers most of what web and mobile developers do. I've definitely seen cases of over-engineered systems with many layers of leaky abstractions, but also many cases of under-engineered systems. Here are some well-documented maladies:

1. RYE (Repeat Yourself Everywhere) - You have the same business logic repeated in multiple places, because originally it didn't seem common enough or large enough to warrant DRYing up. This is obviously easier to read, since you don't need to dive deeper into more functions, but in practice the shared logic quickly diverges between the different cases, until it's very hard to specify what your system does.

2. "Let's just add an if branch here for this special case" - quick and dirty, as unclever as it can be, until you realize you need to deal with the combined permutations of 20 different branches. This is readable only in the very surface sense.

3. "Our junior engineers understand for loops better than map-filter-reduce chains, so let's use for loops instead of spending a few days teaching them": You can replace "for loops" with anything else that your junior engineers happen to know. The end result is not avoiding an over-engineered or "too clever" solution, but rather just avoiding a solution that is often simpler and easier to understand but just happens to be unfamiliar to your engineer. See also "blub paradox".

4. "This 500-line function is brain-dead simple and uses no fancy tricks". Likewise, it's easier for a junior engineer to write long-winded code and avoid thinking about even the simplest abstractions. And the code works! This doesn't make the code more maintainable or reliable though.

5. "Let's add ad-hoc retry with for loops and branches when necessary instead of creating an abstraction based on closures or let alone monads - this is just too clever". End result: reliability is added only after somebody complains about a certain functionality instead of being designed and baked into the service, and your service suffers accordingly.
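The closure-based retry abstraction from point 5 can be sketched as a small decorator. This is an illustrative sketch, not any particular library's API; the names (`with_retries`, `flaky_fetch`) are made up for the example.

```python
import time

def with_retries(attempts=3, delay=0.0, retry_on=(Exception,)):
    """Wrap a function so transient failures are retried uniformly."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == attempts:
                        raise  # out of attempts: let the error propagate
                    time.sleep(delay)
        return wrapper
    return decorate

@with_retries(attempts=3)
def flaky_fetch(_calls={"n": 0}):
    # Simulated flaky operation: fails twice, then succeeds.
    _calls["n"] += 1
    if _calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

The retry policy now lives in one place, instead of being re-implemented as ad-hoc for loops and branches at every call site.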

There are many more examples, but the general gist is that in almost every corner of our industry there are important concerns that require us to take our engineering practice seriously. I think statements like "It's all just glue code", "It's all just plumbing" or "It's just a CRUD app" are not constructive for quality.

I wholly agree with you about simplifying things. I just think we should be mindful of the difference between "simple" and "easy" (as Rich Hickey famously put it[1]). What is easy for a junior developer to understand (because it doesn't contain any concept they haven't learned yet) may increase the absolute complexity. If you choose the "easy" solution here, you make the code seem _subjectively_ simpler to junior developers, at the price of _objectively_ (and measurably!) increasing cyclomatic, cognitive or state complexity.

Unless the "clever solution" requires understanding that is beyond what we can expect the reasonable average engineer to learn quickly, I prefer to always err on the side of reducing absolute complexity, and code that is simpler to read _for senior engineers_.


Problem 5 is sometimes caused by management not allowing time for a proper redesign of a prototype. (Even if TDD is employed, which it often is not, because tests are not user deliverables.)

Hence my statement that "iterative development never works": managers will see the initial prototype mostly or partly working and start pushing features onto it.

In such an environment, you need an experienced designer from day one - at worst you'll get slightly less performant but maintainable code due to extra abstractions.

If you get a design mistake... Well...

Great response!!! I wholeheartedly agree
You bring up some great examples; I think we do agree in principle. I must point out that I didn't really specify what "simple" means to me, but it's quite a nuanced topic, as your examples show. I just mean: keep it reasonable given the context.

Why I think we do agree is that what I really love to see is idiomatic code. Does the software you're working on already use array map/filter? Then go right ahead. Could the types of developers working on the project be reasonably expected to learn it? Then also, go right ahead!

I prefer to use frameworks for commercial projects because of this reason. I can hire more talented developers because I can test to see if they know the idioms well, and I can also be happy investing in their learning of the framework knowing that it's going to be repeatably useful for them while working on the project, and can upskill the whole team over time. You're essentially defining a body of knowledge expected by picking a framework.

Similarly when you're architecting software you're essentially choosing what level of developer you're going to need to hire or train, and so if you're in that position it's smart to be cautious of increasing the burden of knowledge too much unless you're happy hiring for that.

I disagree, and agree with Mark Seemann's take[0]. If something is better, switch to it even if it causes inconsistencies. Over time you can refactor the old code.

You've limited your project's progression to whether a billion-dollar company has created a framework productive and popular enough to be taught in university. You'll never get hackers like pg who can code LISP if you go that route. You should hire good talent and then train them so that they understand, and hopefully keep them, so they stay long after they work on your one project.


Assuming that you will be ever allowed to refactor properly rather than be put under immense pressure to fix other things right now.
I think that's the crux of the disagreement, I explicitly don't want hackers, I want professional software developers to execute a very reasonable software project effectively, which is not everyone's cup of tea.

There's plenty of room for hackers in the industry, to go make really cool stuff and make endless cash, and I love the architecture part more than the rest of it so for personal projects that's what I'm all about. But if I want to achieve delivery on time and in an expected format, I want zero hackers, I want career engineers. That is definitely a boring take for some, but that's just part of a maturing industry. Not every project needs to be an R&D project at the same time.

"Simple Made Easy" by Rich Hickey: "We should aim for simplicity because simplicity is a prerequisite for reliability. Simple is often erroneously mistaken for easy. "Easy" means "to be at hand", "to be approachable". "Simple" is the opposite of "complex" which means "being intertwined", "being tied together". Simple != easy. ..."
The primary source for this is

(I am not entirely sure I agree with its thesis or its applicability to Go, but since nobody had actually linked you directly to the concept, I thought it would be worthwhile to do so.)

I think Go can only be defined as "simple" in the New Jersey (worse is better) sense of simple, that is: "it is more important for the implementation to be simple than the interface."

A lot of things about Go are not simple. In my opinion a simple language is not about how simple it is to write a parser or a compiler for that language, but it's about how many different non-trivial and arbitrary pieces of information the developer has to memorize.

This is highly tied to the Principle of Least Astonishment[1] in language design: how many unexpected surprises does the programmer have to deal with?

With Go, you get quite a lot:

1. Go already has generic types: the magical maps, slices and channels. Nothing else gets to be generic.

2. Even if you think #1 was also true for Arrays in Java 1.4 and no one was complaining, Go goes further: it already has generic functions like 'copy', 'len', 'min' and 'append'. Since you cannot properly describe the interface of a magic built-in function like 'append' using the Go language itself, this is not a standard library function, but should be viewed as an entirely new piece of custom syntax, like the print statement in Python 2.x.

3. Nil interfaces and interfaces with a nil pointer are not equal.

4. Multiple return values are a magical beast - they are not tuples and you cannot manipulate them in any useful way.

5. Channel axioms[2]. Possibly one of the more astonishing and painful aspects of Go.

6. Slices are mutable, unless you copy them. This can lead to some very surprising cases where a slice is passed down many layers and then modified, breaking the caller.

7. Continuing the topic above, Go has neither clear data ownership rules (like Rust), nor a clear documentation tradition on who owns the data passed to functions (like C/C++), nor a way to enforce immutability/constness (like C++, Rust or FP languages). This really pushes a lot of the cognitive load onto the developer.

8. Go modules are a lot better than what we had before, but are quite hard to deal with. The moment you need to move to v2 and above and start creating subdirectories, they become rather confusing compared to what you would do in other package management systems.

9. If a simple language is a language that allows you to write _simple programs_, and you follow Rich Hickey's classic definition of Simple[3], then Go is probably one of the LEAST simple languages available today.
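The slice-mutability surprise above has a close analogue in Python lists, sketched below purely for illustration (the function names are made up): a helper that mutates its argument in place silently rewrites the caller's data, while the defensive version copies first, much like Go's copy idiom.

```python
def normalize(values):
    # Looks like a pure transformation, but mutates in place:
    # the caller's list changes too.
    for i, v in enumerate(values):
        values[i] = v.strip().lower()
    return values

def normalize_copy(values):
    # The defensive version builds a new list, leaving the
    # caller's data untouched.
    return [v.strip().lower() for v in values]

names = ["  Alice", "BOB  "]
normalize_copy(names)   # caller's data untouched
normalize(names)        # caller's data silently rewritten
```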

tl;dr: I'm not saying other languages often compared to Go (like Rust or Java) don't have their own share of complexities, but I don't think Go should be viewed as a simple language in the broadest sense. It is a language with a rather simple implementation for our day and age (though Pascal was much simpler if we stretch this definition backwards).

[1] [2] [3]

It's hard to say from a single quote from a single person. I dare say most developers confuse difficult with complex.[0] His coding style may have been brutally simple, even if that meant very hard. He also could have been a bad programmer.

I often take a look at a problem from multiple perspectives in order to try and find ways of minimizing the number of special cases or minimizing the number of states in the (perhaps implicit) finite state machine. This is often harder than just gut-feeling my way through the most intuitive ad-hoc coding solution.

For instance, if something has an optional timeout, I strongly prefer to write it as a non-optional timeout that defaults to something absurdly large (but not so large as to uncover multi-billion-year overflow bugs in libraries I'm using), usually 100 years. Maybe that's the hard way of doing it, but it gets rid of special handling of the optionality. I'm sure some colleagues would describe this as "the hardest way" to write an optional timeout, but it objectively has fewer code paths to reason about and test. Some people really hate seeing code that doesn't treat the no-timeout case as a special case, because they just find it uncomfortable to switch perspectives. They really want to code it up as they most naturally think about it, not in the way that yields the least twisted code.
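That approach can be sketched as follows; the names and the 100-year constant are illustrative, not taken from the systems described here.

```python
import time

CENTURY = 100 * 365 * 24 * 3600  # ~100 years, in seconds

def wait_until(predicate, timeout=CENTURY, poll=0.01):
    """Poll until predicate() is true or the deadline passes.

    Every call has a deadline, so there is exactly one code path:
    no special-casing of "no timeout" to write, reason about, or test.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```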

In another case, one of my colleagues wrote some minor error recovery logic for a distributed system. I politely told him that his solution had too many implicit states and would get stuck if messages were delayed between systems. I proposed a simple 4-state machine: ok, trying_to_resolve, resolved, and taking_too_long_to_resolve. But, he was the one originally assigned the task, I didn't have any real authority, and it wasn't worth a fight. He said the way he wrote it was "easier" and "more natural." A few months down the road, his solution got stuck and never alerted us that it was taking too long to resolve the error, because messages got delayed between systems. In an afternoon, I whipped up my original proposal: since the recovery action is idempotent, when you go into the recovering state, just blindly fire off the recovery action every x seconds until you either get confirmation of resolution, or after y seconds give up and alert the humans that the problem might not be resolved. As far as I know, my 4-state FSM solution is still in production years later. I'm sure the author of the original felt a 4-state finite state machine was "the hardest way to write it."

In a third case, we have a pretty slick internal publish-subscribe system, but the error handling is just level-based: the subscriber provides a callback taking a boolean that indicates if the publisher has just gone from "bad" to "okay" (true) or "okay" to "bad" (false). Publishers have an upper time limit of inactivity after which they'll publish out a size zero message, so if a subscriber doesn't get any messages in that maximum idle period plus some configurable leeway, then the subscriber needs to assume the publisher has died and go into error mitigation/recovery/alerting logic. It's a pretty simple two-state FSM. The start state is the "bad" state. Every message results in the current time being recorded as the latest timestamp, and if the current state is "bad", transition to the "ok" state and pass true to the health status callback. If there's not an existing timer, create one for transitioning back to a "bad" state. When the timer goes off, check the latest recorded timestamp, and see if you really should transition to a "bad" state and call the health status callback with false. Otherwise, calculate the next timeout based on the latest heartbeat and reset the timer. The problem is that it starts out in the "bad" state, so in order to handle the case of publishers being dead at subscription time, all subscribers need to implement their own timer logic, and a lot of subscribers either don't try to handle the case or handle it incorrectly. I spent a while trying to convince the main developer for the pub-sub system to switch to a tristate FSM: start, bad, and ok. If you use 100 years for the default time to transition from the start state to the bad state, you'll get backward-compatible behavior for subscribers that just assume their first health status callback must be their initial notification that the publisher is live. The other state transitions were all really easy to work out. 
I sent him an email with a pretty state transition table showing all 4 possible state transitions, what triggered them, and which transitions triggered which health status callbacks. It's really dead simple: 3 states, 4 transitions, and it greatly simplified code on the subscriber side and stopped forcing all subscribers to implement their own poor solutions, and it was 100% backward compatible if default parameters were used. He kept on pushing for various ad-hoc solutions with more implicit states and state transitions because his gut-feeling solution was easier for him than thinking in terms of a 3-state finite state machine. We went through a couple of back-and-forths with me pointing out flaws in his ad-hoc proposals, and him not pointing out any flaws in my FSM, but just complaining that it was "complex". But, he didn't really mean "complex", he meant "hard"[0] because he wasn't accustomed to thinking in terms of state machines. With the extra corner cases and implicit states in his ad-hoc proposals, his solutions were more complex by an objective complexity metric. But, I'm sure he'd complain that my 3-state, 4-transition state machine was writing it "the hardest way."
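A minimal sketch of the described tri-state machine follows. The names are illustrative, and whether the start-to-bad transition fires the callback is an assumption here, since the original transition table isn't reproduced.

```python
class HealthMonitor:
    """Tri-state health FSM: states "start", "bad", "ok".

    Transitions (4 total): start->ok and bad->ok on any message;
    start->bad and ok->bad on idle timeout.
    """

    def __init__(self, on_health_change):
        self.state = "start"
        self.on_health_change = on_health_change  # callback(is_healthy)

    def on_message(self):
        # Any message (even a zero-size heartbeat) means the
        # publisher is alive.
        if self.state != "ok":
            self.state = "ok"
            self.on_health_change(True)

    def on_idle_timeout(self):
        # No traffic within the allowed window: the publisher is
        # presumed dead, whether or not we ever heard from it.
        # (Assumption: start->bad also fires the callback.)
        if self.state != "bad":
            self.state = "bad"
            self.on_health_change(False)
```

With an absurdly large default for the start-to-bad timer, subscribers that only expect the old two-state behavior see no change.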

I also strongly prefer to put throttles with very high limits in places where we don't think throttles will ever be necessary. When the network admins are yelling that you're killing the network is no time to be coding up a throttle instead of just changing a configuration. I've had people argue that putting in throttling logic is too complicated. When some middleware daemon got absurdly slow, I've also had to tell those same people: "The middleware admins are screaming. If the middleware daemon's memory usage hits 3.75 GB, we need to kill your programs to keep the middleware from falling over." Sometimes a colleague complaining about complexity is really trying to simplify things to a dangerous degree.


I think of Rich Hickey [1]: helping others to understand the code faster by identifying and removing most of the accidental complexity.


This reminds me of a talk by Linus Torvalds. From this perspective, mastery can be seen as an acquired taste that comes after doing a large volume of work.
Can you please link the talk?
It’s this one: The code he referred to is here:
Ah; I had seen the TED talk earlier, though I had not looked at the code example in detail.

Thanks for the links.

I'm a bit torn on the hooks feature since it seems like a great example of something that's easy but not simple [1], as well as going against React's prior principle of minimizing the API [2].

I'm glad the article also shows how to use the more traditional API as well, even if it is a bit more verbose.

[1]: [2]:

I disagree that hooks increase the API surface. They decrease the React API.

Before hooks, if you wanted a stateful component you had to use class components with all of their specific methods (didMount, didUpdate, shouldUpdate, render etc).

Hooks allow functional components to have state, and a functional component in and of itself has an even smaller API than the class. The React API vastly decreases. A single hook, useEffect, replaces at least 3 methods (didMount, didUpdate, willUnmount) that previously had to be written out separately in class components. Each of those methods had to contain logic for ALL your state and side effects, so they could very easily grow in scope and be responsible for handling many things.

Now each individual concern can be packaged up in its own useEffect call. If I have state that is relevant to the useEffect call, then I just build my own custom hook that binds the stateful data with the useEffect.

Because hooks are just functions built up from other hooks, I'm no longer constrained by the React API (it's effectively gone) and I can much more easily build the abstractions I need, which can be shared across components.

No, that is not my argument. I'm sorry if I didn't explain myself well enough for everyone to understand.

My point is that "readability" is composed by many factors, not just one. What is "readable" to some will not be "readable" by others. It depends.

For someone who knows Fortran, learning a Fortran-like language is easy (like C or JS). For someone who knows Common Lisp, Clojure is easy. But for someone who knows Fortran, Clojure is less familiar, hence the code will, on a glance, look less "readable".

The other factor is if something is "simple" by itself.

Most of this view comes from Rich Hickey, who wrote Clojure. He discusses "readability", or rather "simplicity", in his talk "Simple Made Easy". If you haven't seen it before, do yourself a favor and watch it:

He'll explain it, with further points, much better than I ever can.

this is a good example of a "simple versus easy" trade-off[1], i.e. this is something that is easier to get started with at some expense of simplicity: it is over-complicated & hides complexity behind a simple interface, but not in a way that is reliable - so eventually things will break, and then you'll be strictly more confused than if you had done things in a less easy but simpler way - learning a bit about dependency management & setting up a reproducible environment for running your python script/app.

that said, it is an amusing hack to run the virtualenv'd python in a subprocess and then extract the environment vars and inject them into the already running python process.

some of the code could be cleaned up a bit -- e.g. it could be using to just ask for a uniquely named temp dir, some of the subprocess invocations don't appear to have error handling, etc.

for my python hobby project these days i only depend upon packages i can install using apt in debian stable. containers for isolation help too.



The goal of class based OOP is extension. When something is extended you then have the original and an extended derivative. What was one is now two, or more. That might be easy, but it certainly is complex. Complexity is to make many.
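The "what was one is now two" point can be made concrete with a toy sketch (Python here purely for illustration; the class names are made up):

```python
class Report:
    def render(self):
        return "plain report"

# Extension: the hierarchy now contains two render behaviors, and any
# change to Report.render must be checked against every derivative.
class FancyReport(Report):
    def render(self):
        return "fancy " + super().render()
```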

I prefer simplicity and predictability.

what would you recommend instead?
I take it you're unemployed then?
I’ve never had trouble finding work as a senior software engineer, but currently I’ve managed to escape the nonsense.

I have found from working in Java-heavy industries that the people most reliant on OOP tend to be school-educated developers with little or no capacity for self-education. These tend to be the developers most concerned with job security. Very few (almost none) of the self-taught developers I’ve known openly embrace OOP as their preferred paradigm.

High insecurity among software developers is why I’ve grown to dislike writing software in the corporate world and why I was happy to accept a management opportunity doing something unrelated.

Jul 21, 2020 · 133 points, 30 comments · submitted by BerislavLopac
All of the Rich Hickey talks I have watched (there are many!) are insightful and entertaining. He manages to talk about technical things on an abstract, sometimes philosophical level. I even re-watched some talks after a couple of years and got something new out of them.

One of the most entertaining ones I watched was "Spec-ulation". It is less general than some of the more shared/cited ones but really funny.

For anyone else interested, it is available at
This talk had a profound effect on how I think about complexity, and taught me not to conflate easy and simple, two words in English that are often treated as synonyms. It really opened my eyes to how much tooling we use as devs (particularly in ops) that hides much of the complexity of a system. I’m much more dubious now whenever a tool comes around that makes X easier, and I look for any added costs to complexity to see if there’s a hidden tradeoff.

I highly recommend giving the talk a listen.

This talk marked an inflection point in my career. The physical metaphor of a braid, and having that braid straightened out, applies to so many areas of software. "Unbraidedness" is a very useful (albeit informal) measure of quality for me. It helps me detect when something's not right, as well as hinting at how I can help straighten it out.
Even the informal concept is useful, but I suspect it could be formalized. I'd love to hear from a theoretical CS person about that notion.
I see this talk given as an example of the best talks. I watched it twice. I'm obviously in a minority here, but I don't get it. I hear just truisms, like: "It's better to be rich and healthy than poor and sick." I know it's hard to summarize a talk in a few paragraphs, but what big point did you get out of it? Honest question, I'm genuinely interested.
The core idea is to separate ease from simplicity and to talk about the implicit trade-offs of adhering to one over the other.

He claims that certain (often popular) tools and practices adhere to ease rather than simplicity, which introduces accidental complexity. And he introduces the term "to complect", which is now widely used in the Clojure community.

Many of the concepts and comparisons he talks about can be found in the design of Clojure and Datomic.

What is simplicity, and how does it differ from ease? I haven’t gotten a chance to watch the talk yet.
Simplicity is described as being "disentangled" or the opposite of complexity.

I personally often picture complexity as a graph of nodes and edges:

- The more edges you add, the more complex the thing it describes.

- The more rules you can deduce about the graph (for example "it is a unidirectional circle-like") such as the flow of the edges, counting etc. the less complex it is.
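That intuition can be made concrete with a toy metric. This is purely illustrative (the adjacency-dict representation and the example graphs are made up):

```python
# Illustrative only: more edges between parts means more complexity to untangle.
def edge_count(graph):
    return sum(len(deps) for deps in graph.values())

tangled = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}  # everything touches everything
straight = {"a": ["b"], "b": ["c"], "c": []}                   # a simple pipeline

assert edge_count(tangled) == 6
assert edge_count(straight) == 2
```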

The imagery in the talk describes it similarly: complexity is more knotted and interwoven; simplicity is more straightforward, clear and composable.

Ease is described as something being "near", also in the dimension of time. Something you already know or can learn/do quickly.

The talk goes on to describe how simplicity requires up-front investment and time to achieve, and also how ease and simplicity sometimes overlap and when they are at odds.

1) easiness is subjective, simplicity is objective.

2) simple code is easy to read, but hard to write.

If you're a programmer, and you're not surrounded by people conflating both words, consider yourself lucky. What does a coworker really mean by "I did the simplest possible thing"?

Ok. So it might be a language thing. Not being a native speaker, easy and simple from the POV of a customer are the same for me.

For example, for me, statement 1 is false. Simplicity is also subjective

I am one of the people conflating the terms. Are they used from the dev's POV? Like, what's easiest for you (adding one more parameter to this function, or another special case handled with `if`s) might make things more complicated.

From the customer's POV, simple and easy are the same, no?

It is a language thing, even for native speakers :) That is why he went to a lot of trouble to define each term in the beginning, and kept referring back to his opening definitions throughout the talk:

Simple = One thing not 'mixed - linked - folded' with anything else. That's why he says it is objective - if you look at something and see it's mixed up with something else, it's not simple (in his terms, now it's complex - eg many things woven together)

Easy = Near to you. Near in as you know it already, or you have it already and so on.

His talk is for the dev pov, but even outside of dev, simple does not mean easy all the time.

For example (and something I am struggling with right now): it is simple to lose weight - eat fewer calories than you burn each day. Simple.

But I can attest it is far from easy.

Thank you for the reply. I'm starting to get it.
Wow, it has been close to a decade since then. Is there a Simple Made Easy 2? He presents a number of grandiose ideas; it would be interesting to see what he thinks he got right and what went off the rails.
He indirectly recognized the importance of declaring the shape of data 6 years later by introducing spec, which to date still has big issues and screams "just use a proper statically typed language".
> spec, which to date still has big issues and screams "just use a proper statically typed language"

I think this statement is unfair.

I don't think there is a widely used statically typed language that is nearly as expressive and simple as spec. Also the opt-in nature of it retains the advantages of dynamic typing.

There's no "proper statically typed language." Every single statically typed language I tried, comes with certain drawbacks. However, I'd like to add - I do miss sometimes static types in dynamic langs, including Clojure. Bottom line - there are truly no silver bullets. That's why we keep inventing new programming languages and new paradigms. But of course, once we pick up one "religion," we feel compelled to yell at others: "you're doing it wrong!"
Every time I read your name, you subtly spread some negativity :)

Maybe Hickey just didn't prioritize "types" as highly as some other ideas to spend his time on. And in my opinion he focused on the right things and achieved something really great.

As general advice: stay positive, focus on the things you like instead of telling everyone what you don't like, it's better for your mental health ;)

That’s precisely what this is, including what he got right &c — “10 Years of Clojure”
I wonder what he meant by "Rules" (compared to "Conditions"), in his table where he describes "Complexity" vs "Simplicity".

Is it some kind of paradigm that exist in Clojure but not in procedural languages?

I believe conditionals is referring to if statements (and their ilk, switch/case/cond etc). Rules is referring to rule systems and/or more generally declarative knowledge systems. Things like core.logic in clojure (or prolog, datalog and minikanren in the wider world).

Stuff like this is not specific to clojure, however it would be harder to have an embedded rules system in your language if it's not a lisp. You'd probably have to resort to a string-based DSL (something like Drools in Java).

Fantastic talk, covering the difference between “simple” and “easy”, and how (when you can’t have both) the former is preferable.

I find it interesting that Python, despite being widely described as a simple language, takes the opposite approach. The language isn’t simple at all [0], it prefers to make things easy. This preference even appears (in the contrapositive) in the “Zen of Python”: “Complex is better than complicated.”

As a specific example, Python 3.7 introduced dataclasses, making them dependent on type hints when they could have been completely orthogonal. The language design ignored this talk’s advice against “complecting” features.
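The coupling is easy to demonstrate: `@dataclass` discovers fields via the class's annotations, so an attribute without a type hint is silently ignored.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int      # annotated -> becomes a field in __init__/__repr__/__eq__
    y: int = 0  # annotated with a default -> optional field
    z = 5       # no annotation -> silently NOT a field

p = Point(1, 2)
assert (p.x, p.y, p.z) == (1, 2, 5)           # z is just a class attribute
assert sorted(Point.__dataclass_fields__) == ["x", "y"]
```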



> I wrote a #clojure program for logic A in 4 hours. I've been asked to rewrite it in #python because of some product requirements. It's been 3 days since i've started and still on the first 25% of it. Note: I'm using python for more than 13 years.

Yeah, no way that's true if his python and clojure knowledge are at the same level. That tweet sounds like what you see on r/clojure all the time, a cult.
It seems hard to say conclusively what is or isn't possible about differences in development time without knowing more about the problem domain. Since he mentions GIL in one of his tweets, it seems like his code must have involved concurrency, and Python and Clojure differ enough in this regard (to say the least...) that it seems believable that something that's easy in Clojure could be gut-wrenching in Python.
> That tweet sounds like what you see on r/clojure all the time, a cult.

Check any Clojure forum - clojureverse, clojurians slack, the mailing list. Talks from conferences: Clojure/conj, ClojureD, ClojureX, etc. Click around, check the profiles. Then you'd probably see that the majority of Clojure users are not that young. Most of them came to Clojure after learning other - very often multiple - programming languages. Many of them have tried all sorts of different tools before finally discovering Clojure.

You see it over and over again, people claiming that Python, and other popular PLs have little to offer in comparison to Clojure ecosystem. And your only explanation is "it's a cult"? Yeah, sure. Clojurists are just a bunch of losers who simply failed to learn Python. It is a pretty cool cult to be in, it is based on ideas endorsed by people like Guy Steele, Gerald Jay Sussman, Paul Graham, Matthias Felleisen, Brian Goetz, and many others.

Just a language that isn't yet used widely in production. I remember when Python was like that, there is even a relevant xkcd strip:

Gosh, I remember when JAVA was like that!

Clojure is used in production a lot - a big majority of users report using it for work. There's been a significant shift from the enthusiast-dominated community days of 10 years ago.

See the first graph at where you can see what portion of respondents have reported using it for work over the years.

> isn't yet used widely in production

What are you talking about? Walmart has built their receipt processing in Clojure. Apple uses it (afaik for payments processing). Cisco has built their entire security platform in Clojure. Funding Circle has built their peer-to-peer lending platform in Clojure. Nubank - the largest independent digital bank in the world and sixth-largest bank in Brazil - has been using Clojure extensively. There are many other companies very actively using Clojure: Pandora, CircleCI, Pitch, Guaranteed Rate, etc. It's even used at NASA.

It's the third most popular JVM language after Java and Kotlin, and the most popular alt-js PL (if you don't consider Typescript as alt-js). It's the most popular language among PLs with a strong FP emphasis - it is more popular than Haskell, Elm, Idris, OCaml, Erlang, Elixir, F#, Purescript, and (recently) Scala.

Clojure is very ripe for prime-time. The ecosystem is really nice - a lot nicer than in most other languages. It is an extremely productive tool. But of course skeptics be like: "but it's dyyying ...", "it ain't popular ...", "but all those parentheses ...", "it's a cult ...", etc.

After using a bunch of other programming languages professionally (for over fifteen years), I can confirm - Clojure is a cult. I am so stuck in it and have no desire to leave. Rich Hickey is a voodoo shaman or something. Don't you ever watch his talks and do not try Clojure! I have warned you!
Sadly it's a video/presentation, not an essay, but Simple Made Easy[1] is the single software argument that has made the most impact on me.



I’ve been programming for a long time, watched this presentation several times, done a bunch of other research, and still don’t know if I understand what this presentation is about. I fear that I’ve tried to apply these simple-vs-complex principles and only made my code harder to understand. My understanding now is that complexity for every application has to live somewhere, that all the simple problems are already solved in some library (or should be), and that customers invariably request solutions to problems that require complexity by joining simple systems.
> I fear that I’ve tried to apply these simple-vs-complex principles and only made my code harder to understand. My understanding now is that complexity for every application has to live somewhere, that all the simple problems are already solved in some library (or should be), and that customers invariably request solutions to problems that require complexity by joining simple systems.

Simplicity exists at every level in your program. It is in every choice that you make. Here's a quick example (in rust):

    fn f(i: i32) -> i32 { i }          // function
    let f = |i: i32| -> i32 { i };     // closure
The closure is more complex than the function because it adds in the concept of environmental capture, even though it doesn't take advantage of it.

This isn't to say you should never pick the more complex option - sometimes there is a real benefit. But it should never be your default.

You are correct in your assessment that customers typically request solutions to complex problems. This is called "inherent complexity" - the world is a complex place and we need to find a way to live in it.

The ideal, however, is to avoid adding even more complexity - incidental complexity - on top of what is truly necessary to solve the problem.

I think the shift in programmers' perspective on where complexity should live is very much related to the idea of "the two styles in mathematics" described in this essay on the way Grothendieck preferred to deal with complexity in his work:
> still don’t know if I understand what this presentation is about

1. The simplicity of a system or product is not the same as the ease with which it is built.

2. Most developers, most of the time, default to optimizing for ease when building a product even when it conflicts with simplicity

3. Simplicity is a good proxy for reliability, maintainability, and modifiability, so if you value those a lot then you should seek simplicity over programmer convenience (in the cases where they are at odds).

I find the graph at the top of Sandi Metz's article "Breaking up the Behemoth" to be poignant.

If you agree with her hypothesis, what it's basically saying is that a clean design tends to feel like much more work early on. And she goes on to suggest that early on, it's best to focus on ease, and extract a simpler design later, when you have a clearer grasp of the problem domain.

Personally, if I disagree, it's because I think her axes are wrong. It's not functionality vs. time, it's cumulative effort vs. functionality. Where that distinction matters is that her graph subtly implies that you'll keep working on the software at a more-or-less steady pace, indefinitely. This suggests that there will always be a point where it's time to stop and work out a simple design. If it's effort vs. functionality, on the other hand, that leaves open the possibility that the project will be abandoned or put into maintenance mode long before you hit that design payoff threshold.

(This would also imply that, as the maintainer of a programming language ecosystem and a database product that are meant to be used over and over again, Rich Hickey is looking at a different cost/benefit equation from those of us who are working on a bunch of smaller, limited-domain tools. My own hand-coded data structures are nowhere near as thoroughly engineered as Clojure's collections API, nor should they be.)

There is a transcript here:
Rich belongs to the small class of industry speakers who are both insightful and non-dull. Do yourself a favour if you haven't already, and indulge in the full presentation.
I still can't believe that I was actually there during that exact presentation but at the time it didn't have the impact on me that it seems to have had on HN as a whole. Maybe I should review it again, or maybe I'm just not smart enough / don't have the right mindset, IDK.
I think that the thing about that talk that struck a chord is that he took a bunch of things that people had been talking about quite a bit - functional vs oop, mutability, data storage, various clean code-type debates, etc. - and extracted a clear mental framework for thinking about all of them.
Rich Hickey seems to be a bit of a Necker cube. Some people i know and respect think he is a deep and powerful thinker. But to me his talks always seem like 90% stating the obvious, 10% unsupported assertions.
If you find 90% of his statements to be obvious, maybe all that means is that you're a deep and powerful thinker too?
Yeah, I think it depends on whether you're thinking about things from a SYSTEMS perspective or a CODE perspective.

Hickey clearly thinks about things from a systems perspective, which takes a number of years to play out.

You need to live with your own decisions, over large codebases, for many years to get what he's talking about. On the other hand, in many programming jobs, you're incentivized to ship it, and throw it over the wall, let the ops people paper over your bad decisions, etc. (whether you actually do that is a different story of course)

Junior programmers also work with smaller pieces of code, where the issues relating to code are more relevant than issues related to systems.

By systems, I mean:

- Code composed of heterogeneous parts, most of which you don't control, and which are written at different times.

- Code written in different languages, and code that uses a major component you can't change, like a database (there's a funny anecdote regarding researchers and databases in the paper below)

- Code that evolves over long periods of time

As an example of the difference between code and systems, a lot of people objected to his "Maybe Not" talk. That's because they're thinking of it from the CODE perspective (which is valid, but not the whole picture).

What he says is true from a SYSTEMS perspective, and it's something that Google learned over a long period of time, maintaining large and heterogeneous systems.

tl;dr Although protobufs are statically typed (as opposed to JSON), the presence of fields is checked AT RUNTIME, and this is the right choice. You can't atomically upgrade distributed systems. You can't extend your type system over the network, because the network is dynamic. Don't conflate shape and optional/required. Shape is global while optional/required is local.

If you don't get that then you probably haven't worked on nontrivial distributed systems. (I see a lot of toy distributed computing languages/frameworks which assume atomic upgrade).
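In code, the "presence is checked at runtime" point might look like this. A hedged Python sketch, not protobuf itself; the message shape and handler name are invented: the shape of the data is global knowledge, but each consumer decides locally which fields it requires.

```python
# Sketch: the message's *shape* (field names/types) is shared knowledge,
# but required-vs-optional is decided locally, at the point of use.
def handle_order(msg):
    if "id" not in msg:                      # this handler locally requires "id"...
        raise ValueError("missing id")
    return (msg["id"], msg.get("note", ""))  # ...but treats "note" as optional

assert handle_order({"id": 7}) == (7, "")
assert handle_order({"id": 7, "note": "rush"}) == (7, "rush")
```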


His recent History of Clojure paper is gold on programming language design:

I read a bunch of the other ones. Bjarne's is very good as usual. But Hickey is probably the most lucid writer, and the ideas are important (even though I've never even used Clojure, because I don't use the JVM, which is central to the design).

That is the key: stating the obvious actually is hard, and I think Rich does a beautiful job of translating the thoughts and feelings most programmers have into words. It actually gives us a way to discuss and think about things (especially design and architecture) with others. I learned that there is no such thing as "common ground" or common knowledge magically and intuitively shared by all programmers. So if this already reflects your thoughts - even better.
There is a transcript here:

It is a good one, but the way it gets applied drives me a bit batty sometimes. Hickey includes some concrete examples of simple vs complex, and there exist people who will extol this talk and then pick the choice from the complex column every time. Really wonder what they’re getting out of it.
I don't have a transcript link at hand, but as far as videos go, "Functional Core, Imperative Shell" / "Boundaries" by Gary Bernhardt is also a must-see (or must-read, hopefully).
Here's the video link:

Unfortunately there's no transcript on the official video

> makes you a lot more familiar with it

Essentially boiling down to the famous "easy vs simple" where something is "easy" if you're familiar with it and can draw conclusions via leveraging existing experience and "simple" is that it's maybe not familiar, but easier to become familiar with because it's not complex.

Blatantly stolen from Rich Hickey's "Simple Made Easy" talk:

I don't like lisp, and I don't code in lisp - not even a line... and yet I have read TONS of lisp stuff.


The #1 reason: The lisp people have a lot of cool things to teach.

One of the most obvious examples:

So, you can learn enough of Lisp or APL or OCaml or Haskell to tag along with somebody smart in the field (who uses a certain language, maybe for very good reasons, maybe just because they like it) and understand stuff.

Most of the real gems are kind of easy to learn with the most basic understanding of a language.


A lot of the time, it's THAT kind of person who has the best insight into why a certain language matters.

Continuing with the example of Rich Hickey:


You can translate a lot of ideas from one language to another - that is the most basic and simple benefit.

It's just that certain languages fit certain minds/goals/niches better, so that's where to look for better answers...

I agree with the notion.

I find that a lot of the less experienced devs I work with like to prioritize "ease of use" in API design over other things, such as testability, orthogonality, decoupledness, and reversibility. If an API is "easy to use" from a client perspective, they often deem it a good one. API ease of use is definitely important, but it has to be weighed against other constraints, which are fuzzier and more about long-term maintainability. Sometimes making an API slightly harder to use (often by requiring additional client knowledge about the domain) is worth the trade-off, since it means the API is easier to extend in the future.

It's definitely a skill to learn what helps long-term usability vs short term usability.

I often go back to Rich Hickey's talk about Simple Made Easy when thinking about this problem.

IMO "public" facing APIs should always be easy to use and only require the minimum information from the user necessary. An example of an outstanding public API would be nlohmann's json library for C++[0].

Whether that API is merely a wrapper for an internal API that is more testable (i.e. allows injection, etc.) or whatever is another matter.


I think there can be debate on what is "minimum information". I'd also say "easy" for one developer may be challenging for another developer if the domain of the model is foreign to them.

A lot of frameworks require up-front knowledge to work with. To some, that's not "easy", but it allows the client to do so much because what the framework is providing is not simple.

In other places, the API can be dead easy because what it's providing is so simple.

I think a good starting point is to integrate the principles described in the Simple Made Easy[1] & Hammock Driven Development[2]. These are overarching first principles that help in designing and writing code, but also in communication & team work.

[1]: [2]:

Hickey argues that simple != easy. "Easy" is inherently subjective, and only tells us how easy something is for the actor attempting it; if you know more about a domain, things related to that domain become easier. If you knew how to tie knot A and I knew how to tie knot B, we would each find our own knot easier. "Simple" is more objective, insofar as anything can be. We could judge the complexity of knots in a rope by how many times the rope turns over on itself, and that complexity would not be dependent on your knowledge of particular knots. In principle we should be able to agree on which knot is more complex even if we still have an easier time tying the ones we are familiar with.

Of course you can mount an argument that simple is used as a synonym of easy, because it often is, but I like the idea of simple being the opposite of "complex" and "complex" meaning "many-folded" rather than "difficult."

Perhaps, but it's a complete sidetrack. In the context of this discussion, "simple" was indeed being used as a shorthand for "accessible":

> Finally, this is going to be more accessible to more users to Mozilla since now they are using Matrix.

> I disagree, IRC is as simple as it gets. This might discourage some people from joining.

Fair point, but in this case it's really easy that users want, not necessarily "simple". Going back to the points made earlier in this thread, IRC is a lot simpler than WhatsApp, but WhatsApp has a much larger user base because it's easier to use.
> [...] easy is confused with being simple [...]

The first half of Rich Hickey's "Simple Made Easy" presentation does a great job of defining easy/hard and simple/complex axes and distinguishing them.


It has been discussed before on Hacker News:

So, my first answer is that you shouldn't have to, and if you do, you might not be writing proper Clojure code.

The most fundamental concept in Clojure, from the famous Rich Hickey talk Simple Made Easy, is that your code should strive to be decomplected.

That means that your program should be made of parts that when modified do not break anything else. This, in turn, means you don't really ever need to refactor anything major.

In practice, this has held true for most of my code bases.

Now, my second answer, because sometimes there are some small refactors that may still be needed, or you might deal with a Clojure code base that wasn't properly decomplected, you would do it the same way you do in any dynamic language.

The two things that are trickier to refactor in Clojure are removing/renaming keys on maps/records, and changes to a function signature. For the latter, just going through the call-sites often suffices. The former doesn't have great solutions for now. Unit tests and specs can help catch breakage quickly. Trying out things in the REPL can as well. I tend to perform a text search for the key to find everywhere it is used, and refactor those places. That's normally what has worked best for me.

It helps a lot if you write your Clojure code in a way that limits how deep you pass maps around. Prefer functions which take their input as separate parameters. Prefer using destructuring without the `:as` directive. Most importantly, keep your logic self-contained and your entities top-level.

Refactoring involves unavoidable, heavy code changes due to any number of unforeseeable circumstances like a change in requirements that forces a bedrock reabstraction because your previous solution was written to a different specification.

Maybe you want to add an archer class to your game that was melee-only and now your damage system needs to be reconsidered from scratch to be projectile-based instead of proximity-based. Maybe you're trying to move your 2D tile-based game into 3D gravity-based space and now your entire physics simulation has changed. Or you want to replace your AI enemies with networked multiplayer, lag compensation, and dead reckoning.

"Just write your code so you don't have to refactor it" is suggesting the impossible: that you somehow have zero unknowns from the moment you write your first line of code. A refactor that you can avoid upfront isn't a refactor nor what people are talking about when they bring up the challenges of refactoring.

Just chiming in because you seem to consider Clojure a great tool and want to spread the good word. But you're unintentionally damning it to suggest that Clojure's refactoring solution is to merely never incur significant change.

> Prefer functions which take their input as separate parameters.

In practice, it's better to avoid positional arguments and extensively use maps and destructuring. Of course, there's a risk of not properly passing a key in the map, but in practice that doesn't happen too often. Besides - Spec, Orchestra, tests and linters help to mitigate that risk.

> In practice, it's better to avoid positional arguments and extensively use maps and destructuring

We can agree to disagree I guess. In my experience, especially in the context of refactoring, extensive use of maps as arguments causes quite a lot of problems. Linters also do nothing for that.

Positional arguments have the benefit of being compile errors if called with the wrong arity. I actually consider extensive use of maps a Clojure anti-pattern personally, especially the style of having all your functions take a map and return a map. Now, sometimes this is a great pattern, but one needs to be careful not to abuse it. Certain use cases and scenarios benefit from this pattern, especially when the usage is a clear data-flow of transforms over the map. If done too much though - all over the app, for everything, and especially when a function takes a map and passes it down the stack - I think it becomes an anti-pattern.

If you look at Clojure's core APIs for example, you'll see maps as arguments are only used for options. Performance is another consideration for this.

Doesn't mean you should always go positional; if you have a function taking too many arguments, or arguments that are easy to mix up, you probably want to go with named parameters instead.

For example if you have something like:

    (study [student age] ,,,)
And inside it calls a bunch of auxiliary functions where you pass either `student` or `age` depending on what those functions do. Then someone says: "oh, we need to also add an address", and to have address verification in the midst of that pipeline. And instinctively the programmer would add another positional argument - and to all the auxiliary functions that require it. The problem with positional arguments is that they often lie: their value depends on their position, both in the caller and in the callee.

It also makes it difficult to carry the data through the functions in between. The only benefit that positional arguments offer is the wrong-arity errors (like you noted). And yes, passing maps can cause problems, but both Joker and Kondo can catch those early, and Eastwood does as well, although it is painfully slow. With Orchestra and properly Spec'ed functions, a missing or wrong key would fail even before you save the file. I don't even remember the last time we had a production bug due to a missing key in map args.

But of course it all depends on what you're trying to do. I personally do use positional arguments, but I try not to add more than two.
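The same trade-off exists outside Clojure. In Python terms (the function names here are hypothetical), positional parameters fail loudly on wrong arity, while an "entity" dict threads new data through intermediaries without touching every signature:

```python
# Positional: adding an address means changing this signature and every caller.
def study_positional(student, age):
    return f"{student} ({age}) is studying"

# Entity-as-dict: the signature absorbs new keys, at the cost of silent typos.
def study_entity(info):
    return f"{info['student']} ({info['age']}) is studying"

assert study_positional("Ada", 36) == "Ada (36) is studying"
# An extra "address" key rides along without any signature change:
assert study_entity({"student": "Ada", "age": 36, "address": "London"}) == "Ada (36) is studying"
```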

That's a bit of a different scenario than I was thinking of.

In your case, you're defining a domain entity, and a function which interacts on it.

Domain entities should definitely be modeled as maps, I agree there, and probably have an accompanying spec.

That said, I still feel the function should make it clear what subset of the entity it actually needs to operate over. That can be a doc-string, though ideally I'd prefer either destructuring without the `:as` directive, or exposing a function spec with an input that specifies the exact keys it's using.

Also, I wouldn't want this function to pass down the entity further. Like if study needs keys a,b but it then calls pass-exam which also needs c and d. This gets confusing fast, and hard to refactor. Because now the scope of study grows ever larger, and you can't easily tell if it needs a student with key/value c and d to be present or not.

But still, I feel since it's called "study", it feels like a side-effecting function. And I don't like those operating over domain entities. So I personally would probably use positional args or named parameters and wouldn't actually take the entity as input. So if study needs a student-id and an age, I'd just have it take that as input.

For non side-effecting fns, I'd have them take the entity and return a modified entity.

That's just my personal preference. I like to limit entity coupling. So things that don't strictly transform an entity and nothing else I generally don't have them take the entity as input, but instead specify what values they need to do whatever else they are doing. This means when I modify the entity, I have very little code to refactor, since almost nothing depends on the shape and structure of the entity.

Feb 23, 2020 · pansa2 on The Zen of Go
The article mentions “Simple is better than complex”, but not the next line of the Zen of Python, which I think tells us a lot about that language’s philosophy: “Complex is better than complicated”.

Looking closely, that line says “(not simple) is better than (not easy)”, or more clearly, “easy is better than simple”. Python definitely lives up to this - it’s easy to get started with, but if you look deeply it’s a very complex language.

Go’s philosophy is probably the opposite - that simple is better than easy. This is similar to the philosophy of Clojure, as explained by Rich Hickey in “Simple Made Easy” [0].


Don't forget this absolute banger:

> Obviously Go chose a different path. Go programmers believe that robust programs are composed from pieces that handle the failure cases before they handle the happy path.

A function which only returns an error can have its result ignored without any warning.

I don’t think that paragraph was referencing compiler guarantees.
It should refer to them or address them.

Since, given this, Go is no better than a language with unchecked expressions nobody handles...

There’s other reasons too! I’ve written the following:

    if v, err = func(); err != nil {
        nil, err
… and then went on to use `v`. Thanks to `v`'s zero value being legitimate (and not nil like a pointer would be), the program continues on as if everything is okay. In case you didn’t catch it, I forgot the `return`.
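For reference, here is a compilable sketch of that class of mistake (hypothetical `parse` helper; the missing `return` is left as a comment so the sketch builds): the zero value flows onward as if nothing went wrong.

```go
package main

import (
	"fmt"
	"strconv"
)

// parse doubles a parsed integer. The error branch is missing its
// return, so on bad input v's zero value (0) flows onward silently.
func parse(s string) (int, error) {
	v, err := strconv.Atoi(s)
	if err != nil {
		// forgot: return 0, err
	}
	return v * 2, nil
}

func main() {
	n, err := parse("oops")
	fmt.Println(n, err) // the caller never learns of the failure
}
```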

Rust takes a much better approach with Result, where the return value is either `Ok(v)` or `Err(e)`, and there’s no way to access a meaningless value for the other possibility.

Your example doesn't compile:

Go doesn't allow values to just be referenced without having some use; e.g., JavaScript's bare `"use strict";` expression-statement hack couldn't be done.

In general, I have never seen a bug caused by accidentally ignoring an error. It's a theoretical concern, but not a real world problem.

Don’t know what to tell you, I have personally made this mistake and not had this caught by the compiler. I haven’t used go in several years at this point, so it’s entirely possible this is a newly-caught scenario by the compiler.

Regardless, the fundamental point stands. Using tuples to return “meaningless” values alongside errors allows developers to mistakenly use those meaningless values.

I do wish Go would adopt sum types, but in practice errors like you describe are vanishingly rare. It’s mostly a theoretical problem.
Yeah, and this is such a simple change (compiler-wise) and a far stronger guarantee that I don't see why Go didn't implement it...

At least then errors as return values would be solid.

Of course, now they make programmers do all the extra error-wrapping work in 1.13+ to pass "richer" errors...

Eh. It’s verbose but I like it. It makes me think about the code a bit when I have to write a descriptive error wrap. Kind of annoying I guess... I haven’t written ultra large go codebases though so ymmv.
>It’s verbose but I like it. It makes me think about the code a bit when I have to write a descriptive error wrap.

Having the compiler force you, as is my suggestion, would make you think even more -- or rather, would make it impossible not to think, and impossible to skip or miss the error check.

Totally off topic: if it isn't a bother, would you mind emailing [email protected]? I would like to send you a repost invite.
When you write your program you have to explicitly ignore the error. Ignoring it is a way to handle it.
Any serious Go shop would have errcheck as one of the linters in CI.
Apparently linters are bad and that’s why go literally refuses to compile if you have unused imports. But crap like this? No problem, flies right through.
Linters aren't bad (Go basically has one as `go vet`, which checks for all kinds of "this is probably wrong", like the common "closing over a loop variable").

Warnings are bad, specifically when the warning is unambiguous (importing a package you aren't using is always wrong, though it makes debugging frustrating at times). The idea is that warnings that don't "stop" the build generally get ignored. Build most non-trivial C++ projects and count how many warnings flow past the top of your screen for an example of what they were trying to prevent.

What drives people crazy about Go is the laser-like focus of the designers on real world problems over theoretical problems.

Theoretical problem: Someone might mutate a variable intended to be constant.

Go designers: Then put a comment saying not to do that.

Real problem: People ignore compiler warnings.

Go designers: Then eliminate warnings.

Real problem: Exceptions can happen anywhere and often go unchecked.

Go designers: Then call exceptions "panics" and encourage people not to use them.

Theoretical problem: Someone might ignore an error return value.

Go designers: Let paranoid people write linters.

Etc. etc.

I can emphatically confirm that this is not what annoys me about Go; what does annoy me are the real-world issues I ran into across multiple pieces of production software, developed with multiple teams whose skill levels ranged from intern to senior.
Can you link to a write up? I’d like to read what went wrong.
It’s been at least three years so it’s difficult to do a real write-up. In a lot of ways it was death by a thousand cuts. But some things off the top of my head:

Having to rewrite or copy/paste data structures for different types, given the lack of generics. As I understand it, even Google now has tools that generate go source from “generic” templates. This is absurd.

Defer to free resources (e.g., closing files) is a terrible construct because it requires you to remember to do this. You have lambdas! Use them so that the APIs can automatically free resources for you, like Ruby and Rust. It’s insanely hard to debug these kinds of issues because you run out of file descriptors and now you have to audit every open to ensure matching closes.

Casting to interface{}. The type system is so anemic that you have to resort to switching over this, and now you lose type safety. Combine this with the compiler not caring about exhaustive switch statements, and with interfaces being implemented implicitly, and it's a minefield for bugs.
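A sketch of the pattern being criticized (hypothetical `describe` function): the switch over `interface{}` compiles no matter which cases are missing, so a new type silently falls into `default`.

```go
package main

import "fmt"

// describe dispatches on the dynamic type. The compiler never checks
// that the cases are exhaustive; unhandled types hit default silently.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int %d", x)
	case string:
		return "string " + x
	default:
		return "unknown" // e.g. a float64 lands here with no warning
	}
}

func main() {
	fmt.Println(describe(42), describe("hi"), describe(3.14))
}
```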

I literally had a go maintainer waffle on adding support to calculate SSH key fingerprints because “users can just read the RFC and do it themselves if needed”. This is an indefensible perspective on software development.

Despite “making concurrency easy”, having to implement concurrency patterns by hand for your different use-cases is nuts. I have lots of feelings here, most are summed up by

Tuple returns feel like they were bolted on to the language. If I want to write a function that does nothing more than double the return value of a function that might error (insert your own trivial manipulation here), I have to pull out the boilerplate error-handling stanza when all I want to do is pass errors up the stack.

This is the 5% that I remembered off the top of my head years later. All in all, the design of go as a “simple” language just means that my code has to be more complex.

Interestingly, the one time I introduced someone to Go without really "realizing" it (during a coding interview, got to pick the language I used), his first comment was actually that he liked how explicit the error return was (strconv.Atoi, to be specific). That pretty much sums it all up for me: `if err != nil` seems like annoying boilerplate, but then when you see stuff that's not just doing `return err` inside that conditional, you realize that it can actually be a benefit.
Yes, laser focus on real-world problems like unused variables (don't matter) or unused imports (matter even less).

> Theoretical problem: Someone might mutate a variable intended to be constant.

> Go designers: Then put a comment saying not to do that.

One can only wonder why they even bothered writing a compiler when comments can solve it all.

> Real problem: Exceptions can happen anywhere and often go unchecked.

> Go designers: Then call exceptions "panics" and encourage people not to use them.

> Theoretical problem: Someone might ignore an error return value.

> Go designers: Let paranoid people write linters.

Because that way it's even easier to ignore than exceptions, and that's… good apparently?

Also, they created `append`, where not using the return value just right (with no help from the compiler, but that's OK because comments are where it's at) doesn't just create one bug in your program -- it can create two or more. What relentlessly efficient focus on real-world problems.

> A function which only returns an error can have its result ignored without any warning.

It should perhaps be an error to not assign an error to a variable. Internally, Google has linters that enforce this.
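A sketch of the style such a linter enforces (not an actual compiler feature; the `run` helper is hypothetical): every error is bound to a name, and ignoring one becomes an explicit, greppable `_ =` assignment.

```go
package main

import (
	"fmt"
	"os"
)

// run names every error and either checks it or discards it with an
// explicit `_ =` -- the style a lint rule like this would enforce.
func run() error {
	f, err := os.CreateTemp("", "demo-*")
	if err != nil {
		return err
	}
	defer func() { _ = os.Remove(f.Name()) }() // visibly discarded
	_ = f.Close()                              // deliberate, searchable ignore
	return nil
}

func main() {
	fmt.Println(run() == nil)
}
```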

It would be nice to have “elegance” brought into the picture too. Code is an art as well!
Unless you're building a user-facing frontend, please keep art out of your code. Just as a bridge's internal concrete structure doesn't need decoration, neither does your backend's plumbing. Writing elegant and artful code that is hard to understand and debug does not make you a clever developer. Keep that stuff for the demo-scene or 99-bottles-of-beer. Go is a language for engineering software, not crafting.
I have seen uncharitable interpretations, but this one is one of the worst.
What exactly are you picturing when someone says "elegant" or "artful" code?

To me, elegant code is DRY code. Elegant code is code with useful abstractions where needed, and no abstractions where they just complicate matters. Code that is succinct yet clearly communicates its purpose.

From the sounds of it, you have an entirely different conception of what elegant code looks like.

> Writing elegant and artful code that is hard to understand and debug does not make you a clever developer

The whole purpose, to me, of artfulness in code is to take unartful, hard to understand code and make it simple. What other objective is there?

Honestly if a solution is too hard to understand and debug then it is not elegant. Elegance is turning a complicated solution into an easier-understood one. Clever solutions could be elegant but they could also be shortcuts that are confusing and cause more harm than good. There’s definitely a difference.
> Unless your building a user facing frontend please keep art out of your code.

Honestly, I'd rather art be kept out of the user-facing frontend. (Computer programming, on the other hand, is considered art by some computer scientists.)

(I don't necessarily disagree, but...)

It's probably changed a lot since, but at least back in the 90s demo-scene code was absolutely 100% written for the result alone, even when it should perhaps have been, say, 75% for the sake of reusability. Imagine "decent" 90s game code quality, then dial the quality notch down a bit, since it will only have to work once on a well-defined machine anyway.

I'm long since tuned-out. I do seem to remember Farbrausch making some waves when they started applying the concept of reusability and structure to their work in the early 2000s.

> Writing elegant and artful code that is hard to understand

You do not know the meaning of elegant. At least in context of programming and maths.

You have a depressingly narrow view of what "art" can mean.
This seems to have totally missed the first half of the comment to which you replied: _elegance_ certainly has a place in programming, just as it does in mathematics and many other disciplines. Things that are elegant are often difficult to conceive, but easy to understand being simpler solutions to a problem than something inelegant.
I just read Eric's quite recent update on CSP.

"Translation from Python, which is a dreadful language to try to do concurrency in due to its global interpreter lock, really brought home to me how unobtrusively brilliant the Go implementation of Communicating Sequential Processes is. The primitives are right and the integration with the rest of the language is wonderfully seamless. The Go port of reposurgeon got some very large speedups at an incremental-complexity cost that I found to be astonishingly low. I am impressed both by the power of the CSP part of the design and the extreme simplicity and non-fussiness of the interface it presents. I hope it will become a model for how concurrency is managed in future languages."

> Go community’s practice for grounding language enhancement requests not in it-would-be-nice-to-have abstractions but rather in a description of real-world problems.

jfc, the arrogance of this asshole. Seems like a decent fit for Go though, considering that language’s history of ignoring PL research..

I mean, Go is awesome for containers, and it’s awesome if you have a lot of junior devs and a decent amount of churn.

But the amount of anti-intellectualism by big shots in the community is seriously depressing.

ESR might be a lot of things, but he’s not a junior dev by any standard. And experience reports are pretty consistent across the experience gradient—Go is useful for solving real world problems, and often the very abstract languages fail to do so (often especially those languages beloved by intellectuals). You can challenge the qualifications of the reporters with respect to their own experiences if you like, but that seems like a silly thing to do.

If PLT can’t produce languages that practitioners find useful, then PLT is at fault, not practitioners.

EDIT: Rereading my last paragraph, "PLT is at fault" sounds harsher than I intended it to. Mostly it just sounds like PLT is based on a model of software development practice that doesn't fit well with the real world. The model performs poorly, but PLT supporters like the parent commenter are (implicitly or explicitly) blaming contemporary software development practice for the mismatch.

I never said esr was a junior dev, and he’s obviously not :)

>“Go is useful for solving real world problems”

People repeat this like a mantra (you also hear similar from Rich Hickey's most fervent acolytes in the Clojure community), and I can't for the world understand what it means...

I mean fucking BASIC can solve real world problems... I've spent ten years writing Java and PHP to great success, but I'm still happy to never write in those languages again.

I even adore Elm, despite its annoying lack of type classes, but I respect Evan's goal of avoiding complexity in the language. That argument holds up a lot better than Rob Pike's argument on types:

“ Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark.

To be fair he was probably saying in his own way that he really liked what the STL does for him in C++. For the purpose of argument, though, let's take his claim at face value.

What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.

But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types.

That's the detail that sticks with me.

Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.

My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy. You need to decide what piece goes in what box, every type's parent, whether A inherits from B or B from A. Is a sortable array an array that sorts or a sorter represented by an array? If you believe that types address all design issues you must make that decision.

I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you.”

It’s just your average, obvious complaint about the inflexibility of class hierarchies in OOP, with a slight misdirection at the beginning when he mentions generic types (aka parametric polymorphism) but for some reason that’s an argument against types?! He mentions polymorphic functions, as if they can’t be typed???

I mean I made the same mistake after three semesters of java at uni, but one semester of c/c++/python made me realize there was more to programming and I eventually discovered type theory, which makes Rob Pike’s claims seem odd at best.

For me personally (and thus anecdotally) PLT has been a boon in most aspects, even though I have to deal with imperative or object-oriented languages from time to time. it’s just such a drag...

> I can’t for the world understand what it means...

It means Go performs well on real world projects. People feel productive, the language, tooling, and ecosystem get out of the way. You (and most PLT advocates I've encountered) seem to be evaluating languages on their inputs/features (presumably because you believe axiomatically that certain features--e.g., type systems--have a huge effect on the success or failure of a given software project) while the "useful for real world problems" view is about evaluating languages on their outputs. The latter view is harder to measure objectively, but it accounts for everything (e.g., syntax, type system, tooling, performance, ecosystem, etc) in correct proportion (no axiomatic beliefs).

Many PLT proponents generally seem to struggle with the notion that languages are successful when their model predicts that they shouldn't be. For example, many PLT proponents believe type systems strongly predict the success of a language, yet languages with very sophisticated type systems which are much admired by PLT proponents do poorly in the real world and languages with very flat-footed type systems (e.g., Go) do relatively well.

Either the qualitative data about these languages are wrong (e.g., contrary to the qualitative data, Haskell actually makes for more productive software development on balance than Go), or these PLT proponents' whitebox model is wrong. My money is on the qualitative data.

> You (and most PLT advocates I've encountered) seem to be evaluating languages on their inputs/features (presumably because you believe axiomatically that certain features--e.g., type systems--have a huge effect on the success or failure of a given software project) while the "useful for real world problems" view is about evaluating languages on their outputs.

I don’t, so please keep your assumptions to yourself and don’t put words in my mouth.

> in correct proportion (no axiomatic beliefs).

What is this based on?

> I don’t, so please keep your assumptions to yourself and don’t put words in my mouth.

I'm hardly putting words in your mouth. You were expressing more-or-less exactly this sentiment in your previous post.

> What is this based on?

It follows by definition of output-based or blackbox evaluation. Evaluating the output of a system implies that you are evaluating inputs in proportion to their contribution to the output.

> I'm hardly putting words in your mouth. You were expressing more-or-less exactly this sentiment in your previous post.

Trust me, I wasn’t.

> It follows by definition of output-based or blackbox evaluation. Evaluating the output of a system implies that you are evaluating inputs in proportion to their contribution to the output.

I like this! It’s like pure functions/total programming, only not rigorously defined in the slightest.

It’s not an answer to my question though: HOW do you know that the results of your output/blackbox testing are correct?

> often especially those languages beloved by intellectuals

*self-proclaimed intellectuals

To be fair, it also took me years to realize how programming as taught in academia is out of touch with reality.

> dreadful language to try to do concurrency in due to its global interpreter lock

don't take it too seriously. GIL has its issues and it would be nice to see it gone but "dreadful" is an overstatement. Python wouldn't be so widely used as a back-end language otherwise. Concurrency is not parallelism.

On "it [Go] will become a model for how concurrency is managed in future languages." -- it is not as clear cut as it appears at the first glance: "Notes on structured concurrency, or: Go statement considered harmful"

goroutines (and the go keyword) are the primitive, just like async is the primitive for python. Something like gives Go the same kind of "nursery" concept and can be leveraged almost identically (modulo Go not having the "with" concept, but defer can play the same role)
there is no doubt that a "nursery"-like construction can be implemented in Go, in the same way that any language with "goto" can implement structured loops. The point is in constraining what can be done.

There is a trade off: "goto" is powerful but it is likely to lead to spaghetti code. "nursery" introduces constraints but makes it easier to deal with error handling/cleanup, back pressure issues such as

Dreadful may be a little strong but anytime I've tried to implement something like asyncio for a non trivial piece of code it becomes pretty obtuse. (imo)
I’ve found asyncio to be a very simple model to understand, using the aio libs: aiohttp, aiopg, aiobotocore, etc.

Basically just slap “async” or “await” in front of everything and understand that anytime there is a network connection being accessed, that method will release control of the main thread.

You just have to pay attention to where something might block the thread for any significant amount of time - heavy calculation or lengthy file IO

You can spawn a multitude of async tasks on startup and have super basic “scheduling” by using asyncio.sleep with some jitter.

The only time I have seen the performance limits of a naive asyncio app reached was in an auth app that sat in front of every API request for the whole company, and even then it was an obscure DB connection pool management issue deep in psycopg2.

Surprised no one linked Simple Made Easy[0] yet.

Around 30 min, Rich Hickey describes how the opposite of simple is complex, and mentions the etymology, "to braid together".

"It's bad. Don't do it."


Of course, but it could also be said that we're just composing the scrapers with a common building block, which Hickey says is good.
Jan 06, 2020 · 2 points, 0 comments · submitted by exrook
I disagree. I think C is simple, but not easy[0]. I actually see explicitness as a sign of simplicity. It makes things obvious. Folks always cite undefined behavior, but I can't recall a single instance over the past 12 years of this causing me a day-to-day problem. I'm sure it does bite people though, I just don't relate personally.


Advice someone gave me in a similar situation: coding requires Mise en Place like cooking. Mise en Place is setting everything up and prepping before you start cooking so everything is easy to access.

In coding, this means thinking through how you're going to structure it, have answers for what pieces of code use service calls and what needs to be its own modules. Think about the data and what it looks like: what fields and names does this have? Does it require a set or list collection?

Following up on that train of thought, watch the Simple Made Easy talk and really try to internalize what he means when he talks about design.

> Since each transform needs to perform vastly different image operations, using an enum in this case would’ve forced us to write one massive switch statement to handle each and every one of those operations — which would most likely become somewhat of a nightmare to maintain.

I don't mean to pick on the author, but I've seen this line of reasoning a few times before. It's the same argument that has been used in the past to justify inheritance hierarchies in OOP languages. I used to believe it too. However, I don't think this is actually true. In fact, I'd argue the opposite: switch statements, if used well, are _extremely maintainable_. Even though a switch statement might handle many cases, it does not become more complex [1] by doing so. If we're concerned about the length of the individual cases, we can easily replace each one with a function call. Fundamentally, in the example from the article, we'd like to map a piece of data (the operation with its parameters) to some behavior (actually performing the operation). A switch statement is one of the simplest ways to do that.
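The maintainability argument above can be sketched (in Go rather than the article's Swift, with a hypothetical `Transform` type and toy integer operations standing in for image operations): the switch stays flat because each case is one function call.

```go
package main

import "fmt"

// Transform is the data describing an operation and its parameter.
type Transform struct {
	Op     string
	Amount int
}

// apply maps data to behavior. Each case delegates to a small named
// function, so the switch itself never grows beyond a line per case.
func apply(v int, t Transform) int {
	switch t.Op {
	case "add":
		return add(v, t.Amount)
	case "scale":
		return scale(v, t.Amount)
	default:
		return v // unknown operations are a no-op in this sketch
	}
}

func add(v, n int) int   { return v + n }
func scale(v, n int) int { return v * n }

func main() {
	fmt.Println(apply(10, Transform{Op: "scale", Amount: 3}))
}
```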


What you describe is called the Expression Problem [1] in programming language design and there is no simple formulaic answer on which method is better. I think you have to consider many aspects of your code's current design and possible future evolution when deciding which approach to use. For example: do you expect to have more types of transforms, or more operations/method per type of transform? It also means you can't nitpick a limited tutorial for focusing on one approach vs. the other.

Fortunately Swift (as well as Rust or Kotlin) has an excellent modern toolbox that includes protocol conformance and algebraic data types, so you can use either one.

Keep in mind that Swift protocols avoid many of the pitfalls of the Java/C++ school of OOP design you might have seen before, which can only express "is-a" relationships.


Java and C++ have no issues representing has-a relationships.

The issue is developers not learning how to use their tools.

Agreed on all points. One of the main metrics I use to assess maintainability of code is 'how many places do I need to edit to make a change?' (within the same file or worse, in other files too), 'how easy is it to find those places?' and 'how easy is it to make a change in one of those places but overlook another needed one?' On pretty much all of those counts, a single switch statement will tend to beat an inheritance hierarchy.
Nov 25, 2019 · ISO-morphism on Relentlessly Simplify
> Simple != Easy

> For some, simple would be more like Haskell, while for others bash (until they need to understand old code). Each eye of the beholder can argue either way.

Certainly simple != easy, but I think in the second part there "simple" should be replaced with "easy". Simple is objective, while easy is subjective [1]. Haskell may be easier for some as they've spent more time with it, similarly bash for others. However, their simplicity, i.e. how many concerns are intertwined, how much global context is required to reason about a program, can be more objectively analyzed.

> The article talks about simplifying, though is more about discipline, something many find hard to find motivation and incentives for in this age of instant gratification!

Indeed, it takes discipline to maintain simplicity. Simplicity is hard. Complexity is easy. "If I had more time to write, this letter would be shorter."


Nov 22, 2019 · ds_ on The Danger of “Simplicity”
"Simple is often erroneously mistaken for easy. 'Easy' means 'to be at hand', 'to be approachable'. 'Simple' is the opposite of 'complex' which means 'being intertwined', 'being tied together'" -
Problem: Finely chopping food

Complex and Easy: Stick blender with chopper attachment.

Simple and Hard: Knife and cutting board.


Problem: Making a drawing

Complex and Easy: Computer and printer

Simple and Hard: Paper and pencil


Problem: Sewing lots of clothes (perfect stitches)

Complex and Easy: Sewing machine

Simple and Hard: Thread and needle


Problem: Software

Complex and Easy: Graphical User Interface

Simple and Hard: Command-Line Interface

In all your examples, the complexity is hidden in the underlying technology, which I think makes them less than ideal. Sewing with a sewing machine is usually both less complex and simpler than sewing by hand. If you count the complexity of the hardware and the operating system and compiler, nothing in development is simple.

For me the dichotomy is better illustrated by: I need to create a new class that, with a few exceptions, does exactly what an existing class already does.

The easy way is to copy the existing class and make the small necessary changes in the copy. The simple way would be to refactor and put all the differences in delegates.
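The "simple way" can be sketched in Go with composition (a hypothetical `Greeter` type, purely illustrative): the shared logic exists exactly once, and only the varying behavior is injected as a delegate.

```go
package main

import "fmt"

// Greeter keeps the shared logic in one method; only the formatting
// delegate varies, instead of copy-pasting the whole type for each
// small difference.
type Greeter struct {
	format func(name string) string
}

func (g Greeter) Greet(name string) string {
	return "[greeting] " + g.format(name) // shared code, written once
}

func main() {
	plain := Greeter{format: func(n string) string { return "hi " + n }}
	loud := Greeter{format: func(n string) string { return "HI " + n + "!" }}
	fmt.Println(plain.Greet("ann"))
	fmt.Println(loud.Greet("ann"))
}
```

Changing the shared prefix now means editing one line, not hunting down every copied class.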

> both less complex and simpler

Did you mean "easier"? Because complex and simpler are antonyms, so it seems kind of redundant to use both words.

> the complexity is hidden in the underlying technology

The complexity is there. Maybe not all get involved with it, but it's still there.

> Sewing with a sewing machine is usually [simpler] than sewing by hand

The technology is more complex. The operation is maybe on par, though I would think it's also more complex. I may be biased in that I've hand-stitched many times and I find it super-simple, but I'm still a bit intimidated at the prospect of learning the basic use of a sewing machine. For very basic hand-stitching, you just put the thread through the needle, and the needle through the clothes in some pattern. That's it. For the sewing machine, I guess you have to lead the thread through some parts of the machinery, select some stuff through the knobs, etc. I think there certainly is a need to know a bit on the construction and workings of the sewing machine to be able to fix issues that arise.

> If you count the complexity of the hardware and the operating system and compiler, nothing in development is simple.

Complex and simple are relative terms, after all. If you refer to the last example of CLI vs GUI, they both involve the OS and compiler, etc. so that cancels out and we can refer to one as simpler or more complex than the other just based on the differences. Now, if you compare software development to making a sandwich, then sure, nothing in software development is as simple as making a sandwich.

> The easy way is to copy the existing class and make the small necessary changes in the copy. The simple way would be to refactor and put all the differences in delegates.

I agree to that, and that also aligns with the examples I gave. The complexity is mainly in how the thing is constructed. Duplicated code adds complexity to how the program is constructed. When you want to make a change to the common code, you have to make the change twice, maybe with a few differences. That makes development of the program also more complex.

It's the same as a sewing machine, or a stick blender with chopper attachment. Their construction and maybe operation is more complex than their counterparts.

> Problem: Software

> Complex and Easy: Graphical User Interface

> Simple and Hard: Command-Line Interface

GUIs are easy for the specific things the programmers made easy, and potentially impossible for everything else. The moment you want something the developers didn't put in the GUI, there's no recourse other than writing your own tool.

Command lines are harder to begin with, but modern command lines give you a gentler ramp up to writing your own tools.

Same is true with the other examples, I believe. Simpler tools tend to be the more versatile ones.
I am yet to appreciate Rich Hickey's now famous "Simple Made Easy". While I agree with his points, I don't understand the significance of it. Simpler is easier than complex, right? Even the title says "simple-made-easy". What is the fuss about emphasizing "Simple is erroneously mistaken for easy"? They are not the same, but they are intimately related. Or is this an emphasis on relative vs absolute -- that relatively simple can still be relatively not easy?

I don't think I misunderstood Rich Hickey, and I don't think I disagree. But I don't understand why people quote the opening sentence and find it so significant. To me, that is just clickbait.

> Simpler is easier than complex, right?

Well, no. Complexity has an obvious price but simplicity does too. You have to work for simplicity, even fight for it. Think of code; it just somehow becomes more complex. You have to work to pare it back to what's needed.

I can't think ATM of better examples (and you deserve some), but no, simplicity does not come easy.

A nice phrase I came across: "elegance is refusal".

Until you find a good example, I challenge your understanding :)

Similar to my response to another comment, I suspect there is a switching of subjects. It starts with a problem, and the subject is a solution to the problem. A simpler solution is easier to understand and manage. A more complex one is more difficult. Is there a counterexample?

Try not to switch out the subject here. For example, one may propose to use a library to solve the problem by calling `library.solve`. And then one may argue that the simplicity of the code is actually more difficult to manage, as one needs to troubleshoot all the details/bugs/interfaces of the library. We should recognize that the library itself is not the same as the solution. The solution includes calling the library and its details/bugs/interfaces/packaging/updating/synchronizing, etc. And these elements intertwine to make the complexity. So the solution using the library is not necessarily simple. It is difficult exactly because of the complexity.

As you can tell, I am essentially expressing the same opinion as Rich Hickey, which is `simple-made-easy`. And it is very far from the clickbait opening statement of "simple is often erroneously mistaken for easy". A more accurate sentence would probably be "a partial view is often erroneously labeled simple".

EDIT: To clarify, I am not saying a solution using a library is more complex. It depends. With a library, the solution is layered and delegated. The entire solution is more complex and more difficult to understand -- if one is to understand every byte of it. However, the layering means not all complexity needs to be understood for practical reasons. So with proper layering and a good judgement of practicality, the part of the complexity that you practically need to manage may well be simpler (and easier) with a library, or not. It depends.

I don't deny your right to challenge, but right now I can't give an example. I've just gone through months of my posts looking for one particular post that might clarify, but I can't find it. Not being able to search your own comments is frustrating. I'll have a muse overnight.


Found it (thanks google): Simplicity was staring me in the face, it took weeks to find it.

( FWIW Algolia can search comments: )
Simple is easier than complex the same way that exercise is easier than chronic obesity. If you have the discipline to do the obvious that's great, but it takes willpower to create or do the simple thing. Oftentimes it's easier or more expedient to do the lazier easy thing in the moment, but you pay for it down the road. For example: I notice I'm doing the same calculation twice on the front and back end of my application. The "simple" thing to do would typically be to extract that logic to one place so that you don't end up having to modify it in two/five/twelve places down the road. But I'm already halfway through writing it, and the simplification will involve some non-trivial refactoring, so I take the easy route and write the same logic twice. It's easy for now, but will be complex when I have to change it down the road.
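As a hedged sketch of that front-end/back-end duplication (the pricing rule and all names here are invented for illustration):

```python
# Hypothetical sketch of the "same calculation in two places" problem:
# instead of re-implementing the discount rule on the front end and the
# back end, extract it once.
def discounted_total(subtotal: float, rate: float = 0.10) -> float:
    """Single source of truth for the pricing rule."""
    return round(subtotal * (1 - rate), 2)

# Both "ends" now call the same function, so a future change to the
# rule happens in one place instead of two/five/twelve.
def backend_invoice(subtotal):
    return {"total": discounted_total(subtotal)}

def frontend_preview(subtotal):
    return f"Total: {discounted_total(subtotal)}"
```

The non-trivial part is rarely the extracted function itself; it's restructuring both callers to share it, which is why the duplicated version feels easier in the moment.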
Modules are "simpler" than vectors because they have fewer axioms, but they are also much harder to understand. For example, not all modules have a basis, which can make them much harder to work with.

For background on the math, see:

Interesting analogy, but it's a little off.

The main reason modules are interesting is not as a generalisation of vector spaces, but because they are helpful in studying rings. Kernels of ring homomorphisms are ideals, which in general are not subrings, but they are modules - and of course every ring is a module over itself. So to study a ring R it pays off to instead study R-modules, since working with them is... you guessed it! Simpler.

Good luck explaining "simpler" with modules and vectors :).

Simple is defined as not intertwined. To understand an axiom is to understand how it intertwines with other axioms to prove certain results. So fewer axioms necessarily result in more intertwining, i.e., complexity. I think here we are switching subjects: from the axioms themselves to the results we want to prove. If we focus on the simplicity of proving the results, the simplicity of the axioms is irrelevant.

The way I see it, when there's already a lot of complexity inherent to the domain (eg, software design), it's nearly always much easier to add to the complexity than to find a way to reduce it.
Your answer makes sense and is illuminating.

It is not easy to keep it simple.

The problem here is not that "simple is not easy"; it is rather "picking the partial and sacrificing/neglecting the whole". Since one is only part of a team and a part of the whole design/develop/use cycle, the "whole" problem is not (necessarily) "my" problem, therefore it is easy to pick a simple and easy solution from "my" perspective. The "my" and "whole" can also be swapped with "now" and "future". "Now" is here but "future" is uncertain.

Good points!


That's where "local complexity : global simplicity" tradeoffs come into play; well-defined boundaries (coherent interfaces) are key to striking the right balance.


Yeah, YAGNI ("You Ain't Gonna Need It") and STTCPW ("Simplest Thing That Could Possibly Work") are good rules of thumb.

Finally, as for "not my problem"?

IMHO (and IME, 21 yrs in the industry), that's a dangerously myopic stance. Those who make the effort to expand their perspective beyond the scope of their immediate tasks and responsibilities are those whose skills, powers, value and influence show commensurate growth. By all means, be a good team player and do your (current) job to the best of your abilities, which includes efficiency and ergonomics and awareness of available shortcuts. But if you do this for too long, be aware of the compounding effects, not only on the larger system's technical debt, but also on the limits this may be placing on your career.

My takeaway was that if we conflate the two, we tend to use familiar (easy) tools to solve our problems, but that learning a new tool (hard) could result in a simpler solution.

E.g., passing something to a legacy program in a language I'm unfamiliar with from a program I wrote in a familiar language is easier than implementing my solution in the legacy language, but it's not simpler.

The 'relative vs absolute' seems like a heuristic to distinguish the two. Writing a solution in a different language is easier to me, but I can tell on an absolute level that there are more failure points to that approach.

Thanks. I think I understand the background much better now. When we think easy, we always take the "my" and "now" perspective. When we think simple, we often take the wholesome point of view. Thus the need for differentiation.
I might be wrong, but I think the word you meant by "wholesome" is actually "holistic"
You are right, I just grabbed the words by the sound of it.

Better words are subjective and objective: easy is a subjective word, while simple is an objective one.

Nice explanation. Python is a great example of this IMHO. It is a real struggle to get the Python programmers on my team to use any other language than Python.

Why? Because it's easy for them. But the solutions they create with it are highly suboptimal. They could be far more robust and expressed much more concisely and directly in other languages with more powerful type systems and better support for eg: functional concepts.

But they actually really think that because Python is easy for them, that it's "simple". It's not: it's incredibly complex.

Haha, I was thinking of that as I wrote it. My first language was C++ back in the day, then I dabbled in various languages for a while, and finally really dove into Python because there was a project I couldn't figure out how to write any other way. If I had to work with one of the languages I learned earlier, my first instinct would now be to write the solution in Python and pass it to the legacy program. Perfect example of what the speaker is warning of.
If you haven't seen this talk; watching it will make you a 10x better programmer. This is what I take for my definition of complex and it applies broadly in a very practical manner.
>watching it will make you a 10x better programmer.

That sounds wrong. Can we drop this rhetoric?

It's obviously hyperbole.
It needs to be rephrased into this:

"Watching this video will make you into a developer who is respected 10x more by their peers."

Would it?
No it's hyperbole.

However, if you go from writing spaghetti code to something more structured (i.e. loosely coupled, however that is expressed in your language) then your teammates will hate you less.

Well, no, but it will make you 10x richer.
Probably not, but it's fun to think about.

If respect is measured by an integer, going from level 2 to 20 is great. But if you have no respect, then gaining 10 times as much still leaves you at none.

If you are disrespected DON'T WATCH THE VIDEO unless you want to be disrespected more by a factor of 10!

What rhetoric? Are you confusing this with "the 10x programmer" meme?

Claims of becoming a 10x better programmer aren't claims about making one a 10x programmer. The former is about relative self-improvement and motivationally hyperbolic; the latter is about relative comparison to others, is often used negatively to belittle, and is detrimentally hyperbolic.

I would defensively be more hyperbolic and use a different number, just because 10x is tainted by stupid ideas in programming. But your intent was pretty clear to anyone paying attention... that's just a high bar sometimes.
> motivationally hyperbolic

It's such a ridiculously high number that it ceases to be motivational.

Also: try out Clojure (... the programming language created by Rich Hickey based on this principle).
Was 100x for me. My boss unfortunately did not agree with me.
I've measured between 2x - 3.5x for every 12 minutes of a Rich Hickey talk. What's even more staggering is this continues even for repeated viewings.
The speaker is Clojure creator Rich Hickey, but the talk is about a mental model for thinking about complexity.

Inherent complexity involves tradeoffs.

Incidental complexity you can fix for free.

"And because we can only juggle so many balls, you have to make a decision. How many of those balls do you want to be incidental complexity and how many do you want to be problem complexity?"

The article is about the former. I bet the latter dominates day-to-day line-of-business coding.

Highly recommend the talk, as other have said.

Simplicity is often a matter of perspective, a function of a certain perception of a complex subject and the set of expectations that go with this perception. There is no absolute in analysis and in modelling synthetic propositions from the atoms used by the particular analysis.

(E.g., we may analyse and model an action in terms of verb-noun or of noun-verb, with major differences in what may be perceived as "simple" in the respective model.)

> Simplicity is often a matter of perspective

Complexity was formally defined by Kolmogorov, even in terms of Turing machines. Hence, simplicity is also objectively defined.

Referring to the above example of verb-noun vs noun-verb grammar: take for example the infinitive verb form. With the former (verb-noun) it's just the verb devoid of any context, simplicity in its purest, which is also why and how it's listed in a lexicon. Looking at this from the noun-verb perspective, you have to construct a hypothetical minimal viable object, which will also be – as you want to keep things simple – the object every other object inherits from, the greatest common denominator of any objects that may appear in your system. By this, you arrive at the most crucial architectural questions of your system and its reach and purpose. While it's still about simple things, neither the task nor the definitions derived from the process will be simple at all. Nor is there a universally accepted simple answer, as the plurality of object-oriented approaches may testify to. The question is on an entirely different scale and level for the two approaches. On the other hand, for a verb-noun approach, something similar may appear for anything involving relations, which are already well defined in an object-oriented approach. And, as you've arrived at these simple fundamentals of simplicity in your system, what may be simple or not in your system will depend on the implicit contracts included in these definitions and how well they stand the test of time and varying use and purpose.
Later in the talk, he draws a distinction between inherent complexity (the focus of the article) and incidental complexity (which you can fix without tradeoffs). Tradeoffs can be critically important, but the latter kind of complexity probably dominates my day-to-day life. I find this oddly encouraging, in a free-lunch sort of way.

"And because we can only juggle so many balls, you have to make a decision. How many of those balls do you want to be incidental complexity and how many do you want to be problem complexity?"

Watch the talk.

On a related note, the late Patrick Winston strongly states in his MIT AI Course that simple is not the same as trivial. Simplicity is powerful.

Simple points may sound trivial and obvious, but simple things can add up to make something magnificent.

I wouldn't say simple is the opposite of complex, though? Especially when talking about software systems or other systems in general. What I am thinking is that some complex systems can be made of very simple components.

The best example is our complex brain being made of simpler components working together. Maybe the opposite of complex is chaotic? I don't know...

> maybe the opposite of complex is chaotic?

Cynefin would agree!

I imagine that a software system that is made of simple components can still be complex. So I'd still go for simple vs complex
A very big object can be made of lots of small objects, but that doesn't mean big isn't the opposite of small.
Complex systems can indeed be made of simple components; however, complexity is a measure of interconnectedness. The key concept is that we can only hold a finite amount of complexity in our heads at any one time, and so if we can minimise that we can be more efficient and effective.

The analogy is a lego castle vs a wool castle. A lego brick is very simple and contained, and from these you can build wonderful structures; in addition, if you wish to change out a portion it is easy to do, because changing one part of the system (i.e. implementation) doesn't affect the rest so long as the contract between components is maintained.

Contrasting: should you pull on a thread in a wool castle it will affect other parts of the castle. A lot of software is like this, which makes it very hard to reason about.
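The lego-brick contract can be sketched with a structural interface. A minimal illustration with invented names (`Storage`, `App`), not a prescription:

```python
# A sketch of the lego-castle idea: components talk only through a small
# contract, so one "brick" can be swapped without pulling on the rest.
from typing import Protocol

class Storage(Protocol):          # the contract between components
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class MemoryStorage:              # one interchangeable brick
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class App:
    def __init__(self, storage: Storage):
        self.storage = storage    # App depends on the contract, not a brick

    def remember(self, name):
        self.storage.save("name", name)
        return self.storage.load("name")

app = App(MemoryStorage())        # swap in a FileStorage, etc., freely
```

As long as the `Storage` contract holds, replacing the implementation doesn't tug on `App` at all; the wool-castle version would have `App` reaching into `_data` directly.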

And the Lego analogy works particularly nicely considering just how much effort, precision and design work needs to go into making the blocks simple [0]. This is a nice analogy for how keeping software components simple and making them interface cleanly is a difficult task.


"Interconnectedness" is also a measure of resistance to hierarchical decomposition (or factoring ax+bx -> (a+b)x); irreducible complexity.

One technique is redefining the problem, to smaller or bigger:

Work on only part of a problem, a subset, leaving something out. E.g. git's data model does not represent renames as first-class constructs, enabling it to be disproportionately simpler.

Expand the problem, a superset, to integrate lower or higher level or associated parts that aren't usually included. Previously scattered commonalities may then appear, enabling decomposition.

I think it's important to note that 'simple' can be used as an epithet.
The key takeaway is that you should strive to be a simple person, not an easy person.
I had a math teacher in primary school who used to shout with an exaggerated accent, "simple is not the same as easy!" She really wanted to drill the idea into our heads that just because you know exactly how to do something, doesn't mean that it will be quick or easy to accomplish.

Like, for a schoolchild, long division. The rules are simple, but given big enough numbers you'll probably mess up at least once. And then the same thing turns out to be true with algebra, geometry, derivation/integration, and on. It's not a bad mantra.

> I had a math teacher in primary school who used to shout with an exaggerated accent, "simple is not the same as easy!"

I can imagine no more poetic description of the experience of reading Wolfram's A New Kind of Science.

Can you elaborate?
"It is straightforward to show that..." means that you could probably do it with your current knowledge, but it will take 6 dense pages, four false starts and about a week of focused work.
If you were Feynman you'd even call it "elementary"

'You' being personified here, rather than the general you.

Straightforward tends to suggest we don't have to have a bunch of meetings about it, because the right person either has the knowledge or we know precisely where to get it.

It depends on the context. I had the math professor lecturing her students in mind.
Like the joke about writing math textbooks.

Forgotten the proof? Not a problem. The proof of this is elementary and is left as an exercise for the reader.

I used to joke that when a solution was known to exist the problem was "trivial"; when a solution was not known to exist it was "nontrivial". A problem that's bloody well impossible is "decidedly nontrivial".
I’d suggest you watch Rich’s talk “Simple made easy”. [1]

It’s one of his main points that something like a language being “hard to approach” can be overcome by spending a little effort to learn it (as opposed to sticking with something like Kotlin just because its easy to pick up because it’s familiar). The benefits of learning the unfamiliar (in his case, he’s speaking specifically about Clojure) being that it allows you to write code that is much simpler to reason about.

I have no particular beef with Kotlin (or most any languages... right tool for the job and all), but I have lately become infatuated with Clojure and many of Rich’s viewpoints.


Eh, I've used languages that were "hard to approach", one of my favorites (Erlang) is one of those (I use quotations especially because learning all of Erlang's syntax takes about a day).

This is a misapplication of the presentation really. It speaks to a level above selecting a language and is really about the design of systems.

Picking up Kotlin or Clojure is not "harder to approach" by virtue of what's provided in this context; it's harder because Clojure syntax uses parentheses.

Like that's literally it.

Clojure with the same exact constructs represented with more C-like syntax would, at the level the presentation speaks to, allow the same level of simplicity.

I think a lot of developers feel "It looks funny" is not a fair critique of a useful tool, but just look at Erlang vs Elixir. I love Erlang, much more than I like Elixir, but Elixir gained mindshare in large part because it's Ruby-like.

Cognitive overhead is lower working with a language that at least "looks like" what you're used to, and more developers know C-like languages; so a language like Kotlin is "easier to approach" and thus necessarily "easier" in the way the presentation talks about.

> Clojure with the same exact constructs represented with more C-like syntax would, at the level the presentation speaks to, allow the same level of simplicity.

I don't think this is true. I think it would be easier for "most people," but definitely not simpler. Easy meaning close at hand, simple meaning one strand, one braid, independent, less context necessary. Clojure syntax is the AST of the program, right there in front of you, in literal notation. There are fewer special cases, fewer moving parts interacting. C syntax requires spinning up a virtual machine in your mind and executing multiple statements. C is easier because we've already spent the time and effort to familiarize ourselves with it, but it has more complexity. Compare a 954 line ANTLR grammar for C [1] with a 261 line Clojure grammar [2].
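To make "the syntax is the AST" concrete, here's a toy s-expression reader (a sketch, not Clojure's actual reader): the nested lists it returns are simply the shape of the source text, with no separate parse tree to construct.

```python
# A toy illustration of "the syntax is the AST": a short reader turns
# Lisp-style source directly into nested Python lists.
def read_sexpr(src: str):
    # Pad parens with spaces so split() tokenizes everything at once.
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        token = tokens[pos]
        if token == "(":
            form, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = parse(pos)
                form.append(node)
            return form, pos + 1          # skip the closing paren
        # Atoms: integers become ints, everything else stays a symbol string.
        return (int(token) if token.lstrip("-").isdigit() else token), pos + 1

    form, _ = parse(0)
    return form

# The nested structure of the source IS the program's tree:
ast = read_sexpr("(+ 1 (* 2 3))")   # → ["+", 1, ["*", 2, 3]]
```

There are no statements, precedence levels, or special forms to reconstruct; fewer parsing rules is one concrete sense in which the parenthesized syntax is "one braid".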

> Cognitive overhead is lower working with a language that at least "looks like", what you're used to, and more developers know C-like languages, thus a language like Kotlin is "easier to approach" but necessarily "easier" in the way the presentation talks about.

I would agree, using Rich's definitions of simple and easy, that Kotlin is easier for the majority of developers than Clojure. This follows immediately from the definition of easy.

> This is a misapplication of the presentation really. It speaks to a level above selecting a language and is really about the design of systems.

I would recommend Rich Hickey's talk "The Language of the System" [3]. The programming language(s) used are part(s) of the system and have an effect on its design. I don't think this is a misapplication of the "Simple made Easy" presentation, I think it hits the nail on the head.

[1] [2] [3]

I feel like this comment is throwing semantics in a blender and pouring it out into the shape you want... but I guess that's the thing about arguing semantics, it usually devolves to that...

So I guess I'll just keep my recommendation to Kotlin and you can keep your recommendation to Clojure

> I feel like this comment is throwing semantics in a blender and pouring it out into the shape you want

I don't think it is, though. But it is clear that you are arguing with absolute confidence about a thing you have never given a heartfelt attempt to try first. You are debating like a 13th-century mathematician arguing that Roman numerals are elegant and more comfortable to understand, that people have been using them for centuries, and that there's no need for this Indo-Arabic numeral nonsense that Leonardo, son of Bonacci, so passionately keeps talking about.

I don't want to sound patronizing (I guess I already am, though not intentionally), but let me give you some advice - never trust your inner skepticism; fight it, dig for the answer to why you are so skeptical. Progress is pushed forward by individuals who continuously challenge their beliefs. And from what I can see, you are not a mere consumer of progress; you too want to be at the front line where it is being made.


Thank you for the laugh, I imagined you typing that last paragraph, reading it, and thinking you had said something pithy and being proud to share that hackneyed screed with the world.

Up until this point I haven't even shared my opinions of Clojure (which I've used) in absolute terms. Did you realize this is all in relation to OP's description of "dorky languages"? I was speaking to the PoV of someone who probably doesn't use non-C-like languages, not myself. Erlang, my pet language, is plenty dorky; you seem to have confused "dorky" with "bad" or lacking utility.

But alas, let me just be straightforward: Clojure is bad.

A masturbatory aid for bored developers burning perfectly good time and money for their own overinflated sense of accomplishment and their quirky resumes.

Imagine being a language that literally lists half of its rationale as "our customers won't let us run what we want, so we stuck what we actually wanted to make on this JVM thing that they all know real well".

Clojure codebases devolve into contrived spaghetti so blindingly fast, but by god will the people writing it get off on how dense the code they're writing is while the descent into madness marches on, and boy will they enjoy how they're really sticking it to those stupid Java guys with no types... while 90% of the code they interop with was clearly designed to be used in a typed setting.

And you can count down on a M-F calendar view how many days before the codebase will feature a different DSL for each programmer who's touched it which allows them to define complex business rules as a new sub-language instead of icky "normal" shudder code. Java did only a few things right, and no macros was one, imagine thinking undoing that is the right choice.

Clojure devs love to hold up the few high-profile successes and a bunch of no-name success stories that are small enough to probably have been served just as well by anything, right down to writing out Java bytecode in pico.

The funny thing is the most common successful cases actually went and tacked on a freaking type system!

Have they heard of F#? And if they're so allergic to types, good god, why are you on the JVM trying to interop with JVM code? If you're not trying to interop with JVM code, why Clojure? Why not Elixir or Erlang, which kick Clojure's ass at the other half of the rationale it always gets: concurrency and immutability.

Actually, don't answer that, we already know. Because the JVM contains Java, and Java = business, and you're not going to get to jerk off at work with an unproductive language if it doesn't have something a business type can latch onto! You don't want to admit "we want to use this language with a much smaller hiring pool, much less mindshare, unnecessary barriers to interop with one of the largest ecosystems in tech, which is very prone to creating unmaintainable nightmares in the long term by its very nature"! You want to express it as "we want to use Java but with parentheses, can we huh can we pls pls k thnx".

Clojure is a garbage language that always gets defended with "you just don't get it". What a joke.

The distinction between Simple and Easy

Simple Made Easy by Rich Hickey

Sep 30, 2019 · 1 points, 0 comments · submitted by madsmtm
Sep 21, 2019 · 2 points, 0 comments · submitted by yarapavan
Sep 16, 2019 · slifin on Why Go and Not Rust?
Rich Hickey describes simple as unbraided; a class, for example, is identity, state and schema all braided together

And easy as close by and accessible i.e. npm i latest-framework might be easy but not simple

isn't the same idea exactly covered by the term "(de)coupled"?
It can include decoupling, but no it's not synonymous.
This presentation had an outsize influence on my professional development as a programmer. If I've watched it once (and I have), I've watched it a dozen times.

edit: The "Limits" slide (go to 12:30 in the vid) is one that I really internalized early on. And looking at it again years later, the principles from that slide absolutely guide my app development:

- We can only hope to make reliable those things we can understand

- We can only consider a few things at a time

- Intertwined things must be considered together

- Complexity undermines understanding

For understanding complexity watch this video

Another way to think about the theory of programming logic is through a general understanding of language logic, for which I recommend

I am going through that book myself right now. It came to me highly recommended. I don’t have a computer science degree.

Aug 02, 2019 · frou_dh on Experiment, Simplify, Ship
Here's the original location that has synced slides:
Jun 26, 2019 · valw on Simplicity Made Easy
If people wonder, this is NOT the same notion of 'simplicity' at all than in the classic 'Simple Made Easy' talk:

I think a more relevant title for this post would be: "any paradigm made straightforward in Perl 6".

Step 1) Buy a lot of paper. Too many ideas, concepts, and problems in programming are really really big and we have no idea how to effectively tackle them. Being able to take notes, write down your thoughts, create diagrams and pictures, etc is invaluable in being able to learn. Being able to go back and checkout your thoughts in the past helps a lot.

Step 2) You'll want to check out these videos and pass them along as you feel they are appropriate: John Cleese on creativity:

Philip Wadler on the beginnings of computer science:

Rich Hickey's Simple Made Easy:

Types and why you should care:

80-20 rule and software:

Jonathan Blow complains about software:

I've got a list of videos and other links that is much longer than this. Start paying attention and building your own list. Pass on the links as they become relevant to things your kids encounter.

Step 3) I spent a decade learning effectively every programming language (at some point new languages just become a set of language features that you haven't seen batched together before, but don't otherwise add anything new). You can take it from me, all the programming languages suck. The good news is, though, that you can find a language that clicks well with the way you think about things and approach problem solving. The language that works for you might not work for your kids. Here's a list to try iterating through: Some Dynamic Scripting (Lua, Python, JavaScript, etc); Some Lisp (Common Lisp, Racket, Clojure); C; Some Stack (Forth, Factor); Some Array (R, J, APL); Some Down To Earth Functional (OCaml, ReasonML, F#); Some Academic Functional (Idris, Haskell, F*); C#; Go; Rust

Step 4) Listen to everyone, but remember that software development is on pretty tenuous ground right now. We've been building bridges for thousands of years, but the math for CS has only been around for about 100 years and we've only been doing programming and software development for decades at most. Everyone who is successful will have some good ideas, but there will be an endless list of edge cases where their ideas are worthless at best. Help your kids take the ideas that work for them and not get hung up on ideas that cause them to get lost and frustrated.

> the difference of "simple" and "easy"

Don't know if you were already referring to Rich Hickey's talk on this, but if you weren't, it might appeal to you. Simple Made Easy:

"Okay, the other critical thing about simple, as we've just described it, right, is if something is interleaved or not, that's sort of an objective thing. You can probably go and look and see. I don't see any connections. I don't see anywhere where this twist was something else, so simple is actually an objective notion. That's also very important in deciding the difference between simple and easy."

Jan 21, 2019 · 1 points, 0 comments · submitted by peterkelly
One of the best engineering talks is about this notion that simple!=easy :

This is surprisingly often not understood, even by people I've shown the video to. And I am not sure why. But I do think it's necessary in our field to start understanding this much more deeply, especially for senior engineers.

Don't mistake easy with simple.

JavaScript is a simple language that can be made extremely complicated via "simple" tooling. You can open the node_modules folder and see how sausages are made. :-)

C++ is dealing with essential complexities, there is no silver bullet:

WordPress is _easy_. It most definitely isn't simple.

Highly recommend watching:

The issue seems to be that they are not typically watched on youtube. For example, the "simple made easy" linked above is a low-quality pirate youtube copy, the proper place to watch it is here:

I interpreted this as advocating for using a model with the lowest-level abstraction that you think will work. If you start with the simplest abstraction possible, you produce a simpler and more maintainable system. You're also in a better position to incorporate further abstraction later as your understanding of the problem space evolves.

This seems like a good opportunity to recommend Rich Hickey's talk "Simple Made Easy":

This is great, but completely lost on the crowd if what Simple means isn't understood.

One of the best clarifications of what it means to be Simple, to put it out there, is [1]; but the key point: Simple != Easy.

Simple means minimal coupling, high-cohesion etc etc.

Yet IME many developers do not understand the distinction and mistakenly believe that easy is the same as simple, and are willing to couple the hell out of the world under some false notion of "simplicity"...


That talk transformed the way I think about software development. I highly recommend watching it.
In a way, simplicity is the end result of reducing the complex and correct solution without affecting its correctness.

As in math, you come up with the "simple" solution of 0.5 only after you've realized that the "complex" solution is, for example, "sin(pi/4) * cos(pi/4)". There might be no other way to discover the simple solution.
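The commenter's arithmetic checks out, and is quick to verify: sin θ · cos θ = ½ sin 2θ, which at θ = π/4 gives ½. A small sketch:

```python
import math

# Verify the "complex" form collapses to the "simple" answer:
# sin(pi/4) * cos(pi/4) = (sqrt(2)/2) * (sqrt(2)/2) = 1/2
x = math.sin(math.pi / 4) * math.cos(math.pi / 4)
print(round(x, 10))  # 0.5
```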

I'd like to propose the "YAML-NOrway Law."

"Anyone who uses YAML long enough will eventually get burned when attempting to abbreviate Norway."


  NI: Nicaragua
  NL: Netherlands
  NO: Norway # boom!

`NO` is parsed as a boolean because, under the YAML 1.1 spec, there are 22 ways to write "true" or "false."[1] For that example, you have to wrap "NO" in quotes to get the expected result.

This, along with many of the design decisions in YAML strike me as a simple vs. easy[2] tradeoff, where the authors opted for "easy," at the expense of simplicity. I (and I assume others) mostly use YAML for configuration. I need my config files to be dead simple, explicit, and predictable. Easy can take a back seat.
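The trap can be sketched without any YAML library. Below is a hypothetical Python reduction of YAML 1.1's implicit boolean resolution (the pattern covers exactly the 22 spellings the spec treats as booleans; real parsers also resolve ints, floats, nulls, etc.):

```python
import re

# Hypothetical sketch of YAML 1.1 implicit boolean resolution.
# These are the 22 spellings the 1.1 spec treats as booleans.
BOOL_RE = re.compile(
    r"^(?:y|Y|yes|Yes|YES|n|N|no|No|NO"
    r"|true|True|TRUE|false|False|FALSE"
    r"|on|On|ON|off|Off|OFF)$"
)

TRUE_WORDS = {"y", "yes", "true", "on"}

def resolve_scalar(s):
    """Resolve an unquoted scalar the way a YAML 1.1 parser would."""
    if BOOL_RE.match(s):
        return s.lower() in TRUE_WORDS
    return s  # everything else stays a string (ints/floats omitted here)

print(resolve_scalar("NO"))      # False -- Norway's country code vanishes
print(resolve_scalar("Norway"))  # 'Norway' -- the full word doesn't match
print(resolve_scalar("NL"))      # 'NL' -- no accidental match
```

Quoting the scalar (`"NO"`) skips implicit resolution entirely in a real parser, which is why the quoting fix works.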

[1]: [2]:

This is a very good example of the problems of YAML, and it's one of those things that has really perplexed me about its design. (I suppose it's a sign of the times when YAML was designed.)

It's[1] just so blatantly unnecessary to support any file encoding other than UTF-8, supporting "extensible data types" which sometimes end up being attack vectors into a language runtime's serialization mechanism, autodetecting the types of values... the list goes on and on. Aside from the ergonomic issues of reading/writing YAML files, it's also absurdly complex to support all of YAML's features... which are used in <1% of YAML files.

A well-designed replacement for certain uses might be Dhall, but I'm not holding my breath for that to gain any widespread acceptance.

[1] Present tense. Things looked massively different at the time, so it's pretty unfair to second-guess the designers of YAML.

This was fixed in YAML 1.2 though? So, e.g., in Python you'd just use ruamel.yaml instead of pyyaml.

That doesn't help you, of course, when using a multitude of existing systems whose yaml parsers are based on 1.1...

I've been bitten a couple of times by strings that are made of digits and start with 0. In that case the value gets interpreted as a number and the leading zeroes are dropped. I quickly learned to quote all my strings.

I'd still love for a better means to resolve ambiguities like this, but I've found always quoting to be a fairly reliable approach.

The implicit typing rules (ie, unquoted values) should have been application dependent. We debated this when we got started and I thought there was no "right" answer. Alas, Ingy was correct and I was wrong.
I appreciate your humility and professionalism in a discussion thread that holds a lot of criticism; suffice it to say, I should have practiced a bit more humility and a bit less "Monday morning quarterbacking" in my original post. And I should have read your comment on YAML's history. To right the record: you got _so_ much right with YAML, and it's unfair for me to cherry-pick this example 20 years later. Sincere apologies...

As the saying goes, "there are only two kinds of languages: the ones people complain about and the ones nobody uses." YAML, like any language, isn't perfect, but it has withstood the test of time and is used by software around the world; many have found it incredibly useful. Sincere thanks for your contribution and work.

As someone who doesn't really use YAML much, your comment provides a good introduction to the kinds of things one needs to know before choosing formats in the future.
May 23, 2018 · 1 point, 0 comments · submitted by tosh
We Really Don't Know How to Compute: Gerry Sussman -

Zebras All the Way Down: Bryan Cantrill -

Jonathan Blow on Deep Work: Jonathan Blow -

Simple Made Easy: Rich Hickey -

Effective Programs - 10 Years of Clojure: Rich Hickey -

The Last Thing D Needs: Scott Meyers -

The first time I watched Simple Made Easy, I didn't like it, even though I'd written quite a few situated programs in my day. A year later, I'd learned Clojure and re-watched it, and it all made so much sense. It's now one of my favorite tech talks.
(via Deep Work)

How to Depth Jam:

Hopefully helpful: I found the "We Really Don't Know How to Compute" Gerry Sussman talk with better resolution and a camera on the board
Gerry Sussman's talk is awesome and reflects the current state of computer programming very well. It's a shame. The worst part: there are people around us with a lot of pride ABOUT NOT KNOWING HOW TO COMPUTE BUT STILL DOING [INEFFICIENT] THINGS. (sorry for the caps, goodbye)
Rich Hickey's Greatest Hits:
More Rich Hickey:
Rich Hickey is great. I remember his Simplicity Matters keynote at Rails Conf 2012. So clear and insightful.
Being able to explain a complex topic to diverse audiences is not easy to do. Rich does it very well.
Thanks. Forgot about that.
One of the big benefits of clojure being dynamic is that everything is data (e.g. a map, set, vector or list).

This is what allows reuse.

- The vast core library of functions that manipulate those data structures can be used for everything in your program, cos it's all data.

- Most Clojure libraries take and/or return data, reducing the need for clumsy adapters, or, even worse, not being able to get at the data you need because the library author was really enthusiastic about encapsulating everything they thought was of no use to consumers.

- You don't have a person class, you have a map with a first name and last name. Now the function that turns first + last name into full name can be reused for any other map with the same keys. (A rather spurious example, but a real one would take a large codebase and an essay to describe)
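That last point has a rough Python analogue (plain dicts standing in for Clojure maps; the names are illustrative):

```python
# Plain maps instead of a Person class: any function keyed on the
# same fields works on any map that happens to carry them.
def full_name(m):
    return f"{m['first-name']} {m['last-name']}"

person = {"first-name": "Ada", "last-name": "Lovelace"}
author = {"first-name": "Rich", "last-name": "Hickey", "created": "Clojure"}

print(full_name(person))  # Ada Lovelace
print(full_name(author))  # Rich Hickey -- reuse with no shared class
```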

I can only recommend watching some of Rich Hickey's talks, particularly these ones, they're not entirely about types, but they express the above ideas much better than I can:

- Simple made easy

- Effective programs

- Are we there yet? (this one is more about OOP, but unless you're using something like haskell, idris etc its relevant for your type system of choice)

The data types in Clojure can be very easily (and better) expressed in (say) Haskell. For example:
The main issue is that Haskell is not a data-oriented language by default, which means it's no fun to push it to be one. For example, I also have to use Java in my job; I use persistent (functional) data structures all the time, but Java is not built for it, and it's not fun. (Although definitely more fun than using Java's mutable structures, ewww)

Also, I personally find that to be too much overhead and ceremony in return for some type checking at compile time, as opposed to spec checking at runtime.

> The main issue is that Haskell is not a data-oriented language by default

What do you mean by "data-oriented language"?

In the grandparent comment's link (showing Clojure data in Haskell): I'm pretty sure that is not how people code in Haskell; it's not how the libraries are usually designed, etc. Using only data is definitely possible in Haskell, but it's not encouraged by default; the core abstractions are used for concretions of information.

In the same way you can do immutable and functional stuff in java, it's not going to mesh with the rest of the ecosystem or language around you.

> One of the big benefits of clojure being dynamic is that everything is data (e.g. a map, set, vector or list).

What about this can't be done with types? Simple parametric-polymorphism gets you pretty far. Row types allow you to handle "maps as records" in a type-safe way. The rest is just having support for some kind of ad-hoc polymorphism so that you can re-use your functions on that small set of types (type classes, ML-style functors, interfaces, protocols, etc.).

Again, I would refer you to the Rich Hickey talks; I'm not very eloquent on this. I think it's about the manual overhead of constructing your hierarchy of types, plus the cognitive overhead of doing all the fancy things in your brackets.

I'm familiar with the advantages of type systems (my progression was Java -> Haskell -> Idris) but I found my personal productivity (even in larger systems built in a team) was best in clojure. I didn't feel that the guarantees given to me by the type system were worth the mental overhead, a lot of people feel differently (you amongst them I'm guessing :p)

As a closing point, if I were to ever build something that truly had to be Robust in a "someone will die if this goes even slightly wrong" way, I would reach straight for Idris and probably something like TLA+. However most of my development revolves around larger distributed systems communicating over wires, still resilient but in a different way. Mainly I use clojure.spec in core business logic and at the edges of my programs, for generative testing and ensuring that the data flowing through the system is sensible.

This looks like the perfect example to illustrate the point that Rich Hickey tries to make in "Simple made easy" [1].

This huge call stack has been designed to make your life as a developer easy but the price you pay is an enormous amount of complexity.

I've been working a lot with a similar Java web stack and I feel how painful this complexity is. What is worse, is that I think that a lot of this complexity is incidental. There are libraries and frameworks designed to make some things easier, but in the process end up creating a lot of problems that then requires another library or framework to overcome that problem which also has other problems and so on... The result is a huge stack like this.

One concrete example of this is Hibernate. A tool designed to make it easier (apparently) to work with databases, but in the end create so many problems that the medicine ends up being much worse than the disease.

Resolving an HTTP request that returns the result of a database call should not be this complicated! HTTP is simple! Why do we need so many calls to so many things? I'm not advocating for a flat stack of course, but certainly a stack this deep is a clear sign that something is wrong.

I very much agree with Rich Hickey, we need to stop thinking about how to make things easier and start thinking how to make them simpler.


>One concrete example of this is Hibernate. A tool designed to make it easier (apparently) to work with databases, but in the end create so many problems that the medicine ends up being much worse than the disease.


At our startup we had the choice to let 20 programmers write custom individual SQL statements for 100s of CRUD operations, or create entities and let Hibernate generate them for us.

We used hibernate and it has worked out well.

I can't imagine how it would have been to debug 100s of bespoke SQL queries and associated object-mapping code, each written in its developer's unique style, after a few years.

That would have been fun.

Thanks for sharing your experience. I have worked on both kinds of projects, both very big and heavily based on database access: one using Hibernate and one using plain SQL. We've had considerably more problems with the added complexity of Hibernate.

Hibernate does not save you from writing queries. You are still writing queries, just in a language different from SQL (e.g. JPQL). It's an abstraction layer. The problem is that this abstraction is very leaky, so if you really want to write performant code with Hibernate you do need to understand how SQL and your database work. And once you really understand how they work, you end up realizing that the abstraction is kinda pointless, because SQL is already a really fine abstraction over your database.

And if you need to scale, for example working with a replication setup with multiple db servers and having to deal with eventual consistency, then Hibernate really complicates things.

I think Hibernate is a good example of something that makes things easier at the beginning. At the cost of enormous complexity and difficulty in the long term.
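The "SQL is already a fine abstraction" point can be illustrated with nothing but the standard library (a sketch, not a claim about Hibernate's internals; Python's sqlite3 standing in for JDBC):

```python
import sqlite3

# A parameterized query plus a row factory gives map-like results
# with no entity classes or mapping layer in between.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
conn.execute("INSERT INTO person (first, last) VALUES (?, ?)", ("Rich", "Hickey"))

row = conn.execute(
    "SELECT first, last FROM person WHERE last = ?", ("Hickey",)
).fetchone()
print(dict(row))  # {'first': 'Rich', 'last': 'Hickey'}
```

The query language is the abstraction; there is nothing underneath it for the application to leak into.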

I think ORMs help the most when you have a lot of entities and you need to enable CRUD operations on them.

By all means you can use a combination of raw SQL and an ORM.

Let's be clear: this discussion applies to all ORMs, not just Hibernate. And yes, any team that adopts an ORM hammer and attempts to use it for all database access is going to have a bad time. Use ORMs for CRUD; for anything else, use SQL. Hibernate actually makes this really easy.

Gavin King:

Well in fairness, we used to say it over and over again until we were blue in the face back when I was working on Hibernate. I even remember a number of times getting called into a client site where basically my only role was to give the team permission to use SQL for a problem that was clearly unsuited to ORM. To me it's just a no-brainer that if ORM isn't helping for some problem, then use something else. [1]


What I can't fully get my head around is how defensive people get about things like Hibernate. I've tried it out, and it doesn't do much for me, but it doesn't really get in my way, either; I can work just as fast with Hibernate as I can with JDBC. I think part of the reason for that, though, is that I can work at either level; I can work out in my head what Hibernate is doing and work with it rather than against it.

Somebody higher up retorted with, "why not just write your own web server?" Indeed, why not? I've done it for relatively simple REST-API type cases; as long as you don't need a lot of the more complex HTTP cases like continuation messages, caching, digest authentication and redirects, why not? It's nice to have everything under your control, and it's almost definitely faster than any third-party solution that's going to have been written to deal with dozens of corner cases that aren't relevant to what you're doing.
> This huge call stack has been designed to make your life as a developer easy but the price you pay is an enormous amount of complexity.

This particular problem could be solved by just having a good filtering UI.

You don't have to analyze the stack in its raw text form.

That being said, I agree that complexity in the Java world is often much higher than it needs to be, and sometimes the tradeoffs are not worth it.

> Resolving an HTTP request that returns the result of a database call should not be this complicated! HTTP is simple! Why do we need so many calls to so many things? I'm not advocating for a flat stack of course, but certainly a stack this deep is a clear sign that something is wrong.

HTTP is pretty simple; executing SQL queries against a database is simple-ish (close those connections!). Authentication, authorization, marshalling, unmarshalling, transaction boundaries, and so on are not so simple, especially not when all taken together.

People bemoan Java, as you are doing here, but the reality is that other languages and frameworks, any that attempt to address the same problems and concerns, have the same level of complexity. Java has the advantage of kick-ass tooling, debugging, and monitoring infrastructure, a lot of it in the JVM itself (VisualVM).

Just to clarify, I am not criticizing the Java language. I'm criticising the use of excessive layered frameworks that increase complexity.

I like Java. It's simple and performant and has excellent tooling. I just don't like that sometimes I see a lot of incidental complexity in its ecosystem.

Rick Hickey's Simple Made Easy permanently made me a better programmer:

Also his talks on transducers in clojure changed the way I think about functional programming

Oh I forgot about that! I actually have some notes:
I agree. If not “killing” at least “severely” slowing us down. This Rich Hickey talk deserves a link here and it’s right on point:
Jan 30, 2018 · anonfunction on Write dumb code
It's not by the author but Rich Hickey (creator of clojure) has a great talk titled "Simple Made Easy"[1] which I always recommend.

Furthermore I have been using Golang and would say it is very simple language that anyone could pick up and become productive with quickly. One of Go's proverbs is "Clear is better than clever."[2] At the expense of a little verbosity there is much less ambiguity in the intent of code.

1. 2.

> Rich Hickey has this thing where he talks about "simple versus easy". Both of them sound good but for him, only "simple" is good whereas "easy" is bad.

I don't think I've ever heard anyone mischaracterize his talk [1] this badly.

The claim is actually that simplicity is a fundamental property of software, whereas ease of use is often dominated by the familiarity a user has with a particular set of tools.


Agreed, but I have seen a lot of people come away from the talk with an unfortunate disdain for ease. Ironically, in disentangling "simple" and "easy", Rich created a lot of confusion about the value of ease.
Dec 25, 2017 · vlaaad on Perceptions of Code Quality
You mistake simplicity for performance. Simplicity is about lack of interleaving of abstractions, it's about one concept, one task, one role, single responsibility etc. I recommend Rich Hickey's talk "Simple made easy" for that matter:
Performance is faster execution and lower resource consumption. Perhaps this isn’t so much a factor anymore with low level languages, but in high level languages with several layers of abstractions and giant frameworks there are huge opportunities for writing faster code.
I agree that Redux is a horrible pain.

A week ago I started searching for a simple yet powerful solution for the state problem in React. After trying 3 libraries (Baobab, Cerebral and react-cursor) and discarding without trying a bunch more (Derivable, partial-lenses, Cycle and others), I ended up writing the app in Elm (still doing it).

Federal seems like a better Redux, but still too complected[0]. Ideally, I would want something like Baobab (a central store with cursors/lenses and event emitters), but with immutable data structures (not Object.freeze) and without the bugs. Since this ideal will never come (and I won't write it myself) I'll probably use Federal for my next app that could not be written in Elm.


Hmm interesting - I'll check out Baobab
"- My favourite thing: everyone tells you how easy and simple Rx is: it's just observables. In his book on RxJava the creator of RxJava says that it took him several months to understand Rx. While being tutored by one of the creators of ReactiveX. It's "easy and simple" in the same sense as "Haskell is easy and simple" or "rocket science is simple and easy" or <any branch of human knowledge> is simple and easy once you know and understand it."

The problem here is that "simple" and "easy" are two completely different concepts. "Simple" is absolute, "easy" is relative.

Rx is neither simple nor easy for non-trivial projects. It's an incredibly leaky abstraction, and you end up having to understand the internals to do non-trivial things. Understanding when something runs on what thread (and in RxJava, knowing when to use subscribe/subscribeOn/observeOn was much harder than it claimed to be), how to correctly handle errors, retry failed operations, and apply backpressure without dropping data: these things essentially force you (in my experience, at least, but I'm no Rx expert, just used it for a few months) to dig into the internals to understand how they work, i.e. not simple.

But because of its lack of simplicity, it was also incredibly hard to use, to make it do what you want. So it was neither simple nor was it easy.

(And yes, I buy into the differences between simple and easy)

If you haven't watched it before, I'd recommend "Simple Made Easy" by Rich Hickey. [0]

The reason I say that is because you say "conceptually simple" as if that's a bad thing. Maybe we have to agree to disagree, but in choosing a framework I would much, much rather go for the one that is conceptually simple (at the cost of some extra verbosity in certain cases) over one that is conceptually complex but covers up that complexity with a terse-but-incomplete API.

You're not going to understand the benefits of the Vue vs. React choice by looking at idealized code samples, which is all your comment is showing. You'll only know it once you get into the edge cases. For example for list iteration in Vue...

- do you change that example to omit the last item?

- do you change that example to render a different element for every other item?

- do you render something different if there are no items?

That's what makes the JSX approach simple. Once you understand that you can use any Javascript expression you want, you don't need to learn further. All of those questions can be guesstimated by a newcomer.

But with Vue you have to learn each and every "directive" and "modifier", and consult the docs again each time you forget them.


20 minutes vs an afternoon is probably not a great gauge for making technology choices.

I highly recommend watching Rich Hickey's "Simple Made Easy" [1] talk which covers how the right ("simple") choice may not be the "easiest" (convenient, most familiar) one.


I agree. Hence "that and".
I don't mean to be an arse, but if you agree with my point, then maybe you can see why I disagree that your "that and" is a valid strike against React/in favour of Vue.
Simplicity makes picking up the unfamiliar easier. You can't accurately deduce from time alone that the time to pick up Vue was based on familiarity with similar libraries.
> Simplicity makes picking up the unfamiliar easier.

The talk I referenced talks about how the opposite is often true. Tools that result in objectively simpler systems can come with an initially steeper learning curve.

> You can't accurately deduce from time alone that the time to pick up Vue was based on familiarity with similar libraries.

True, I was really just suggesting questioning instincts when evaluating tools based on the initial 'time to get started'.

> "The talk I referenced talks about how the opposite is often true. Tools that result in objectively simpler systems can come with a initially steeper learning curve."

I'm aware of Rich Hickey and Clojure. In my experience with Lisps, although they are superficially simple, they make you do more abstraction work than is necessary in more commonly used high-level imperative languages. Lisp seems to strongly encourage building a high number of helper functions, which is fine if you're highly opinionated about how a job should be done, and less so if you just want to import some battle-tested libraries and write something that gets the job done. I suspect this is where the learning curve with Clojure really comes in, in that it's more closely related to learning how to architect an application in a Lisp-friendly way than it is to getting familiar with the language.

Totally agree actually, I love all Rich's talks and agree with almost every word of Simple Made Easy but I don't necessarily agree with the conclusion he takes it to (Clojure).

I've heard it suggested somewhere that possibly the leap is in believing that 'a simple thing + a simple thing = a simple thing'.

I submitted this link before I had watched the whole thing. As someone who has only dabbled in Clojure I think there are a lot of interesting ideas in there but found the type-system bashing pretty off-putting.

I am now watching his "Simple Made Easy" talk [1] after I have heard it recommended on a few functional programming related podcasts. Again really interesting stuff but I encountered another cheap shot at typed functional programming ("You can't use monads for that! Hurr hurr hurr").

Given how well received these talks seem to be by people who enjoy programming with advanced type systems, I would have really expected a more balanced discussion and some acknowledgement of the trade-offs between dynamically and statically typed functional programming.


I really like Rich's views and find Clojure very interesting as well. That said, as a Java shop with Javascript frontend, nowadays the bulk of complexity in our code base seems to accumulate in the frontend due to mixed skill levels of the team and lack of opinionated structure in the language. This leads to some rather messy code that even skilled devs are afraid to touch because of lack of feedback from the IDE that some refactor is working without loose ends.

The same problem with the same people just doesn't happen in the backend and I link that to static typing and IDE maturity. We have started to adopt Typescript and are seeing improvements already.

We just have to live with the fact not all developers working in the code are mature enough to avoid language and code organization pitfalls. Refactoring should be mostly a safe endeavor, even if only structurally.

This is the main reason I wouldn't suggest Clojure for our team.

I agree that there is definitely added discipline needed to succeed well in large dynamically-typed projects. I also think that learning to build large projects in such languages is like running your marathon training high in the mountains, so when you get back to sea-level your body feels the joy. You are forced to write very clean code in Clojure if you want to easily maintain it later. That's a great skill that translates to any other language where less discipline might still get you far.
I think it's something else, as well. Rich even mentions it in his talk: languages like Java (which I'm reading to mean "statically typed") are great at mechanical tasks. Front end programming is mostly filled with mechanical tasks: scaffold this structure/layout. Wire up these events. Make this thing blue/bold/etc. Change the state when these events happen. It's fairly predictable in structure in line-of-business apps, at least once you're following an intelligent structure, e.g. the Elm architecture.

UI/Front end dev, IMO, can gain quite a bit from static typing. I'm a huge fan of clojurescript, it's what I reach for whenever I want to work on something, but I'm super excited about ReasonML for the future of my team; we struggle with our JavaScript code base right now due to the lack of imposed structure and feedback for our weaker developers.

I love Clojure and I think it makes sense in a lot of domains; most of my back end development is "take this data, transform it according to some nebulous business rules, and poop it out to some other place," which Clojure is amazing for. It's great for applications that don't require a lot of "wiring", and require a lot of "flow". UI programming is, for the most part, wiring things up. It's not that Clojure/Script is not up to the task (I think e.g. re-frame, and the stuff being done with Fulcro, is amazing) but I definitely see the benefits of static typing more in that domain.

And like Rich said, if you're doing UI it will usually completely dominate the problem space you're working in. So pick the right tool for the job. I'm not convinced TypeScript is the way exactly, but like I said, ReasonML and Elm are super promising.

> but found the type-system bashing pretty off-putting.

Why do you think it is type system bashing?

He is justifying why he didn't add types to Clojure. In his experience they add more complexity than they are worth.

The reason he talks about it at all is there are a lot of static typing enthusiasts who talk about static typing being a game changer.

In my experience static typing is a ±2-3% productivity influencer. You get a bit better IDE experience and refactoring is easier. On the other hand, I've also found I need to refactor my C# code far more often than my Clojure code.

Gotta take the good with the bad. Tons of knowledge and wisdom to be gained from the FP folks but sometimes they do have the cheap shots and the bias of the community.

I.e., it's easy to hate and joke about things like SQL databases and JSON when you live in your own utopian fairy land where everything is Datomic and EDN.

I believe the quote you're referencing about monads is "this is meant to lull you into believing everything I say is true, because I can't use monads for that" (referring to an animation of a stick figure juggling)
The new Conj talk is certainly an interesting look at one man's (or one community's) view of static typing. However, as much as I admire Rich, some of the points he made don't resonate with me, particularly the one about how compile-time checks that catch minor bugs in syntax are not a particularly useful feature of static typing. I certainly disagree. As someone who writes Clojure all day long right now for a living, I am constantly dealing with runtime errors that are due to minor typos in my code that I have to track down, and a lot of that time would be saved by having a compiler tell me "on line 23 you looked for the :foo keyword in a map but you meant to type :foobar, so that's why it was nil" and many other similar woes.

I love Clojure but I really miss static type checks.

The other item in his talk I do not agree with: he says (slightly paraphrasing) "in static typing, you can pattern match on something 500 times, but if you add a case, you have to update those 500 matches to handle the new case, when really they don't care about this new case; only the new logic needs to consume this special case; it's better for the producer to speak directly to the consumer." Well, in languages like OCaml, Swift, and Haskell, it is a feature that pattern matches must be exhaustive. This prevents bugs. In most cases, I'd expect that if I add a case to an enum, the chances are good my existing logic in pattern matches should know about that. Maybe not all, but a lot of them will. It's nice to have the compiler guide you to those places.

I certainly like how fast I can write programs in Clojure, and I like the minimal amount of code that makes refactoring and rewriting fairly straightforward since there is not a lot of time investment in the existing number of lines, and I like the incredible elegance of Clojure's approach to functional programming.

But I do miss having much greater compiler assistance with typos, mis-typed argument order to functions, mis-typed keyword names, etc. It would really save a lot of time.
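The silent-nil failure mode described in this comment has a close Python analogue (illustrative only, not Clojure semantics exactly):

```python
# A typo'd key doesn't fail where the mistake is made; it just
# yields None (Clojure: nil) that surfaces somewhere downstream.
user = {"foobar": 42}

value = user.get("foo")  # typo: meant "foobar"
print(value)             # None -- no error at the site of the typo

# Direct indexing at least fails fast at the point of lookup:
try:
    value = user["foo"]
except KeyError as exc:
    print("missing key:", exc)
```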

Still reading your comment, but after the first paragraph, I would kindly suggest looking at clojure.spec. It's helped me immensely in similar problems.
I suppose you'd have to use spec/assert for every instance of destructuring or "get" or "get-in" to avoid common mistakes. That's a lot of asserts everywhere.
I don't understand this comment.

I spec types, and then I spec functions that need that type. But not all the function, just the heavy use ones.

I usually don't instrument the spec'd functions unless I'm actively debugging.


after having a minute to think on it, do you mean to catch a typo in the use of get, get-in, etc? I haven't tried that.

I suppose you could wrap get, get-in with a nil check or something.

> I suppose you could wrap get, get-in with a nil check or something.

Indeed, I suppose the solution would be to write wrappers around common getters that allow you to pass a spec to the query and have them automatically assert that everything is what you expect.
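That wrapper idea might look like this in Python terms (get_checked is a hypothetical name; clojure.spec's actual API differs):

```python
# Hypothetical "checked getter": a lookup that validates its result,
# so a typo'd key or wrong shape fails loudly at the call site.
def get_checked(m, key, pred):
    value = m.get(key)
    if not pred(value):
        raise ValueError(f"value for {key!r} failed check: {value!r}")
    return value

config = {"port": 8080}
print(get_checked(config, "port", lambda v: isinstance(v, int)))  # 8080
# get_checked(config, "prot", lambda v: isinstance(v, int))  # ValueError
```

The cost, as the thread notes, is that every lookup site has to opt in, which is exactly the "a lot of asserts everywhere" objection.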

> As someone who writes Clojure all day long right now for a living, I am constantly dealing with runtime errors that are due to minor typos in my code that I have to track down, and this time would be greatly saved by having a compiler tell me "on line 23 you looked for :foo keyword in a map but you meant to type :foobar, so that's why that was nil" and many other similar woes.

i wonder if this is because it really takes a quantum leap in one's development style between <insert your previous programming language> and clojure/<insert your favourite lisp>? as long as your environment allows for effortless evaluation of code you're writing, you'd be getting this feedback no slower than the edit/save/compile/retry cycle.

If your typos are triggered by UI events, then you often won't see these problems until interacting with your UI (I work mainly in Clojurescript). Further, these typos may not get noticed at all for a long time if a code path is never taken. Of course, that's what unit tests are for. But writing tests takes time too, and I'm not sure the trade-off is worth it: the time spent on those tests could go toward writing in a more statically-typed language that would catch some of these things without needing tests at all. (Besides, writing tests for UI stuff is pretty hard).

I am griping, really, because I cannot stress enough how nice it feels most of the time to write Clojurescript. But in complex projects, there is no doubt that a lot of time gets spent on things that wouldn't need to be spent if the language had even a very basic type system to back up the syntax for some things.

which ui library are you using? not claiming to be an expert, but i always found it easier to test programs when logic is completely decoupled from event flow. but yeah, UI can be pita.

also isn't clojure.spec useful for describing and asserting the shape of data taken and returned by functions?

Clojure.spec is useful for a lot of things, but unless you are adding spec/assert to nearly every destructuring or "get" or "get-in", it's still easy to get nils running through your data transformations because you mistyped a keyword or something.

Also there is not a good answer for asserting the value of a function passed to another function; the return values of functions can be spec'd but they are not included in an assert test.

> the return values of functions can be spec'd but they are not included in an assert test

I agree this is a shortcoming, but that is why this library exists:

My team recently settled on TypeScript instead of ClojureScript, as TS is the safer bet, more familiar, more consistent with the existing project's tooling, etc. But man... I've taken a handful of files and written them in both TS and CLJS. CLJS is just so much shorter and elegant. I sometimes think we made the wrong decision.
ClojureScript is great with Reagent or re-frame... If you write Angular use TypeScript. If you use React, ClojureScript! It's a match made in heaven.
Yeah. I've built toy apps with re-frame, and really liked the way the code looked. But my team is pretty Jr other than me, and I wasn't sure if ClojureScript would work well for us as a team. VS Code is our editor of choice, and it is just really a good environment when paired with TypeScript.

Also, my experience with Rails really has me fearful of doing any serious, big work, in a dynamic language.

Just a quick comment. I think the differences go beyond values, to what you might call world views or paradigms (in the Kuhnian sense). Take, for example, the value of "simplicity". This is extremely overloaded. I doubt the speaker and I would agree on what is simple. I'm not familiar with a lot of the examples they used, but I'm going to guess they would consider C simple and something like Haskell as "not simple". I think that C is familiar but not simple (too much undefined behavior) and Haskell is simple but not familiar; more generally people conflate simple with familiarity. There is a nice Rich Hickey talk "Simple Made Easy" ( on this theme, or a blog post I wrote "Simple is Not Easy" (

Similarly in the discussion of promises. I have written a lot of code using promises---though not in Javascript---and it's fine to debug. Javascript just makes a mess of it because it can't decide on what world view it wants. Is it trying to become a modern somewhat-functional language, which is the direction Ecmascript, Flow, Typescript, etc. are going? Or it is a primarily imperative language? If you go for the former you have very different expectations about how the language operates than the latter. It's notable that most functional programmers (which is my day job) don't make much use of debuggers and the kind of tools the speaker talks about building are not generally valued that much.

Now I don't want to give the impression that I think the speaker's world view is wrong. It's just different. Notable though is that we would have fundamental disagreements in how we view the world. It's not that we value, say, simplicity differently. We disagree on how simple is even defined.

It's like LEGOs. The blocks are sturdy and the rules for how blocks fit together are simple. But building a mini Taj Mahal is still not easy. You do have to know basic physics & structural engineering, but if you do, the task is very doable. Even fun.

Unlike building a mini Taj Mahal out of match sticks, Elmer's glue, rope, and playing cards. Even a professional structural engineer would have a hard time with that. The rules of gluing a match stick and a playing card together are already complex (not to mention the uncertainty about where the rope fits in), and that makes it that much harder to produce a final product.

This video goes into "simple" vs. "easy"

I think he goes by Pete ;)

Edit: While I'm here, might as well link to the Simple Made Easy talk by Rich Hickey. The other thing (aside from inventing Clojure) that gave him prophet-like status in the community.

Rich Hickey is one of the rare breed of thinkers in the programming world whose ideas have great relevance even if you're not interested in the language he invented.
Jun 15, 2017 · frou_dh on Go and Simplicity Debt
After seeing Rich Hickey's excellent material on the matter^, I can no longer read anything talking about Simplicity and Complexity in programming without suspecting the author of being fast and loose with what those terms specifically mean.

As it stands, they are a recipe for different camps and sub-camps of programmers to talk past each other endlessly.


I tend to agree with you, but I think Cheney has done an excellent job here. Did you even read TFA?
Yes - evidently not as impressed. It's against the site guidelines to ask that btw.

For example, I cannot accept that having no means to define immutable structures makes for an overall "simpler" programming model. What could be simpler than allowing information to be information?

Whether having an additional concept makes Go more burdensome to learn and implement is another matter, and is on a different axis to Simplicity/Complexity (again, using Hickey's excellent deconstruction of simple/complex vs. easy/hard).

Not that you don't have a point, but I actually prefer fast and loose with most terms. Ironically, I find it leads to simpler conversations. :)

It can lead to some misunderstandings, but I think those are usually given more voice than they are worth.

Also ironically given everything I just wrote, I found that an odd mark for a footnote. I instinctively look up when I see the caret. Usually for a superscript, but not seeing one my eye kept going.

> I actually prefer fast and loose with most terms. Ironically, I find it leads to simpler conversations.

You mean simplistic conversations right?

The problem with being fast and loose with terminology is that it lacks precision; and with lack of precision comes ambiguity and misinterpretation, which defeats the whole point of good communication.

Sorta. I'm reminded of the point Feynman made about keeping every explanation as "layman" as possible. His point was basically not to hide behind jargon and highly specific terms when trying to explain something.

So, if communication is hinged on highly specific meanings of words, the odds go way up that someone will not actually hear what you think you are saying.

Instead, keep conversations high level and do not rely on the specific meanings. It requires more thought from the listeners, in some ways, but it actually relies on less pre-existing knowledge from the listeners.

It is tempting to think you have narrowed your audience down to non laymen. This is often an incorrect assumption, though.

And in writing, this can go out the window completely. There is a place for highly specific and very precise language. It is usually best alongside the non-specific language.

Simplicity is very easy to objectively measure. Write down a formal semantics for the programming language in question, and count how many pages you used.

But, of course, nobody will actually do this, because it would expose the inherent complexity of designs advertised as simple. Many people's feelings would be hurt in the process.

As long as the result is shorter than 1500 pages (afair), your language is simpler than C++.
And the C++ specification isn't even a formal one.
Why don't you do it?
Because I have no time to study languages I dislike, and the one that I do like (Standard ML) already has a formal semantics and a type safety proof.
I would argue that is a good measure of the simplicity of the language itself, but not a measure of the simplicity of using the language. By that measure, Malbolge is a simpler language than C++ by a factor of ~1000. However, it is still much simpler to write code in C++ than in Malbolge.
I said absolutely nothing about ease of use.
For many simplicity is defined by ease of use though.
Ease of use is subjective. It depends on people's goals, skills and even tastes.
May 05, 2017 · mambodog on Build Yourself a Redux
Sounds like you're confusing 'simple' with 'easy'. Rich Hickey does a good job of contrasting the two in Simple Made Easy[0].

The essential part of Redux is only 44 lines of simple code [1]. You can understand everything that it is doing. That is simple. It doesn't mean that it's going to be a great experience to work with (you might want to add some abstraction on top to make it also 'easy'), but it is definitely simple.



> Rust doesn't compile that way -- you can't compile individual modules at once, only the entire crate.

I think we have a terminology mixup. I was using 'module' in the win32 LoadModule() sense: a shared dynamically loaded library (ie. a .DLL in windows or .SO in linux.) I'm not sure how Rust crates (or other compilation units) map to those - my guess would be that a given crate will be compiled into (in win32 terms) a .exe .lib or .dll

I /think/ the Rust equivalent of the case I'm describing would be that you have a struct that's part of the public API of a crate, and it's being used across multiple crates in a large project where you don't want to fully recompile the world in order to test your changes.

> Of course you may be in a situation where you can't rely on the debuginfo (stripped binary or something?), in which case this will be annoying. But it's really a similar situation as you have with inlining when you don't have debuginfo.

In my C++ experience there end up being plenty of cases where it's really useful to be able to inspect raw memory (ie. hex dump, with no debugger or without enough context for the debugger to help you) and figure out what was going on. Obviously Rust is designed to dramatically reduce the frequency of that kind of debugging, but to me this still feels more like a simple-vs-easy trade off [1] than a strict win.

> The presence of ADTs in Rust mean that the layout of many types isn't immediately obvious without debuginfo anyway.

Pardon my Rust ignorance, but is this scenario significantly different from C++ templates? The layout of a (judiciously) templated C++ class may not be "immediately obvious" but in practice it's often still very straightforward to infer.


> I'm not sure how Rust crates (or other compilation units) map to those - my guess would be that a given crate will be compiled into (in win32 terms) a .exe .lib or .dll

You're correct.

> but to me this still feels more like a simple-vs-easy trade off [1] than a strict win.

If you're meaning the easy side is stopping people having to reorder fields themselves, it's more than that: generics plus C++-style monomorphisation/specialisation mean there are cases when it's impossible for the definition of the type to choose the right order. For instance: given struct CountedPair<A,B> { x: u32, a: A, b: B }, all three of CountedPair<u64, u64>, CountedPair<u64, u8> and CountedPair<u16, u8> need different orders.
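To see the reordering concretely, here is a sketch (my own, reusing the CountedPair shape from above) comparing the default repr(Rust) layout with #[repr(C)], which must keep declaration order:

```rust
use std::mem::size_of;

// Declaration order as in the comment above; the compiler is free to
// reorder these fields per instantiation under the default repr(Rust).
#[allow(dead_code)]
struct CountedPair<A, B> { x: u32, a: A, b: B }

// Same fields, but #[repr(C)] pins them in declaration order.
#[allow(dead_code)]
#[repr(C)]
struct CountedPairC<A, B> { x: u32, a: A, b: B }

fn main() {
    // repr(C): u32 + 4 pad + u64 + u8 + 7 pad = 24 bytes, guaranteed.
    assert_eq!(size_of::<CountedPairC<u64, u8>>(), 24);

    // repr(Rust) may reorder (u64, u32, u8, 3 pad = 16 on current rustc),
    // which is why no single declaration order can be right for all of
    // CountedPair<u64, u64>, CountedPair<u64, u8>, and CountedPair<u16, u8>.
    assert!(size_of::<CountedPair<u64, u8>>() < 24);
    println!("ok");
}
```

The exact repr(Rust) sizes are implementation-defined; only the repr(C) layout is guaranteed by the language.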

> I think we have a terminology mixup.

Not really -- my core point was that C++ compilation units are usually smaller than Rust's.

Most C++ codebases I've dealt with will be of the kind where there's a single stage where all the cpp files get compiled one by one. Not a step by step process where one "module" gets compiled followed by its dependencies.

For these codebases, you have a huge win if you can touch a header file and only cause a small set of things to be recompiled. For Rust codebases, it's already a large compilation unit, so you're usually already paying that cost (and with incremental compilation the compiler can reduce that cost smartly, so you get a sweet spot where you're not compiling too much but are not missing anything either).

But yes, being able to skip compilation of downstream crates would be nice.

(You're right that a crate is compiled into a .exe or .so or whatever)

> Pardon my Rust ignorance, but is this scenario significantly different from C++ templates? The layout of a (judiciously) templated C++ class may not be "immediately obvious" but in practice it's often still very straightforward to infer.

ADTs are tagged unions. There's a tag, but it can sometimes be optimized out and hidden away elsewhere.

You can mentally unravel templates to figure them out. Enums are a whole new kind of layout that you need to understand.

There are two specific cases here where the layout is not obvious.

The first is the null-pointer optimization (I think this is the official name but I swear I question myself every time I mention it), in which we use knowledge that an inner struct contains a reference to avoid enum discriminants. that is, Option<i32> will have an extra field up front saying if it's None or Some, but Option<&i32> will just encode None as the null pointer because references can't be null. This also optimizes something like Result<&i32, ()>. The net result is that a lot of stuff that looks expensive is basically free. There has been discussion of extending this to use multiple pointers so that we can hit more complicated enums like Option<Option<(&i32, &i32)>>, but this has thus far not happened.
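The null-pointer (niche) optimization described above can be observed directly with std::mem::size_of; a minimal sketch:

```rust
use std::mem::size_of;

fn main() {
    // Option<i32> needs an explicit discriminant alongside the payload,
    // so it is strictly larger than a bare i32.
    assert!(size_of::<Option<i32>>() > size_of::<i32>());

    // Option<&i32> is free: references can never be null, so None is
    // encoded as the null pointer and no extra tag field is needed.
    assert_eq!(size_of::<Option<&i32>>(), size_of::<&i32>());

    println!("ok");
}
```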

The second is enums themselves. The discriminant algorithm is not obvious. If you want a discriminant of a specific size, you can pick it with a repr. But otherwise it's implementation defined.

And there is one third thing we have discussed doing but haven't yet. If you have a bunch of enums nested inside each other, having multiple discriminants is a waste. There is no reason the compiler can't just collapse them down into 1 in a lot (but not all) cases.

For anyone who wants to know the specific algorithm for all of this, it's now all in one place: src/librustc/ty/

Beautiful insight

I wish I could take credit for this one! I learned the distinction from a (rather famous) Rich Hickey talk. [0]


Mar 21, 2017 · smt88 on Ask HN: How do I code faster
I don't think about "less code" when I'm writing code. You write something (ideally) once, and you read it many times. It's very inefficient to optimize for code-writing when the most expensive activities are learning, re-learning, and maintaining code. If your code is twice as long but easier to understand, you should just make it twice as long.

As far as more code reuse, the tools I mentioned don't affect that. A good rule of thumb is not to write the same code twice. If you write it a third time, move it into a reusable function. I actually rarely write the same code even twice.

So yes, most of the savings come from 1) not having to debug and 2) not doing maintenance until I want the code's behavior to change. With great static analysis and a type system, you might spend 5x more time writing before you run your code the first time, but it always just works when you do run it the first time. It's amazing.

This is a famous talk by Rich Hickey that will discuss some of these issues much better than I can:

(Video on the left)

Thank you for taking the time to respond and the link.
Feb 18, 2017 · espeed on Reasonable Person Principle
Yes, fallacies of definition are one of the primary reasons for misunderstanding, e.g. we both are using the same word, we both have an idea of what the word means and/or are using it in a specific way; however, we both think the word means something different, and we both assume the other person is using the word in the way we are. Piles of disagreements have been built on this one simple fallacy.

That's one of the reasons why I like how Rich Hickey begins all of his talks with precise deconstructions of the definitions for the words that are integral to the theme of his talk, as he does in "Simple Made Easy":

Once you establish a common understanding for the meaning of the words you are using, you have not only cleared up any potential misunderstandings, but you have also implicitly established points of agreement and have established a solid foundation to build on.

Feb 16, 2017 · 168 points, 36 comments · submitted by nailer
This is a classic. Definitely worth watching for everyone working within IT.
One thing I've found super helpful on my current project (which happens to be node) is using OSS concepts and npm as a unit of modularity.

Eg, everything is just a grab bag of functions in an npm module (sometimes with a closure holding some state - I either reject or don't understand FP people when they claim FP doesn't have state).

Each module has tests, dependencies, a README, and if it is reusable by other projects, is even OSS'd and published. Writing software as if it's going to be published makes me more modular. Being modular makes things easier to reason about and has therefore stopped my codebase from becoming complex to work with.

FP people have never claimed they don't have state. That's a straw man used to argue against something no one is saying.

The claim is that there is no hidden state - everything is made explicit.

I have no argument against FP, nor am I making one here. But plenty of FP advocates claim FP "avoids state". It's not a straw man; it's just experience from asking people to explain FP.
> I either reject or don't understand FP people when they claim FP doesn't have state).

But that's not what state is, closures are just an easier way to define lots of functions with similar parts.

EDIT: Sure, you can call "whether or not f() or x are defined at the moment of calling (y) => f(x, y);" a form of "state", but this is called late binding and is simply not a thing in purely functional languages like Haskell; the existence of f() and x is checked at compile time.

I think you might be confusing state and values. Pure functional programming has plenty of values. If you need a new value, you just return the new value, instead of reaching into an existing data structure and messing with it.

If a function closes over immutable values, then the resulting closure is an immutable value. If a function closes over mutable state, then it's mutable state, often even uglier than mutable objects or structs, which at least make the exact contents easier to identify.

"I either reject or don't understand FP people when they claim FP doesn't have state." FP has state, but it makes it explicit by avoiding side effects inside functions and using persistent data structures; that is, instead of mutating the state you create a new state. Without state, basically any program is totally useless.
At some point it felt like Clojure was the future, the new thing, so amazingly better - was that just a feeling of novelty? Or something went wrong with its use case?

Of course, these days it's about Rust, Swift and LLVM, but they don't have those lispy properties we love...

Have been using Clojure in production for several years.

It is great language to work in. I have found it very suitable for solving a wide range of problems. Many companies are using it successfully.

Sounds like your view of reality is based on the HN hype cycle. As far as I can tell there are many more companies using Clojure in production than Rust. (nothing against Rust, but just as an example of the bias)

I think Clojure is doing just fine. I've seen it used in "real-world" proprietary software (custom-made for a client by a third party). It's just usually packaged as a jar file, so no-one notices unless you look for certain tells.
Currently using Clojure on a side-project. It makes me so much more productive -- a real win when I don't have a ton of hours to devote to a project due to also having a day job :)

If only I could find a day job using Clojure...

I've been using it for 3 years and as I get my teeth further into my current project, I am grateful for Clojure every day.
It just got old. Those who wanted to check it out already have, those who liked it either got a job using it or have spent enough time with it to get bored, and those who didn't have probably forgotten about it already.

People just need a change every now and then, you can't get excited about stuff you see or use every day after a while.

I felt like that, like Clojure was the future, around 2009/2010. But then Java libraries and their impossible stack traces got in the way.

I've been waiting for a native Clojure implementation (or on top of Python or the Erlang VM) ever since.

Yep, I'd love something like Clojure with an implementation/tooling like Go's.
There are a few abandoned attempts.

Writing from scratch those Java libraries, including a good quality AOT compiler and GC is not something to do as hobby on the weekends.

Don't need all those Java libraries if you've got good FFI with C libraries.

Don't need AOT compilation; if you want performance, just stick with regular Clojure on the JVM.

I'd love to just see a small general-purpose interpreted Clojure (quick start up, small memory footprint, easy access to C libs), even if it lacked concurrency features.

For that, I fail to see why one wouldn't use a Scheme or Common Lisp compiler instead.
Yeah, for native executables, CL and Racket are much further ahead.
Thank you. Though I really like having Clojure's:

* literal syntax for maps, vectors, sets, and regexes

* keywords

* clear separation of functional constructs (`for`, `map`, etc.) vs side-effecting ones (`do`, `doall`, `when`).

* large standard library of built-in functions, with overall fairly nice naming of things.

I've looked at Scheme, but it appears to be missing those things. I think some of them may be provided by srfi's, but upon a quick reading I couldn't make much sense of how to include and use them.

Racket is probably something you should look at. I'm not sure it has all these things, but it is also a modern updated Lisp language based on Scheme.
Lumo ( or Planck may fit your requirements, though they lack a C FFI. They're based on ClojureScript/JavaScript, and start up way faster than JVM Clojure. You could probably try the node-ffi library with Lumo.

There's the abandoned ClojureC project ( There's also JVM-to-native compilers like gcj or ExcelsiorJet.

But at the moment, it doesn't seem like there's an established way to do all that.

Hey. My attempt is not abandoned, just sleeping :).

The best chance to get it is to extend something that is ClojureScript based. I think you can get pretty close to it.

My implementation was never really targeting production use, but rather exploring some ideas in the VM.

I would love to continue working on it, but I simply do not have time for such a project.

See if you are interested.

I think that you are right that Rust, Swift, etc. have the hype now.

In my mind, this is a product of containerization. Java solved a lot of problems that we faced with deployment. Containers have made deployment even simpler, and suddenly the Java runtime is no longer as valuable as it once was. Furthermore, in a service-oriented architecture we don't really need too much interop with existing code.

I think that Clojure is a fantastic language, and I use it for my side projects as much as I can. But the promises made by Clojure don't sound as sexy as they did several years ago, hence the lack of hype.

I feel like every new language has a honeymoon period. Clojure is still alive and well (and growing bigger consistently) but it doesn't have that new language hype anymore.
Clojure's first stable release was in 2009 so it's either very young or very old, depending on how far you zoom out.

Rust is exciting for use-cases that are very different from Clojure's, and the only thing I can say for Swift in this context is that I prefer it to Javascript, which I in turn prefer to other C-style languages.

I'm currently working on a single-person (but expected to grow) project in Clojure and really appreciate the concurrency and state primitives, the functional standard library, the ecosystem and community of high-quality standard tools and packages, and (while I seldom write them myself) macros, which enable you to write amazingly readable code. The community has a strong preference for functions over macros, but used judiciously you can get things like Clojure's core.async. So you get the benefits of a Lisp without a lot of the drawbacks commonly pointed out regarding other Lisps. I enjoy it a lot.

Needs a (2011) in the title. Still a very good session, though.
Simple Made Easy is a great introduction to the Rich Hickey Fanclub [1] ;)

Other recommendations for early viewing are "Hammock Driven Development", "The Value of Values" and "The Design of Datomic".


Hickey may be a brilliant software architect, but I'm wondering how high he ranks as a business leader. How is his company Datomic doing? Also in the light of the new database service Cloud Spanner just launched by Google.
Didn't see anything about time-series features in spanner.
I would love to know how using Datomic is vs. rolling your own data-immutability solution via other mechanisms but using off-the-shelf SQL/big-data tools.
Anecdotal, but I've run into a couple of companies currently using Datomic in analytics and ML (with Clojure).
Datomic is very different from the typical database in terms of the operations it supports. I don't think Google Spanner, or the other similar products, are direct competitors.

I don't know much about how they are doing financially though.

You forgot "Are We There Yet?", which blew my mind at the time ("with respect to other code, mutability is an immutable function with a hidden time argument") and which was MY introduction to this fanclub.

Rich Hickey's talk on simplicity is a must watch.

And one of the most useful talks of all time for building organizations is by Ed Catmull (of Pixar)

In a similar vein like the first one, maybe, but with the addition of some physicist's humor if you are in into that kind of thing:
I saw Simple Made Easy live, in person, in Saint Louis (where I live), back in Fall 2011. I remember the experience very well ~ forever changed the trajectory of my personal and professional efforts at software development.

I was so under-exposed to non-C-family languages at the time that I asked the guy next to me whether the code used to demo the ideas "was Haskell or something else?" I felt embarrassed at the shocked look on his face; my grand exploration of Clojure (and other functional languages too!) began shortly thereafter. The previous evening, I'd accidentally had dinner with Dr. Gerald Sussman... what a conference, what an experience was Strange Loop 2011!


The Front End Architecture Revolution by David Nolen is one of my all-time favorites, and was probably the biggest single influence on the trajectory of my own development career:
Not sure if unthinkable is the right word:

Simple Made Easy:

David Nolen talks about how immutable structures work:

Objects are Marionettes:

> Not sure if unthinkable is the right word

That's exactly what I'm asking about. I'm familiar with Hickey and Nolen.

Jan 29, 2017 · nilliams on Trunk-Based Development
> That seem to go against the definition of simple: easily understood or done; presenting no difficulty.

That's not a great definition of 'simple' to apply to software dev. Simple != easy, because easy is inherently about familiarity. See Rich Hickey's excellent talk on the subject [1].


That talk doesn't relate to the whole discipline of software development, though. He's mostly arguing that if you choose ease over simplicity in your programming/code, it can heavily affect the output of your work and its long-term viability. It's about not introducing complexity into the design and your product.

But this is about the process and workflows of collaboration on code, not the code or the product itself. Some of these concepts certainly apply but just because it is in the realm of software development doesn't mean that particular definition always applies.

Hmm, not quite how I'd see it. You're right to point out different considerations are required for 'process and workflows', but I think Rich's simple/easy definitions still hold up in those situations, and are more useful than munging the two terms together.

So instead I'd say that when it comes to 'process and workflows' easiness becomes more important, because if it's an action you're literally doing everyday, you want that to be easy. In fact you might be willing to write more 'complex' underlying code/infrastructure (as we do when we setup CI) to make the process 'easy'.

Rich Hickey(Creator of Clojure) has talked about this. Type-specific lingo prevents one from applying common patterns of transformation. Check
Jan 03, 2017 · 1 points, 2 comments · submitted by CoolGuySteve
I find myself referencing parts of this talk a lot when talking with my coworkers. In particular, the guardrail and knitted castle analogies are quite elegant.
This is posted every week in one form or the other. A classic talk though.
Coming from the HFT side, I find C++ surpasses C in a lot of ways for optimization work. Mainly you can use integer template arguments and generic functions to abstract all the boilerplate in a way that is more safe than C macros.

For a semi-contrived example, instead of writing a do4Things() and do8Things() to unwind some loops, I can write template<int> doThings() where the int argument is the bound on the loop.
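That template<int> trick has a direct analogue in Rust const generics; here is a hypothetical sketch (not the commenter's C++) where the loop bound is a compile-time constant, so the compiler can fully unroll the loop per instantiation:

```rust
// Hypothetical analogue of template<int> doThings(): N is a
// compile-time constant, so each instantiation gets a fixed trip
// count the optimizer can unroll, with no runtime bound to check.
fn do_things<const N: usize>(xs: &[u64; N]) -> u64 {
    let mut acc = 0u64;
    for i in 0..N {
        acc += xs[i]; // trip count known at compile time
    }
    acc
}

fn main() {
    // do_things::<4> and do_things::<8> are separate instantiations,
    // like do4Things() and do8Things() without the boilerplate.
    assert_eq!(do_things(&[1u64, 2, 3, 4]), 10);
    assert_eq!(do_things(&[5u64; 8]), 40);
    println!("ok");
}
```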

And having things like a universal template<typename Whatever> toString() that operates on enum classes is nice.

The downside is that it's horribly easy to invoke allocations and copy constructors by forgetting an ampersand somewhere, and the std library isn't well suited to avoiding that behavior either. You have to be vigilant on your timings and occasionally callgrind the whole thing.

The other downside is that your colleagues are more likely to "knit a castle" with ridiculous class hierarchies or over-generalization. ( )

I have a friend who makes a living writing CUDA kernels as C++ templates. His job will be safe for decades to come because no one will be able to decipher the code. :)
Yeah, the nice thing about C++ is that you can generally hide highly optimized portions of code behind nice templates or class interfaces. And with templates you can write libraries that let a lot of compile time logic happen to inline a bunch of stuff and not have to resort to virtual methods.

But when it comes to using things like custom allocators, etc. it's a nightmare. Or a lot of the compile time "traits".

Dec 30, 2016 · mindcrash on Why Clojure? (2010)
Typed data was already possible with schema [1], which is now maintained by the Plumatic (formerly Prismatic) team. Which also says something about the way Clojure is awesome: everything is optional, and you aren't forced to use anything to get to a working solution. Stuart Halloway and Rich Hickey also have some great talks on this subject. If you are interested, you might want to check out "Radical Simplicity" [1] by Stuart and "Simple Made Easy" [2] by Rich to see why Clojure wipes the floor with almost any other programming language, especially the likes of C# and Java.

I am not surprised at all Bob Martin loves it. Any principled software engineer would.



> If you're gonna spend many thousands of hours using a language, don't use initial learn-time as the one thing to optimize for!

That reminds me of this wonderful talk by Rich Hickey called Simple Made Easy,

We've been using Pouch in a progressive web app designed to be used in the field in remote locations, and while there was a learning curve in understanding how the replication protocol works, and, as highlighted in another comment, the way Chrome stores data for a web app, we couldn't be happier with pouch/couch.

Additionally, moving out of Cloudant and into CouchDB with an openresty based reverse proxy has made things even better, and really fun. This is one of those stacks that feels easy and simple at the same time. (Ref:

Any guidance on moving from Cloudant to CouchDB? Are you hosting it yourself? If so, has the amount of maintenance been more than you expected, or was it mostly setup time and then forget about it?
Yup, hosting it ourselves. It's a peach. There are a few things that it doesn't come with out of the box: clustering, full-text search, geoindexing, chained map-reduce, auto-compaction, automatic index updates. Once that's done, if anything it was more "forget about it" than Cloudant, which bills on requests/throughput. This can catch you out, because continuous replications between databases on the same Cloudant account are also counted as requests and billed as such. And continuous replication is very chatty. So if you have a particularly creative multi-master setup, like a per-user db -> master db kind of thing going, this can eat up your throughput / push up your bills with no practical benefit.

It's really openresty + couch that does it for me. The idea of writing security / validations / routing etc. right into nginx combines beautifully with the CouchDB way of thinking.

We (Cloudant) recently changed the pricing model to help with this. You can now take a fixed-cost plan that charges based on reserved throughput capacity instead of metered use. This should help with the replication scenario. See

Stefan Kruger, IBM Cloudant Offering Manager

Ah, yeah, you weren't the only one bitten by that. We actually went and changed the Cloudant metering model recently so that you're billed on provisioned throughput rather than total request volume. You get dramatically more predictable billing, with the tradeoff that clients need to handle a 429 Too Many Requests response if the Cloudant instance is under-sized. More here:

Rich Hickey gave one of my favorite talks that I recommend to all programmers no matter which language they code in:

Oct 25, 2016 · noam87 on Happiness is a Boring Stack
I prefer to go with Rich Hickey's definition of "simple" (

That's why I chose Elixir for our product, and am so glad I did; it may be shiny and new, but it's dead simple.

The "boring" familiar choice would have been Ruby / Node, etc.

I think the problem is when people jump on shiny new bandwagons just because of the shiny factor. When instead they should ask: "Does this shiny new technology radically simplify something that is currently complex and is at the core of my application?" (again, going with the above talk's definition of "simple")

Nice article, but I think it touches on two problems and then offers a solution to only one.

Every program has a code structure. Certain programs have better code structure than others. These are properties independent of the programming language. JavaScript evolved from a single entry point, being the [in]famous $.ready() to set behaviors of some HTML elements, to full-blown ES6 single-page applications.

It all started as a toy language.

But its simplicity is also its flaw: it enables every human with a not-so-deep understanding of computer architecture to write a button that changes color on click. The absence of a type system and of a solid class paradigm (introduced in ES6) spoiled programmers into passing any object down to any function, breaking well-known software principles: the Law of Demeter, the Open/Closed Principle, and the Liskov Substitution Principle.

I've been in the Web space professionally for 15+ years, and those are the 3 rules I see JS devs break the most, generating complected code (for more understanding of the term, have a look at ), hard to maintain and extend, like the example shown in this article.

The advice to build interfaces around data structures, proposed as a solution, is no different from the Liskov Substitution Principle.

The other problem the article cites is the event loop.

At the time of $.ready() there was no event loop. Developers were just attaching functions to user events: clicks, hovers, blurs, focus. Just a direct mapping between an element and a function. You can simply come to the conclusion that the trigger and the action to be performed were not loosely coupled, but indeed tied together. Easy, yet not scalable.

Tying events to the DOM structure was another sin, opening more questions: should an element that is not interactable fire events? Bubble them? Every browser had its own answer to those questions. Things got even more complicated with single-page applications, in which HTML elements can be added and removed at any time. So here comes the event loop, like other well-known UI stacks did in the past.

The concept of an event loop is not a novelty; it is indeed bound to the architecture of our computers: clock cycles, interrupts, kernel events. In the case of Windows it is the well-known WPF, which has, among a lot of other things like any Microsoft product, the concept of a dispatcher that is central to the flux architecture.

In 2015/2016, with React/Flux, JavaScript and the Web are moving out of puberty, enabling developers to write clean, decoupled, extensible code. Though not all devs are ready to grasp those architectures that are so obvious in other ecosystems. To cite Poul-Henning Kamp in A Generation Lost in the Bazaar:

"So far they have all failed spectacularly, because the generation of lost dot-com wunderkinder in the bazaar has never seen a cathedral and therefore cannot even imagine why you would want one in the first place, much less what it should look like. It is a sad irony, indeed, that those who most need to read it may find The Design of Design entirely incomprehensible."

my 2 cents

Oct 14, 2016 · andreareina on Taking PHP Seriously
It's a pretty nuanced phrase and difficult to replace. I might make the case that "easy to reason about" <=> "simple" in the sense that Rich Hickey uses it[1] but that doesn't do anything for the verb itself.

The phrase has a high correlation with subjects that are themselves highly correlated with smug proponents; functional programming is one of the greater ones of these.

Personally I like the phrase. Then again I self-identify as a (non-smug) SmugLispWeenie[2] so of course I like it.

[1] [2]

Good luck my good friend with having tied your professional fortune to a small company that you are not affiliated with. This is not politics, this is simply dangerous and I do feel that way every time I see someone with a copy of Sublime. Since I'm a lecturer, I see this issue of lock-in and easy vs. simple/powerful a lot. I'm not taking this lightly, I want the best for my fellow professionals that are just too young to know better. You personally, might be older and more experienced, and I do not have a grudge with your opinion. I was simply stating mine for the reasons given without trying to step on your foot.

As for simplicity, there is nothing that I have seen in any editor that is simpler than VIM modal editing or a LISP-machine to do everything. Having a shiny GUI is inherently not simple, but complex.

If you are not familiar with the original meanings of these terms, there's a qualified speaker:

That a tool is not easy in the beginning is ultimately irrelevant if it is simple. That is, if you've got enough time to master and profit from it. Which is what every professional software engineer has.

VIM's modal editing isn't remotely simple. There is a huge language to learn, and most keys do not form useful patterns which are easy to remember.

The problem with "you need to invest the time, trust me" is that the same argument can be used for vim, Emacs, Sublime, Atom, VS Code, Eclipse, IntelliJ, and any other editor. I can't invest the time in all of them to become an expert.

The difference with Emacs and vim is that they require a sizable time investment just to become competent, as they refuse to fit into the OSes they are running on (in the case of Windows and Mac).

Well, to make a long story and potential flame war short: My original post was not about VIM or Emacs. It was stating happiness about an editor (Lime) that tries to be easy to get started, yet is open source.
> VIM's modal editing isn't remotely simple.

While it is highly non-intuitive at first, one can learn the basics in a day or two; from then on it's mostly transferring what you learn to muscle memory. I suppose one can do more advanced stuff in vim that is more complex to learn, but the basics are pretty easy. (Full disclosure: I used to use vim for a couple of years but switched to emacs about ten years ago. I still use vi for quickly editing a config file on a regular basis.)

> As for simplicity, there is nothing that I have seen in any editor that is simpler than VIM modal editing or a LISP-machine to do everything. Having a shiny GUI is inherently not simple, but complex.

I'm a Vim user. But it's exactly this kind of thing that pushes newbies away. Yes, Vim is "simple" conceptually. But in this real world we live in, Vim often makes things more complicated. It's one more thing to learn - and a weird one.

On top of that, Vim's architecture is ancient and not everything has aged gracefully.

I concur. That's why the only good thing that I said about VIM is the modal editing which is the most pleasing and efficient mode of editing text that I have ever seen.

However, all your points are very correct! That is why I have switched to Emacs where I can still have full VIM modal editing with the other issues you mentioned not being an issue. Emacs has a mode called 'evil' which fully emulates VIM.

Best of both worlds^^

> Good luck my good friend with having tied your professional fortune to a small company that you are not affiliated with

You say that as if Sublime HQ Pty Ltd were to suddenly go out of business, the editor is immediately and completely useless.

This is obviously not the case, in any way, shape or form. It could go out of business tomorrow, and Sublime Text would be perfectly usable (and extensible) until the OSes changed in a way which stopped it from working.

Being open source provides no more guarantee of future development than being closed source does.

That's just it - I've done nothing of the sort. Sublime has no "unique" features that don't have equivalents in form and function on other editors.

So this "lock in" simply does not exist in the case of Sublime.

That naturally leads to the question of "Why it, and not a free alternative?"

I said it elsewhere in this thread, but the main reason I'm on Sublime and not Atom or an equivalent competitor is speed. It's fast, and it's developed conservatively. While watching other editors hitch and stutter when they open large files, scroll, start up, or process syntax highlighting probably doesn't have much impact on my time, it does cause a great deal of annoyance, hence stress, which probably does impact productivity in some way.

The other reason is that it lacks in bloat, which to me, means it lacks a ton of features I will never use, something I cannot say about Vim (macros, registers, hundreds of ancillary commands) or Emacs (an entire Lisp vm) and their associated complexity. However, it can be trivially extended with Python, which means any functionality it lacks has likely been worked around by someone in the community.

On top of all that, the author has indicated that he'd rather see the editor go open source than be abandoned[1], but I don't share the common belief that no updates for years means "abandoned", either.


> Good luck my good friend with having tied your professional fortune to a small company that you are not affiliated with. This is not politics, this is simply dangerous and I do feel that way every time I see someone with a copy of Sublime.

This is just wild exaggeration. It doesn't take more than a few weeks or so to become reasonably productive with another text editor. We like to think that the many plugins and shortcuts we build up over years of using an editor add like 100% speed increases, while at best it's increments of a few fractions of a percent.

And most of us developers are probably familiar with at least two editors anyway. Personally I'm intimately familiar and productive with both emacs and sublime, but still prefer sublime. If sublime were to suddenly close down shop and not release their sources, I could switch on a dime.

> As for simplicity, there is nothing that I have seen in any editor that is simpler than VIM modal editing or a LISP-machine to do everything.

In theory, yes. In practice, I've found it much more complex to work with emacs plugins than with Sublime plugins. My conclusion is that overly simple languages like LISP just transfer complexity from the language itself to the code that you're writing.

I'm sure some people find a kind of simplicity there that they like, but people are different.

VIM modal editing is also a thing that may be nice to some people, but personally I find modes to be annoying. It's this state that I always have to keep in sync between me and the editor, and I don't like it. I get the point and the benefits, and I've tried several times, but it just doesn't click for me. So I don't experience that as a simplicity.

> Having a shiny GUI is inherently not simple, but complex.

I wouldn't call Sublime's GUI shiny. In fact it's quite minimalistic. Even more so than Emacs' GUI if you ask me, especially once you've added all the plugins to match functionality.

Again, it's something about the transfer of complexity. In theory, in its base implementation, Emacs is simpler because it makes few assumptions. But this transfers a lot of complexity to plug-in writers, because you get conventions instead, which often causes problems when plug-ins interact.

>My conclusion is that overly simple languages like LISP just transfer complexity from the language itself to the code that you're writing.

Can you give some examples please? Python is even simpler than CL however I don't regard it to be transferring any complexity to the programmer.

Oct 05, 2016 · Eupolemos on Not OK, Google
I fear you will be unable to recognize when that burger was your choice and when it was a reaction. You probably won't notice. And that is harmless.

I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.

Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.

You could say that Plato wanted us to make easy things simple (link for distinction:

I believe this to be a move in the opposite direction. We should have a care.

To my mind, leading a simple life is enjoying a burger at a restaurant/bar I frequent already. Simplicity _is_ accepting that Google algorithmically noticed a trend and just helped me do things I already do.
Before replying, you could at least have made an effort to understand what I meant with the distinction between simple and easy.

If you do not care what I say, why even reply?

Sorry, whose mind? It sounds like you are renting it out.
Do you never use digital tools to outsource mental effort? Seems like a similar argument could be made for using a calculator.
Calculators provide you a completely fair assistance with your query. There is zero bias in a calculator. If you ask it what two plus two is, you're going to get four.

Google is designed to sell ads, and subtly influence your behavior towards the most profitable results. Please do not confuse a fact-based tool with an ad generator.

> subtly influence your behavior towards the most profitable results

This is the very common theory that a company will (shadily) try to offer you a worse product to make more profit. It fails to account for competing companies that would jump on that opportunity to offer their better product, and get the market share.

But what's funny here is that the suggested alternative is to not get any product at all. As in: "Poor OP, didn't realize that it wasn't really him who was enjoying that burger he was enjoying."

"Worse" is often subjective. And the problem is often just the removal of the possibility of a better product to take hold. For example, Google prioritizes Google services. It gets you on as many Google services as possible. Let's use, say, that it pushes you towards Play Music when you search for songs.

Maybe Play Music is the best thing. Maybe it is not. Neither of us can answer that. But if a definitively better product comes along it will have no way to make a foothold because Google is still pushing everyone to their own product, from their other product (Search), and even when people try your product, if they use Google's other products, they'll tend to stick to other Google products.

Honestly, the worst problem with companies like Google is vertical integration. The ability to provide a wide product line where you integrate best with other products your own company makes has an incredibly chilling effect on competition, and therefore, innovation.

And if your theory that companies prioritizing results for profit would lose to companies that always prefer the best products, why is DuckDuckGo still in what... fourth or fifth place?

> And if your theory that companies prioritizing results for profit would lose to companies that always prefer the best products, why is DuckDuckGo still in what... fourth or fifth place?

You'd need to argue that DuckDuckGo's search results are better; I don't think they are. That's what made Google first among many competing search engines, before there was even a clear business model in it. Today the incentive to outperform is bigger.

If a product Y definitely better than X comes along, and only Google Search fails to rank it higher, people will start thinking "I'd rather search on Bing too, as it finds better products in this category".

Presumptuous much? Comments like yours are what makes discussions like this so difficult, and so much less interesting.
Yes, traps are usually designed so that it is simple to get into them. It is not that cheese is bad, it is that you are trapped.
Are you comparing something designed to kill a rat with something designed to help me go to a burger place I like, or leave on time for work?
Yes. How does Google make money off this service again?
By having burger places pay money to get on the list of places it helpfully gives us when we want to eat a tasty burger. I still get my tasty burger.
Yes, because when one has already decided that feature $FOO is a trap, any further discussion is likely to be limited to describing how "yes, just like a trap is designed is the thing we're talking about" whether the analogy is apt or not. Something something supporting a narrative.
That's the thing though. I reject the notion that you ever actually make a choice. I would posit that 100% of the actions you take are simply the deterministic reactions when the current world state is filtered through your brain. Then, after the fact, your brain gets busy inventing a reason that you took a particular action and calls it a "choice" when really you were just going to do what you were going to do anyway.

"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"

In effect your argument is that we don't have free will, right?

I wonder what is then causing inefficiency when we read a restaurant's menu and can't decide what we will have.

I'm with those who think we make choices and decisions far less often than we think, but that we still do make them.

I am no longer intrigued by the privacy discussion, but by the actual possibility that we are just consciousnesses controlled by the Google hivemind.

This is like being absolutely full-on plugged into the Matrix world. And we're living right in it.

These guys are like the ones who've taken the red pill and gone on to find out how far the rabbit hole goes.

(edit: I'm even more intrigued by the possibility that the future is not just the Matrix singularity, but an oligopoly of several large singularities, all fighting to plug us in)

Sure, but philosophical musings on the nature of free will aside, there's a practical worry about the amount of power a private company has over your actions. I'd rather be ordering burgers because they taste good than because a company wanted me to--I expect this will lead to greater happiness for me in the long run.
Yes, but only because your happiness metric maximizes when you exercise your freedom of choice.

Other people's happiness metrics work differently, and all popular web services are popular precisely because they satisfy the unconscious desires of the majority of people.

I think for quite a large number of people, allowing AIs to make decisions for them will probably be better for them.
Imagine some day your doctor advises to cut back on burgers and alcohol. Is Google going to incorporate that advice in its bar recommendations?
Why not? As long as you're clicking their ads, they'll make money regardless of whether you're buying a burger or a salad.
Is it Google's responsibility to? I would say no. If algorithms detect that an individual is going to a bar every Monday and Thursday night, and then starts providing information about said bar on Monday and Thursday nights I don't see the problem.

But I think it would be a problem if every Monday and Thursday night Google Now started providing information about AA meetings in the area, instead of bar information. It's up to the user to make the choice, Google Now just detects trends and then displays information based on those trends.

I go to the gym every Monday, Tuesday, Thursday, and Friday morning. And each of those mornings Google Now tells me how many minutes it will take me to get to the gym from my current location. Should Google Now start giving me directions to the nearest breakfast place instead? No, not unless that starts becoming my pattern.

It may not be their responsibility (although if it had that information it would be the morally correct choice). However, regardless of the responsibility -- the CEO of the company saying "we're going to make your life better!" by an AI pushing products is almost certainly not going to make your life better.

> Should Google Now start giving me directions to the nearest breakfast place instead?

That may depend on how much Waffle House pays for advertising, and that is the problem.

If you're trying to change your lifestyle, it's more difficult when you have a bad friend constantly enabling the behavior you're trying to cease.

Google may not have a responsibility to be a good friend, but personally I'd prefer not to have a bad friend always following me around, thus I'm a little less excited about this feature.

You can just tell it to stop. It's not hard.
I think many would rather tell it when to start instead. What's hard about telling it to stop is when you can't tell it's started because it's something more nuanced than the obvious diet plan.
That rather depends on the objectives of the AI.

If you replace "AI" with "marketing" would you still make that statement?

If you replace "ai" with "your spouse" would that change be as intellectually useless?
Don't you think that's a pretty severe statement with regard to free will and agency? If I'm just a consumer wired up to a machine that's deciding what's best for me (even with the best of intentions), doesn't that make me less human?

Should I just be an actor playing through a set itinerary of vacations and movies and burgers and relationships? Maybe you think it's that way already, except less perfect than it might be, but that's a pretty frightening notion to me.

The same argument was historically made to justify slavery.
And to justify the continued existence of the electoral college.
When the AIs are working in service of corporations this seems incredibly unlikely.

We already see what happens when people's decision-making is colored by mass-media advertising: an obese population trapped by debts taken out to fuel consumption.

It is in other peoples best interests for you to work like a slave, be addicted to unhealthy habits & run up vast debts in order to buy their products.

We keep allowing those with power to distort the markets gaining themselves more money and more power at the expense of the little guy. I don't see any reason why AI in the service of the powerful will do anything but accelerate that.

Given all the other points in life where, despite my awareness, I don't have much choice, how is an AI just directing me really any different?

My culture, education and skills limit what work I can do.

Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?

I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.

I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.

Google will let me know that the things I prefer back home? there are equivalents nearby.

Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.

Almost all decisions are unconscious decisions, whether or not Google is involved. We usually rationalize our reasoning after the decision is made.
> I feel we are widening the field of unconscious decisions and I see that as inherently bad

I'm curious why you think this is bad. I don't necessarily think it is good but I also don't necessarily think it is actually happening

Which news-sources do you use?

Which news-sources are you going to learn about?

Which news-sources are you for some reason very unlikely to encounter?

Now apply a real-time AI filter-bubble, able to also include government policies in its decision-making, onto those questions.

I believe the most important thing in life is thinking. I believe a key element of thinking is looking at "easy stuff", the stuff we just live with every day and don't think about, and for some reason be forced to think about it and make it simple.

Take the Snowden leak. We lived a nice life being the good guys, and that kind of surveillance was publicly thought of as conspiracy theory. Suddenly we were forced to look at what was going on. How much of it are we okay with? On the grounds of what principles and tradeoffs? This is all very unpleasant, but we're all better off for facing those questions and working towards new principles. We take a chaotic gruel of cons and pros and try to hammer them into a few simple principles our societies may function by. For instance, the separation of power into three branches has served us well.

I fear that we end up in a world where raising such unpleasant questions becomes almost impossible - and we'll never even notice. Not because of AI (I believe AI to be inevitable and fascinating) but because of the way AI is used.

Living a life assisted by an AI, made and paid for by someone else, seems like the epitome of naivete to me.

> I fear you will be unable to recognize when that burger was your choice and when it was a reaction.

Maybe the illusion is that it was a choice . . .

Not far from the mark. People have quite different behaviors when asked "what do you want?" vs a constant stream of "do you want X?" questions.
I'm sorry but that just sounds like blind fear mongering. What you're saying is vague and doesn't really mean much.

It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".

Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain things more efficiently.

Imagine it wasn't actually a computer. Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available to a wider population rather than just rich people.

What he's saying is that this is not "humanity inventing something to make life better". It's a company inventing something to make money.

And it's not a simple product like glasses where you pay with money and then they improve your vision. It's a product which goes far beyond your understanding and for which you don't pay money.

Google isn't interested in making your life better. What they are interested in is getting you to believe that they want to make your life better and to then recommend going to that bar, because the bar owner has given Google money to advertise for the bar.

Yes, you might actually like that bar, but Google isn't going to recommend going there in intervals which are beneficial to you. They'd rather have you go there a few too many times. Because that's what makes them money. It's not improving your life, which makes them money. Their AI will always work against you, whenever it can without you noticing.

Imagine that you were trying to quit smoking and your electronic secretary kept updating you on the cheapest place to find your favorite cigarettes? With no way to tell it not to do that.
So your issue is your secretary doing its job poorly?

First, there is a way to tell it to not do that. With Google Now, you simply tap the menu and say "No more notification like this". With the assistant, you will probably be able to ask directly.

Second, let's be honest, humans fail pretty often too, so that's just a weak argument.

Lastly, I think it's unfair to dismiss a new technology just because it could maybe fail, without having even tried it.

> So your issue is your secretary doing it's job poorly?

I think the real issue is the casual deception which you just fell for: It isn't "your" electronic secretary, and the thing it just did might actually be a "good job" from the perspective of those who control it.

How about if the system is working exceptionally well, you're a depressed person, and the next ad you see is auctioned off between a therapist, a pharma company, and a noose supply store in the 100ms it takes to render your MyFaceGram profile?

The awful success cases are far more interesting than the awful failure cases.

I have no problem with ads for therapists or pharma companies competing for advertising space in front of me because they have algorithmically determined that I am a qualified lead. That actually sounds great from a mental health perspective.

Your noose example is pretty contrived, however.

Obviously the first two aren't the problematic ones. The issue is that an algorithm wouldn't know what distinguishes those from the third.

How about sleeping pills? Opiates? Local extortionist cult?

I think that algorithms, and AI specifically, are perfectly able to learn what distinguishes those. Maybe even better than someone who might not be in their best state of mind.
The handwaviness is telling. Why would an algorithm or its creators even care about the difference? The highest bidder is the highest bidder.
Because the whole of Google's ad business stands on people wanting to click on the ads shown, and buy the products offered through them. That's why they spend resources on detecting misleading or fraudulent ads, which by your reasoning they wouldn't care about as long as they paid. PR is very important for this business to be sustainable: If the goal was for every user to click through one ad, and then never again, that might not even pay one engineer's salary.
What's misleading or fraudulent about those ads? Maybe you mean "morally reprehensible," in which case I ask where you draw the line between the morally reprehensible (auctioning off the method of suicide to a depressed person) and the morally questionable (say, auctioning off the final bankrupting car purchase to a financially irresponsible person)?
Detecting misleading and fraudulent ads is just an example of things they wouldn't spend resources on, if following your reasoning of "short-term money is the only thing they care about."

There's not only the "morally reprehensible" metric ("Don't be evil"); there's also the "absolute PR catastrophe" metric that printing such an ad for a rope would mean.

I think you misunderstand me by a large margin.

I'm not saying we shouldn't use AIs. We should, however, think about how we use them.

To build on your example, what are the dangers of having a personal secretary on the payroll of anyone but you?

What I am expecting from this is a super devious filter bubble - because that's how you make money. Google's old slogan "Don't be evil" is long gone. "For a greater good" might be more on point.

> In the case of AI, make us perform certain things more efficiently.

What does the Google Assistant help me do more efficiently? In all honesty, I can't figure it out. I don't need or want a secretary, and I can do written planning for myself.

I need less paperwork and fewer web forms and identities, but the Google Assistant only promises more of that crap.

I'm never buying one. It's a sacrifice of privacy for zero to marginal gains in convenience.

If you can't come up with uses for it, you weren't its target audience in the first place.
Sure, but then I'm not sure anyone I know is the target audience. Not that many people really need or want personal secretaries in the first place, let alone want to make financial and privacy sacrifices so they can have a mentally retarded AI pseudo-secretary.

Most people get through their daily lives just fine on their own.

Ignoring your derisive tone, the statement "most people get through their daily lives just fine without it" applies to every new technology. Yet here we are, typing away on the internet.
"Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule."

In your metaphor, you are implicitly paying the secretary, so the secretary is incentivized to maintain your interests.

How much have you paid Google for its free services?

Your metaphor is inapplicable. You don't have a secretary telling you these things; you have a salesman trying to sell you things, and the salesman is getting smarter every day while you aren't. Not the same thing at all.

Google earns most of their money through ads.
Yes. Google is selling you, to advertisers, quite literally.

When you aren't paying anything for something of value, YOU are the product.

> Google is selling you, to advertisers, quite literally.

No, that would be slavery, which is illegal.

Google is selling, to advertisers, advertising space on various channels that you provide in exchange for Google services.

> When you aren't paying anything for something of value, YOU are the product.

No, when you aren't paying money for something of value, you are probably paying something else of value for it; often, something that the person with whom you are trading is then selling for money, making you a supplier of an input to the good or service they are selling for money.

That's why I called them a salesman. They sell things. Their interests are not simply your own.

It seems to be a theme here today... a company can't serve both advertisers and customers. In the end, one of them has to win, and given the monetary flows, it's not even remotely a contest which it will be.

They don't sell things. They forward you towards people who do sell things you may be interested in. You're free to ignore it, and if you're not interested in what they're showing you, that means they failed at their job.

It's funny what a bad stigma ads have gotten, but at the core, if you think about it, it's not necessarily a bad thing. Think of a friend recommending a restaurant, a new game to play, a movie to go watch. In that case you'll be super interested, but if an AI who probably knows your taste better than your friend suggests something to you, you are instantly turned off and annoyed.

I think the root cause of this is that there are so many mediocre ads out there that they ruin it for all. Your mind just blindly blocks all ads now.

Rich Hickey "Simple Made Easy"
+1 for Hickey's talks. The Changelog compiled a selection of the best:
His "Hammock Driven Development" talk is good too:

A couple of comments there by me.

Simple Made Easy is one of those talks that never gets old to me. Never heard anyone talk about the power of reducing complexity in such a clear way.

Here's the link for those who are interested.

This is my favorite. I also really like hammock-driven development.
Simple Made Easy by Rich Hickey

I would say all of "Rich Hickey's Greatest Hits":

As Rich Hickey argues[1], 'simple' can be an objective statement. Though I agree that without an explanation of what exactly makes this library 'simple', the better word may be 'easy'.


As a side note in his talk "Simple Made Easy" (around minute 42), Rich Hickey mentions that conditional statements are complex, because they spread (business) logic throughout the program.

As a simpler (in the Hickey-sense) alternative, he lists rule systems and logic programming. For example, keeping parts of the business logic ("What do we consider an 'active' user?", "When do we notify a user?", etc...) as datalog expressions, maybe even storing them in a database, specifies them all in a single place. This helps to ensure consistency throughout the program. One could even give access to these specifications to a client, who can then customise the application directly in logic, instead of chasing throughout the whole code base.

Basically everyone involved agrees on a common language of predicates explicitly, instead of informally in database queries, UI, application code, etc...

But Hickey also notes that this thinking is pretty "cutting-edge" and probably not yet terribly practical.
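A rough Python sketch of the idea (the rule names and user fields are invented for illustration): keep each business predicate in one rule table that every part of the program consults, instead of re-encoding it as conditionals scattered through the code.

```python
# Hypothetical sketch: business rules kept in one place as data,
# instead of as conditionals spread throughout the program.
RULES = {
    # "What do we consider an 'active' user?"
    "active-user": lambda u: u["logins_last_30d"] >= 3 and not u["suspended"],
    # "When do we notify a user?"
    "notify-user": lambda u: u["opted_in"] and u["logins_last_30d"] == 0,
}

def check(rule, user):
    """Every caller shares the single definition in the rule table."""
    return RULES[rule](user)

user = {"logins_last_30d": 5, "suspended": False, "opted_in": True}
print(check("active-user", user))  # True
print(check("notify-user", user))  # False
```

Because the rules are just data, they could equally be stored in a database or edited by a client, which is the point Hickey is making about rule systems.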

It can work. My current company uses a rule system to represent most of our business logic since it is so dynamic. The downside is that we have to rebuild the entire graph into memory (times the number of threads, times the number of app servers) every time anything changes (which is constant).

Facebook wrote about rebuilding a similar system in Haskell that only changes memory incrementally, so it's definitely possible to do better.

Interesting note, thank you. Are you referring to "Sigma"?
That's the one.
May 23, 2016 · jimbokun on My time with Rails is up
The point is not about "hand-coding" at all.

It's about reading the code, and having a good mental model of what is happening. This is the point Rich Hickey tried to drive home with his talk "Simple Made Easy".

If you are a developer and haven't watched this yet, you really, really should. Very important distinction to keep in mind any time you are writing software.

Haven't read the Active Record source code, but would be interesting to find out where it falls on the "Simple vs. Easy" continuum.

Simple Made Easy is a really great talk.
If you haven't already done so, listen to this talk by Rich Hickey (the creator of Clojure). This should clear it up for you.
Thanks! Added link from the post to the lecture.
Feb 05, 2016 · wellpast on The Wrong Abstraction
Here's a very objective and powerful way to measure complexity: dependencies and volatility.

Otherwise we're all saying "complex" but not being clear and likely meaning different things.

For example, a lot of people believe that "not easy" = "complex", but as Rich Hickey articulates, that's a counterproductive way to think of complexity.

> dependencies and volatility

But what does this even mean? I'm okay with using Rich Hickey's definitions, but I don't recall that in Rich's talk.
If your system's design results in your stable components depending on the non-stable (volatile) components, your system is complex. This is because volatile components change often, and those changes ripple into your stable components, effectively rendering them volatile. Now the whole system becomes volatile, and changes to it become very hard to reason about -- hence complex.

Avoiding this problem has been captured, among others, by the Stable Dependencies Principle, which states that dependencies should point in the direction of stability. A related one is the Stable Abstractions Principle, which states that components should be as abstract as they are stable.
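A minimal Python sketch of the dependency direction (the billing/notification example is invented): the stable logic and the volatile detail both depend on one stable abstraction, so churn in the detail never ripples back into the stable code.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):  # stable and abstract, per the principle
    @abstractmethod
    def send(self, user: str, message: str) -> None: ...

def bill_user(user: str, amount: int, notifier: Notifier) -> int:
    # Stable component: it only knows the abstraction, so changes
    # to delivery details cannot force changes here.
    notifier.send(user, f"charged {amount}")
    return amount

class RecordingNotifier(Notifier):  # volatile detail, free to churn
    def __init__(self):
        self.sent = []
    def send(self, user, message):
        self.sent.append((user, message))

n = RecordingNotifier()
bill_user("ann", 10, n)
print(n.sent)  # [('ann', 'charged 10')]
```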

In a typical jQuery/Backbone kind of app, you've got some data in something like Backbone models and you've got state stored in the DOM. Keeping those two in sync brings complexity in. The React model is simpler (in the non-intertwined sense; see Simple Made Easy[1]) in that you have data in one place, a function that transforms the data to UI, and the browser DOM is managed automatically from that.

It's not perfect but it reduces complexity.
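The "UI is a function of the data" shape can be sketched in a few lines of Python (the todo example is invented): one source of truth, and the view is recomputed from it, so there is no second copy of the state to keep in sync.

```python
def render(state):
    """The entire 'view' is a pure function of the data."""
    items = "".join(f"<li>{t}</li>" for t in state["todos"])
    return f"<ul>{items}</ul>"

state = {"todos": ["ship it"]}
assert render(state) == "<ul><li>ship it</li></ul>"

state["todos"].append("simplify it")   # change the data...
assert "simplify it" in render(state)  # ...and the view follows
```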


Dec 17, 2015 · edem on Why does programming suck?
This reminds me of the talk given by Rich Hickey named "Simple Made Easy":

The "Clojurians don't like testing" meme probably has more to do with Rich Hickey's famous "guard rail programming" [1] comment than anything else. Of course, even at the time, the joke within the community was, "Yes, but Rich Hickey doesn't need to write tests!"

[1] (15:30)

> The "Clojurians don't like testing" meme probably has more to do with Rich Hickey's famous "guard rail programming" [1] comment than anything else.

And Rich Hickey isn't against testing. He was having a jab at test driven design.

Hi there, thanks for the comment. Author here.

No, poor technical user is not the entire premise. What I was trying to convey is to give the reader a chance to reevaluate the decision to send parameters that way, rather than accepting it as it is. It is hard for experienced people to think that way because they have become accustomed to it, but most of the time, in programming, we don't realize simpler solutions are possible. It helps to reevaluate through the eyes of a beginner.

Rich Hickey has a great talk on this called Simple Made Easy:

> web application

The blog post mentions getting an article, but web applications nowadays move complexity to the client and treat the server as a single API. Having that as a single URL is the natural derivation of that. Any request queries or mutations are sent to that URL.

I have updated the demo link so that it now starts with a real query, rather than an empty page. See an example at:

> we don't realize simpler solutions are possible

I'm not seeing a "simpler solution" - your URL is far more complex, and is probably even harder to parse by people that have learned how URLs work. Making non-technical people learn yet another new way to do things isn't helping.

Also, at some point, you're just going to have a complicated interface.

> move complexity to the client

It's not your computer, so you don't get to decide how the client handles the page. If you want your content to be read, try actually sending it.

Note that this is a statement of fact, not an opinion about how I wish computers worked. You do not know what the client is doing when it renders a page (adblocking is a common example), so moving complexity to the client unnecessarily is risky. So far I'm still only seeing a search interface, which is (by definition) purely server side.

> sent to that URL

Ok, I think I get what you're excited about: you're reinventing #respond_to/#respond_with[1], so the URL can be reused for different mime-types.


> rather than an empty page.

(by the way - curl complains about that URL. Something about bracket? It may be some advanced feature of curl? No matter, wget is fine)

    $ wget -O /tmp/page.html ' ... %0A}'
    $ </tmp/page.html sed -ne '/<body>/,/<\/body>/ p' | sed -e '/<script>/,/<\/script>/ d'
It's still an empty page.
Try this:

    curl '' \
      -H 'content-type: application/json' \
      --data-binary '{"query":"{ allFilms(first: 3) {    films {   title, director  } }}"}'

The query param is just for easy sharing online when you build a query.
> The query param is just for easy sharing online when you build a query.

Gee, if only there were a way to encode that data into the URL itself without embedding an almost-JSON document! Someone should invent something like that.

Get outta here, that's nuts.
I strongly disagree with this notion of "simplicity" as being attributable to scarcity of language features. Some of the languages that I felt were the easiest to use had quite a number of language features, but had simple semantics. I think Rich Hickey nailed this in his "Simple Made Easy"[1] talk. Complexity is not about additivity, it's about entanglement.


How do you have a large set of language features with them not interacting?

In Java, serialization and generics interact with practically everything.

In C++, RAII interacts with exceptions, which is the point but isn't exactly pleasant.

> How do you have a large set of language features with them not interacting?

The ability to write interesting programs in a language comes from the interaction between its features. The real problem is features that interact in unpleasant ways, which almost always results from a lack of foresight on the language designer's part.

> In C++, RAII interacts with exceptions, which is the point but isn't exactly pleasant.

The interaction between control effects (of which exceptions are a particular case) and substructural types (of which C++'s RAII is a very broken particular case) is certainly nontrivial [0], but this doesn't mean we should give up on either feature. Control effects make it easier to design and implement extensible programs. Substructural types allow you to safely manipulate ephemeral resources, such as file handles, database connections or GUI objects.


Nice phrasing, unpleasant was the feeling I was going for.

Sometimes I wonder about giving up.

> The interaction between control effects (of which exceptions are a particular case) and substructural types (of which C++'s RAII is a very broken particular case) is certainly nontrivial

A nitpick, but what constitutes an effect is rather arbitrary. An effect in the PFP sense is not an operational definition (other than IO) but a linguistic one. This is why I think that handling errors well, handling mutation well and handling IO well are three completely different problems that are only accidentally bundled into one by PFP for no cognitive/empirical reason other than that the lambda calculus happens to be equally challenged by all three.

There is a fourth effect, which is just as operational as IO (and thus a "truer" effect than errors or mutation) and is often the most interesting, yet it happens to be the one that baffles PFP/LC most: the passage of time. This is why there are usually two ways to sleep in PFP languages, one considered an effect, and the other is not (but happens to be much more operationally disruptive, and thus a stronger "effect").

I was talking only about control effects, not I/O or mutation. Control effects are basically stylized uses of continuations, with less insanity involved.
I understand. I just said that classifying non-linear transfer of control (whether exceptions or proper continuation) as an effect at all is quite arbitrary, and is just a common usage in the PFP world.

Of course, substructural types are also a language concept (that does indeed interact badly with non-local jumps), which is why I said it was a nitpick about the use of the word "effect".

> I just said that classifying non-linear transfer of control (whether exceptions or proper continuation) as an effect at all is quite arbitrary, and is just a common usage in the PFP world.

What exactly makes it arbitrary? It's pretty sensible, even if you don't have substructural types.

> Of course, substructural types are also a language concept (that does indeed interact badly with non-local jumps)

Control effects and substructural types don't interact “badly”. They just require care if you want them together. If you desugar control effects into delimited continuations (that is, normal higher-order functions), it becomes clear as daylight how to correctly handle their interaction with substructural types.

> What exactly makes it arbitrary?

The word effect in the PFP world denotes anything that a language-level function does which may affect other functions and is not an argument or a return parameter. That definition is not valid outside of PFP/LC, because it defines as effects things that are indistinguishable from non-effects in other models of computation. E.g. it calls assignments to certain memory cells "effects" while assignments to other memory cells are non-effects.

Again, my (very minor) point is that the word "effect" as you use it simply denotes a PFP linguistic concept rather than an essential computational thing. The only reason I mention it is that the word "effect" has a connotation of something that's real and measurable beyond the language. That's true for IO and time (computational complexity, which, interestingly, is not generally considered an effect in PFP), but not true for jumps (or continuations) and mutation.

> delimited continuations (that is, normal higher-order functions)

Again, you are assuming PFP nomenclature. Delimited continuations do not require language-level functions at all, and higher-order functions can be defined in terms of delimited continuations just as the opposite is true. Delimited continuations are no more higher-order functions than higher-order functions (or monads, rather) are delimited continuations. PFP is not the only way to look at abstractions and not the only fundamental nomenclature.

Purity can be defined very nicely against the arrows in a compositional semantics of a language and then effects follow as reasons for impurity.

This is absolutely just a choice. It all ends up depending upon how you define equality of arrows. You could probably even get weirder notions of purity if you relax equality to a higher-dimensional one.

So, it's of course arbitrary in the sense that you can just pick whatever semantics you like and then ask whether or not purity makes much sense there. You point out that "passage of time" is an impurity often ignored and this is, of course, true since we're talking (implicitly) about "Haskell purity" which is built off something like an arm-wavey Bi-CCC value semantics.

A much more foundational difference of opinion about purity arises from whether or not you allow termination.

I'd be interested to see a semantics where setting mutable stores is sufficiently ignored by the choice of equality as to be considered a non-effect. I'm not sure what it would look like, though.

I don't agree with pron overall, but he does have a point. Termination and algorithmic complexity do matter, and the techniques Haskell programmers advocate for reasoning about programs have a tendency to sweep these concerns under the rug. This is in part why I've switched to Standard ML, in spite of its annoyances: no purity, higher kinds, first-class existentials or polymorphic recursion, and no mature library ecosystem. But I get a sane cost model for calculating the time complexity of algorithms. And, when I need laziness, I can carefully control how much laziness I want. Doing the converse in Haskell is much harder, and you get no help whatsoever from the type system.

As an example, consider the humble cons list type constructor. Looks like the free monoid, right? Well, wrong. The free monoid is a type constructor of finite sequences, and Haskell lists are potentially infinite. But even if we consider only finite lists, as in Standard ML or Scheme, the problem remains that, while list concatenation is associative, it's much less efficient when used left-associatively than when used right-associatively. The entire point of identifying a monoid structure is that it gives you the freedom to reassociate the binary operation however you want. If using this “freedom” will utterly destroy your program's performance, then you probably won't want to use it much - or at least I know I wouldn't. So, personally, I wouldn't provide a Monoid instance for cons lists. Instead, I would provide a Monoid instance for catenable lists. [0]
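The cost asymmetry is easy to make visible. A rough Python sketch (cons cells modeled as pairs, with an invented allocation counter): appending copies every cell of its left argument, so left-associated concatenation of n singletons allocates O(n²) cells while right-associated concatenation allocates O(n).

```python
# Sketch: cons cells with an allocation counter, to show why
# left-associated concatenation costs far more than right-associated.
allocs = 0

def cons(head, tail):
    global allocs
    allocs += 1
    return (head, tail)

def append(xs, ys):
    # Copies every cell of xs: O(len(xs)).
    return ys if xs is None else cons(xs[0], append(xs[1], ys))

singletons = [cons(i, None) for i in range(100)]

allocs = 0
left = singletons[0]
for xs in singletons[1:]:
    left = append(left, xs)        # ((a ++ b) ++ c) ++ ...
left_cost = allocs                 # 1 + 2 + ... + 99 = 4950

allocs = 0
right = singletons[-1]
for xs in reversed(singletons[:-1]):
    right = append(xs, right)      # a ++ (b ++ (c ++ ...))
right_cost = allocs                # 99 copies of one cell each

print(left_cost, right_cost)       # 4950 99
```

Both orders produce the same list, which is exactly why the monoid laws "allow" the reassociation — they just say nothing about what it costs.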

By the way, this observation was made by Stepanov long ago: “That is the fundamental point: algorithms are defined on algebraic structures.” [1] This is the part Haskellers acknowledge. Stepanov then continues: “It took me another couple of years to realize that you have to extend the notion of structure by adding complexity requirements to regular axioms.” [1]

Of course, none of this justifies pron's suspicion of linguistic models of computation.



> Of course, none of this justifies pron's suspicion of linguistic models of computation.

Of course. :)

But my view stems from the following belief, which finally brings us back to your original point and my original response: there can be no (classical) mathematical justification for what you call linguistic models of computation, because computation is not (classical) math; it does not preserve equality under substitution. The implication I draw from this is not quite the one you may attribute to me (an overall suspicion, or a complete rejection or dismissal of those models), but the recognition that their entire justification is pragmatic rather than mathematical. And that means the very same practical reasons that might make us adopt the (leaky) abstraction of those models might lead us to adopt, or even prefer, other models that are justified by pragmatism alone -- such as empirical results showing a certain "affinity" to human cognition -- even if they don't try to abstract computation as classical math.

> because computation is not (classical) math

Of course, computation is more foundational. It's mathematics that's just applied computation.

> as it does not preserve equality under substitution

You just need to stop using broken models.

> but the recognition that their entire justification is not mathematical but pragmatic

I don't see a distinction. To me, nothing is more pragmatic to use than a reliable mathematical model.

> the (leaky) abstraction of those models

Other than the finiteness of real computers, what else is leaky? Mind you, abstracting over the finiteness of the computer is an idea that even... uh... “less mathematically gifted” languages (such as Java) acknowledge as good.

> such as empirical results showing a certain "affinity" to human cognition

Experience shows that humans are incapable of understanding computation at all. But computation is here to stay, so the best we can do is rise to the challenge. Denying the nature of computation is denying reality itself.

> You just need to stop using broken models.

No computation preserves equality under substitution. If your model assumes that equality, it is a useful, but leaky abstraction.

> Other than the finiteness of real computers, what else is leaky?

The assumption of equality between 2 + 2 and 4, which is true in classical math but false in computation (if 2+2 were equal to 4, then there would be no such thing as computation, whose entire work is to get from 2 + 2 to 4; also, getting from 2+2 to 4 does not imply the ability to get from 4 to 2+2).

> Experience shows that humans are incapable of understanding computation at all.

Experience shows that humans are capable of creating very impressive software (the most impressive exemplars are almost all in C, Java etc., BTW).

> The assumption of equality between 2 + 2 and 4, which is true in classical math but false in computation

Using Lisp syntax, you are wrongly conflating `(+ 2 2)`, which is equal to `4`, with `(quote (+ 2 2))`, which is obviously different from `(quote 4)`. Obviously, a term rewriting approach to computation involves replacing syntax objects with syntactically different ones, but in a pure language, they will semantically denote the same value.


0. This conflation between object and meta language rôles is an eternal source of confusion and pain in Lisp.

1. Types help clarify the distinction. `(+ 2 2)` has type `integer`, but `(quote (+ 2 2))` has type `abstract-syntax-tree`.
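The same object/meta distinction can be sketched in Python, where `ast.parse` plays roughly the role of `quote`: the parsed expression is a syntax object with its own type, while evaluating it yields the number.

```python
import ast

quoted = ast.parse("2 + 2", mode="eval")         # ~ (quote (+ 2 2))
value = eval(compile(quoted, "<expr>", "eval"))  # evaluating it

assert value == 4                          # the denoted value
assert not isinstance(quoted, int)         # the syntax object is not 4
assert isinstance(quoted.body, ast.BinOp)  # it's an AST node
```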

> very impressive software

For its lack of conceptual clarity. And for its bugs. I'm reduced to being a very conservative user of software. I wouldn't dare try any program's most advanced options, for fear of having to deal with complex functionality implemented wrong.

> Using Lisp syntax, you are wrongly conflating `(+ 2 2)`, which is equal to `4`

It is not equal to 4; it computes to 4. Substituting (+ 2 2) for 4 everywhere yields a different computation with a different complexity.

> but in a pure language, they will semantically denote the same value.

The same value means equal in classical math; not in computation. Otherwise (sort '(4 2 3 1)) would be the same as '(1 2 3 4), and if so, what does computation do? We wouldn't need a computer if that were so, and we certainly wouldn't need to power it with so much energy or need to wait long for it to solve the traveling salesman problem.
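One way to make that concrete in Python (the counter is invented instrumentation): instrument the sort so the work it does becomes observable. The literal `[1, 2, 3, 4]` costs nothing to "evaluate", while producing the same value from `[4, 2, 3, 1]` forces the machine to inspect the data.

```python
# Sketch: count how many times the sort inspects an element.
inspections = 0

def observe(x):
    global inspections
    inspections += 1
    return x

result = sorted([4, 2, 3, 1], key=observe)
assert result == [1, 2, 3, 4]
assert inspections == 4  # the key was computed once per element
```

The two expressions denote the same value, but only one of them is a computation, and the computation has a cost.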

> For its lack of conceptual clarity. And for its bugs.

That's a very glass-half-empty view. I for one think that IBM's Watson and self-driving cars are quite the achievements. But even beyond algorithmic achievements and looking at systems, software systems that are successfully (and continuously) maintained for at least a decade or two are quite common. I spent about a decade of my career working on defense software, and that just was what we did.

If you can't distinguish object from meta language, I'm afraid we can't have a reasonable discussion about computing. This distinction is crucial. Go get an education.
If you don't understand what I'm saying -- and that could be entirely my fault -- you can just ask. If you (mistakenly) assume that by 2 + 2 I mean the expression "2 + 2" rather than the computation 2 + 2, why not assume that you may have missed something (which is the actual case) rather than assume that I don't understand the basics (which is not)?

Since I don't wish to discuss this topic further with rude people, but I do wish to explain my point to other readers, I'll note that the entire concept of computational complexity, which is probably the most important concept in all of computer science (and is at the very core of computation itself -- there can be no computation without computational complexity), is predicated on the axiom that in computation 2+2 does not equal 4 (in the sense that they are "the same"), but is computed to be 4. If 2+2 were actually 4, there would be no computational complexity (and so no computation).

As a matter of fact, an entire model, or definition of computation (another is the Turing Machine) called lambda calculus is entirely based on the concept that substitution is not equality in the theory of computation, by defining computation to be the process of substitution (which is what lambda calculus calls reductions). If 4 and 2+2 were the same (as they are in classical math), there would be no process, and the lambda calculus would not have been a model of computation but simply a bunch of trivial (classical) mathematical formulas.

Indeed, some people confuse the LC notation with classical mathematical notation (which it resembles), and mistakenly believe that 2+2 equals 4 in LC in the same sense that it does in math (I assume because the same reductions preserve equality in math). This is wrong (in LC reductions do not preserve "sameness" but induce -- or rather, are -- computation). In their defense, LC does make this fundamental distinction easy to miss by hiding 100% of what it is meant to define -- namely, computation -- in operations that classical mathematicians associate with equality[1], and in itself does not have a useful formulation of complexity[2]. Nevertheless, those people might ignore computational complexity, which is the same as ignoring computation itself, and while they may turn out to be great mathematicians, you would not want them specifying or writing your traffic signal or air-traffic control software.

[1]: Although I believe most notations take care not to separate consecutive reductions with the equal sign but with an arrow or a new line, precisely to signify that reduction is not equality. Also, unlike in math, LC reductions are directional, and some substitutions can't be reversed. In this way, LC does directly represent one property of time: its directionality.

[2]: The challenge complexity poses to LC is great, and only in 2014 was it proven that it is not just a model of computation but one of a "reasonable machine":

Computation is something different. Models like call by push value make this very clear. LC does as well, though, but LC tends to be joined up with an equality semantics which intentionally sweeps computation under the rug for simplicity.

This is a big hairy problem in untyped LC, though, since untyped LC has non-termination and therefore is not strongly normalizing. This is what I mean by saying that taking non-termination seriously is one way to force "time" and "computation" back into models. It means that untyped LC has no decidable beta-equivalence the way that, say, simply typed LC does.

So anyway, you're wrong to say that LC has no notion of complexity—people count reduction steps all the time—but right to say that often this is intentionally ignored to provide simpler value semantics. It's foolish to think of this as equivalent to LC, though.

This paper is interesting. I think what they prove was at least folk belief for a long time, but I've never seen a proof.

> you're wrong to say that LC has no notion of complexity

I didn't say that it has no notion of complexity; I said it "does not have a useful formulation of complexity", as reduction step counts are not very useful in measuring algorithmic complexity, at least not the measures of complexity most algorithms are concerned with.

> It's foolish to think of this as equivalent to LC, though.

Oh, I don't think that at all, which is why I specifically said that some people make the mistake of confusing LC reductions with classical substitutions (equality). They may then think that computation can be equational (false), rather than say it may sometimes be useful to think of computation in equational terms, but that's an abstraction -- namely, a useful lie -- that has a cost, i.e. it is "leaky" (true).

Fair enough.
> A much more foundational difference of opinion about purity arises from whether or not you allow termination.

Termination or non-termination? One of the (many) things that annoy me about PFP is the special treatment of non-termination, which is nothing more than unbounded complexity. In particular, I once read a paper by D.A. Turner about Total Functional Programming that neglected to mention that every program ever created in the universe could be turned into a total function by adding a 2^64 (or any high enough) counter to every recursive loop without changing an iota of its observable behavior; termination therefore cannot offer a shred of added valuable information about program behavior. Defining non-termination as an effect -- as in F* or Koka (is that a Microsoft thing?) -- but an hour's computation as pure is just baffling to me.
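The counter trick being described can be sketched like this (Collatz is my own stand-in example, not Turner's; `collatzFuel` is a hypothetical name): thread a fuel argument through the recursion, and the function becomes structurally total, while with an astronomical fuel value its observable behavior on realistic inputs is unchanged.

```haskell
-- Fuel-bounded recursion: total by construction, since fuel strictly
-- decreases. Returns the number of Collatz steps to reach 1, or
-- Nothing if the fuel runs out first.
collatzFuel :: Integer -> Integer -> Maybe Integer
collatzFuel 0 _ = Nothing   -- fuel exhausted: the case that makes it total
collatzFuel _ 1 = Just 0    -- reached 1: zero further steps needed
collatzFuel fuel n
  | even n    = succ <$> collatzFuel (fuel - 1) (n `div` 2)
  | otherwise = succ <$> collatzFuel (fuel - 1) (3 * n + 1)
```

With fuel 2^64, `collatzFuel (2 ^ 64) 6` behaves exactly like the unbounded version on any input a machine could actually finish, yet the totality checker is satisfied.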

> I'd be interested to see a semantics where setting mutable stores is sufficiently ignored by the choice of equality as to be considered a non-effect. I'm not sure what it would look like, though.

I think both transactions and monotonic data (CRDTs), where mutations are idempotent, are a step in that direction.

Non-termination, my bad!

And of course that's true! Trivially so, though, in that we could do the same by picking the counter to be 10 instead of 2^1000, since we don't appear to care about changing the meaning of the program.

If we do, then we have to consider whether we want our equality to distinguish terminating and non-terminating programs. If it does distinguish, then non-terminating ones are impure.

Now, what I think you're really asking for is a blurry edge where we consider equality modulo "reasonable finite observation", in which something different might arise.

But in this case you need partial information, so we're headed right at CRDTs, propagators, LVars, and all that jazz. I'm not for a single second going to claim that there aren't interesting semantics out there.

Although I will say that CRDTs have really nice value semantics with partial information. I think it's a lot nicer than the operational/combining model.

> If we do, then we have to consider whether we want our equality to distinguish terminating and non-terminating programs.

But this is what bugs me. As someone working on algorithms (and does not care as much about semantics and abstractions), the algorithm's correctness is only slightly more important than its complexity. While there are (pragmatic) reasons to care about proving partial correctness more than total correctness (or prioritizing safety over liveness in algorithmists' terms), it seems funny to me to almost completely sweep complexity -- the mother of all effects, and the one at the very core of computation -- under the rug. Speaking about total functions does us no favors: there is zero difference between a program that never terminates, and one that terminates one nanosecond "after" the end of the physical universe. Semantic proof of termination, then, cannot give us any more useful information than no such proof. Just restricting our computational model from TM to total-FP doesn't restrict it in any useful way at all! Moreover, in practical terms, there is also almost no difference (for nearly all programs) between a program that never terminates and one that terminates after a year.

Again, I fully understand that there are pragmatic reasons to do that (concentrate on safety rather than liveness), but pretending that there is a theoretical justification to ignore complexity -- possibly the most important concept in computation -- in the name of "mathematics" (rather than pragmatism) just boggles my mind. The entire notion of purity is the leakiest of all abstractions (hyperbole; there are other abstractions just as leaky or possibly leakier). But we've swayed waaaay off course of this discussion (entirely my fault), and I'm just venting :)

I don't think at all that "value semantics" without any mention of complexity is an end in and of itself. Any sensible programmer will either (a) intentionally decide that performance is minimally important at the moment (and hopefully later benchmark) or (b) concern themselves also with a semantic model which admits a cost model.

Or, to unpack that last statement, simulate the machine instructions.

I'm never one to argue that a single semantic model should rule them all. Things are wonderful when multiple semantic models can be used in tandem.

But while I'd like to argue for the value of cost models, at this point I'd like to also fight for the value-based ones.

Totality is important not because it has a practical effect. I vehemently agree with how you are arguing here to that end.

It's instead important because in formal systems which ignore it you completely lose the notion of time. Inclusion of non-termination and handling for it admits that there is at least one way in which we are absolutely unjustified in ignoring the passage of time: if we accidentally write something that literally will never finish.

It is absolutely a shallow way of viewing things. You're absolutely right to say that practical termination is more important than black-and-white non-termination.

But that's why it's brought up. It's a criticism of certain value-based models: you guys can't even talk about termination!

And then it's also brought up because the naive way of adding it to a theorem prover makes your logic degenerate.

> And then it's also brought up because the naive way of adding it to a theorem prover makes your logic degenerate.

Well, I'd argue that disallowing non-termination in your logic doesn't help in the least[1], so you may as well allow it. :) But we already discussed in the past (I think) the equivalence classes of value-based models, and I think we're in general agreement (more or less).

[1]: There are still infinitely many different ways to satisfy the type a -> a (loop once and return x, loop twice, etc. all of them total functions), and allowing (and equating) all of them loses the notion of time just as completely as disallowing just one of them, their limit (I see no justification for assuming a "discontinuity" at the limit).

It's not the type (a -> a) which is troubling, it's the type (forall a . (a -> a) -> a) which requires infinite looping. It's troubling precisely because the first type isn't.

Oh, I see. It's an element of the empty set, which is indeed very troubling for constructive logic. Well, they're both troubling in different ways. Your example is troubling from a pure mathematical soundness perspective, and mine is from the "physical"[1] applicability of the model.

[1]: The relationship between classical math and computation is in some ways like that of math and physics, except that physics requires empirical corroboration, while computation is a kind of a new "physical" math that incorporates time. In either case the result can be the same: the math could be sound but useless. In physics it may contradict observation; in computation it can allow unbounded (even if not infinite) complexity.

It causes trouble for non-constructive logics, too. Any logic with an identity principle will be made inconsistent with the inclusion of `fix : forall a . (a -> a) -> a`.
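Concretely (a standard Haskell sketch, not from the thread; `fix'`, `absurd'`, and `factorial` are my own names): general recursion inhabits that troubling type, and reading types as propositions, its inhabitant "proves" anything by never producing a value.

```haskell
-- The fixed-point combinator inhabits forall a. (a -> a) -> a.
fix' :: (a -> a) -> a
fix' f = f (fix' f)

-- As a "proof" of any proposition a: well-typed, but evaluating it
-- diverges rather than yielding a witness.
absurd' :: a
absurd' = fix' id

-- Yet the very same combinator is what makes ordinary recursion
-- expressible, which is why languages keep it despite the logic cost:
factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n <= 0 then 1 else n * rec (n - 1))
```

So the computational interpretation and the logical interpretation pull in opposite directions: `fix'` is indispensable for the former and inconsistent for the latter.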

By yours are you referring to `forall a . a -> a`? I don't see how that principle is troubling at all.

It is troubling in the same way, but more subtly, and it has to do with the interpretation of the logic rather than the logic itself. The problem with (a -> a) -> a is that you can prove any a. Now, this is indeed a problem if you're trying to use types to prove mathematical theorems (one interpretation). But what if you're using types to prove program correctness (second interpretation, this one computational)? Why is it troubling? Well, it's troubling because you may believe you've constructed a program that produces some result of type x, but really you haven't, because somewhere along the way, you've used a (a->a)->a function (or forall a b. a->b). But the thing is that from one interpretation you really have succeeded. Your type is populated, but it is populated with a nonterminating function. Why is that a problem? It's a problem because it may cause me to believe that I have a program that does something, while in reality that program is useless.

Now back to my issue. Suppose that somewhere along the way you rely not on a non-terminating function but on a high-complexity function (e.g. a function that factors integers). You may then believe you've constructed a program, but your program is not only just as useless as the non-terminating one, but useless in the same way. A program that takes 10000 years is much more equivalent to a non-terminating program than to one that completes in one second. Your types are still populated with "false" elements, and so your logic, while now useful for proving mathematical theorems, may still prove "false" programs, in the sense of useless programs.

HOWEVER, what I said has a practical flaw, which still makes excluding non-termination (while allowing high complexity) useful. And that is that it's much easier for human beings to accidentally create programs with infinite complexity than to accidentally create programs with a finite but large complexity. I don't know if we have an answer as to why exactly that is so. It seems that there are many cases of "favored" complexity classes, and why that is so is an open problem. Scott Aaronson lists the following as an open question[1]:

The polynomial/exponential distinction is open to obvious objections: an algorithm that took 1.00000001^n steps would be much faster in practice than an algorithm that took n^10000 steps! But empirically, polynomial-time turned out to correspond to “efficient in practice,” and exponential-time to “inefficient in practice,” so often that complexity theorists became comfortable making the identification... How can we explain the empirical facts on which complexity theory relies: for example, that we rarely see n^10000 or 1.0000001^n algorithms, or that the computational problems humans care about tend to organize themselves into a relatively-small number of equivalence classes?

Nevertheless, it is important to notice that what makes non-termination-exclusion useful in practice is an empirical rather than a mathematical property (at least as far as we know). Which is my main (and constant) point: that computation and software are not quite mathematical, but in many ways resemble physics, and so relying on empirical (even cognitive) evidence can be just as useful as relying on math. The two should work in tandem. It is impossible to reason about computation (more precisely, software) with math alone; there are just too many empirical phenomena in computation (and software in particular) for that to make sense. I feel (and that may be a very biased, wrong observation) that the software verification people do just that, while the PLT people (and by that I don't mean someone like Matthias Felleisen, but mostly PFP and type theory people) do not.

How can that look in practice? Well, observing (empirically) that the complexity spectrum is only sparsely populated with programs humans write (and that's true not only for instruction counts but also of IO operations, cache-misses etc.), perhaps we can create an inferrable type system that keeps track of complexity? I know that integer systems with addition only are inferrable, but I'm not sure about multiplication (I don't think so, and I know division certainly isn't). Perhaps we can have a "complexity arithmetics" that is inferrable, and allows "useful rough multiplication" even if not exact multiplication? A Google search came up with some work in that direction: (I only skimmed it).


Most people consider garbage collection to be a net win in terms of simplicity. Have you thought about why? Not every feature interacts with other features in complicated and error prone ways.

I think the politest description I can provide of the experience of tracking down GC bugs is that they interacted with other features in complicated and error prone ways.

But was that code in the GC implementation, or your program? Because if it's in the implementation, then that is a different matter. We have to distinguish between simplicity of implementation vs simplicity provided to the user. I agree that if it is not implemented correctly, it can be a net loss in simplicity.

It was code in my program.

You mean “ease of use”, not “simplicity”. Simplicity is the lack of (Kolmogorov) complexity.

That's why I said large set. I haven't thought about garbage collection enough to have any insight on it.
I believe that garbage collection is a net win because it allows software to be composed in simple ways when it would otherwise be difficult to compose.

I can pass data from one part of the program to another without coordinating both parts to respect the same memory management convention, and without having to pass that information from one place to another. This makes it easier to compose software, and in particular to reuse software like libraries (that frequently end up as layers between one component and another). For a concrete example, in a Java program I can simply publish an event into a Guava EventBus [1] without worrying where it will end up at the time I write that code. There's no real risk that I'll end up with a memory leak. I can connect two things together that weren't designed to be used together, and I can do it while inserting intermediate layers that transform, copy, record, measure, that data.

Garbage collection significantly reduces the amount of coordination necessary between unrelated parts of the code base, thereby improving code reuse. This, I would claim, is the less commonly recognized win, beyond the more commonly recognized wins from eliminating classes of obvious mistakes. EventBus is just one random example that involves plugging things together - the same effect is present all over Java libraries, from logging frameworks to collections to concurrent data structures.


Generics solve an occurrence of too much entanglement. That is, it solves entanglement of an abstract "shape" of computation with a specific set of type definitions. Generics actually allow you to not think about an additional dimension of your program (i.e. the exact types a computation or data type can be used with).

Haskell programmers famously point this out with the observation that a generic fmap is safer than one that has knowledge of the concrete types it uses. The type signature of fmap is this:

fmap :: Functor f => (a -> b) -> f a -> f b

In practice, what this means is that you can be assured that your fmap implementation can only apply the passed function over the value(s) wrapped in the functor, because of the fact that it cannot have visibility into what types it will operate on.

In golang, because of a lack of generics, you can write a well-typed fmap function, but it will inherently be coupled with the type of the slice it maps over. It also means the author of such a function has knowledge of all the properties involved in the argument and return type of the function passed, which means the writer of an fmap can do all kinds of things with that data that you have no assurances over.
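To keep the contrast in one language, here's a hedged Haskell-only sketch (`mapList` and `mapInts` are hypothetical names): the parametric version cannot inspect the elements it maps over, while a monomorphic version of the kind a generics-free language forces on you is free to special-case concrete values.

```haskell
-- Fully parametric: the implementation cannot inspect or fabricate
-- elements of type a or b; all it can do is apply f to what it's given.
mapList :: (a -> b) -> [a] -> [b]
mapList _ []       = []
mapList f (x : xs) = f x : mapList f xs

-- Monomorphic: the author sees the concrete type and can do arbitrary
-- (well-typed but surprising) things with the values.
mapInts :: (Int -> Int) -> [Int] -> [Int]
mapInts f = map (\x -> if x == 42 then 0 else f x)
```

Both type-check, but only the parametric signature gives the caller a guarantee about what the implementation can possibly do.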

Exactly. Parametricity is the killer feature of statically typed functional languages. This is why it saddens me when Haskell and OCaml add features that weaken parametricity, like GADTs and type families.
Can you elaborate on your last sentence?
Sorry, for some reason the “reply” link didn't appear below your post until after I had written my reply to Peaker. My reply to you is exactly the same:

How do GADTs or type families weaken parametricity?
Without either GADTs or type families, two types `Foo` and `Bar` with mappings `fw :: Foo -> Bar` and `bw :: Bar -> Foo` that compose in both directions to the identity, are “effectively indistinguishable” from one another in a precise sense. If you have a definition `qux :: T Foo`, for any type function `T` not containing abstract type constructors, you can construct `justAsQuxxy :: T Bar` by applying `fw` and `bw` in the right places.

With either GADTs or type families, this nice property is lost.
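A minimal sketch of the property being lost (`Foo`, `Bar`, `IsFoo`, and `discriminate` are my own illustrative names): with GADTs one can write a type whose inhabitants tell `Foo` and `Bar` apart, even though the two are isomorphic.

```haskell
{-# LANGUAGE GADTs #-}

newtype Foo = Foo Int deriving (Eq, Show)
newtype Bar = Bar Int deriving (Eq, Show)

-- fw and bw compose to the identity in both directions:
fw :: Foo -> Bar
fw (Foo n) = Bar n

bw :: Bar -> Foo
bw (Bar n) = Foo n

-- Without GADTs, no type function T could distinguish Foo from Bar up
-- to this isomorphism. With GADTs, we can index a type by exactly one:
data IsFoo a where
  ItIsFoo :: IsFoo Foo

-- IsFoo Foo is inhabited while IsFoo Bar is empty, so no amount of
-- applying fw and bw "in the right places" transports a value across.
discriminate :: IsFoo a -> String
discriminate ItIsFoo = "definitely Foo"
```

In other words, type constructors stop being functorial over isomorphisms once they can pattern-match on their index.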

This nice property is not part of 'parametricity' as I know it, though.
Are you saying something like "All type constructors are functorial Hask^n x Hask^op^m -> Hask"?
It's something weaker. Consider the groupoid of Haskell types and isomorphisms. Without GADTs and type families, all type constructors of kind `* -> *` are endofunctors on this groupoid.

Note 1: And there are higher-kinded analogues, but I hope you get the idea from this.

Note 2: There are also exceptions, like `IORef` and friends.

However, GADTs and TFs are completely opt-in, so it seems a bit of a stretch to construe this as a generally bad thing. IME it's not as if library authors are arbitrarily (i.e. for no good reason) using GADTs or TFs instead of plain old type parameters in their APIs.
Reflection, downcasts and assigning `null` to pointers are completely opt-in in Java too.

With respect to type families, I'm probably being a little bit unfair. Personally, I don't have much against associated type families. (Although I think Rust handles them much more gracefully than GHC.) But very important libraries in the GHC ecosystem like vector and lens make extensive use of free-floating type families, which I find... ugh... I don't want to get angry.

> Reflection, downcasts and assigning `null` to pointers are completely opt-in in Java too.

No, they're not -- not in the same sense, at least. A GADT/TypeFamily is going to be visible in the API. None of the things you mentioned are visible in the API.

There's a HUGE difference.

> A GADT/TypeFamily is going to be visible in the API.

Only works if you're never going to make abstract types. Which I guess is technically true in Haskell - the most you can do is hide the constructors of a concrete type. But the ability to make abstract types is very useful.

Don't get me wrong, I love Haskell. It's precisely because I love Haskell that I hate it when they add features that make it as hard to reason about as C++. (Yes, there I said it - type families are morally C++ template specialization.)

If a type is abstract then the rest is up to the implementation of functions that operate on the data type -- and that could be hiding all kinds of nastiness like unsafePerformIO and the like. Yet, we usually don't care about that because it's an implementation detail.

Am I missing some way to "abuse" GADTs/TFs to violate the abstraction boundary or something like that? (I seriously can't see what you think the problem is here. I mean, you can equally well abuse unsafeCoerce/unsafePerformIO to do all kinds of weird things to violate parametricity, so I don't see why GADTs/TFs should be singled out.)
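For reference, here's the classic (well-known, not from this thread) way `unsafePerformIO` hides mutation behind a pure-looking signature -- a sketch assuming GHC without optimizations, so the two calls in the example are not shared:

```haskell
import Data.IORef (IORef, atomicModifyIORef', newIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A hidden top-level mutable cell; NOINLINE prevents duplication.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- The type claims purity, but each call observes and bumps the hidden
-- state, so equal arguments need not give equal results.
next :: Int -> Int
next _ = unsafePerformIO (atomicModifyIORef' counter (\n -> (n + 1, n)))
{-# NOINLINE next #-}
```

This is exactly why "it's an implementation detail behind an abstract type" is already taken on faith in Haskell, GADTs or no GADTs.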

Isn't that exactly what Rob Pike is saying with the vector space analogy?
I think that's what he's appealing to, but I have a hard time reconciling that sentiment with many design characteristics of Go. Go's type system for instance... I don't think he fully grasps "what he's trying to solve" by having a static type system in golang, when the language has things like unsafe casting, null pointers, a lack of parametric polymorphism, etc. As a programmer tool, it's hugely weakened by these design decisions... there are large classes of properties about code that are simply impossible (or are much more complicated) to encode using types in golang. And yet in their literature on some of these subjects, they make an appeal to simplicity [1]. I think there's a disconnect here between theory and practice.


> Complexity is not about additivity, it's about entanglement.

This. And nothing reflects entanglement better than a formal semantics. English (or any other natural language) always lets you sweep it under the rug. The only objective measure of simplicity is the size of a formal semantics.

I expand on this here:

> The only objective measure of simplicity is the size of a formal semantics.

If we accept that, then simplicity alone is not a desirable goal. Something may well be formally simple but at the same time incompatible with human cognition. Indeed, that may not be objective, but since when do we value things only by objective measures? That the only objective measure of simplicity may be the size of formal semantics does not mean that it is the most useful measure of simplicity (if we wish to view simplicity as possessing a positive value that implies ease of understanding).

>If we accept that, then simplicity alone is not a desirable goal

Or maybe simplicity in terms of the formal semantics is a desirable goal, but not the simplicity of the language alone.

At the end of the day, what determines mental load is the complexity of solving a particular problem using a particular language.

I don't think this simplicity follows from the simplicity of the language itself. There may not even be the slightest correlation.

In general, the simpler the language, the more complex the code to implement the solution in that language, and so the harder it is to understand the code. But the more complex the language, the simpler (and easier to understand) the code, but the language itself is harder to understand. It's almost like you want the language to have the square root of the complexity of the problem.

(This is in general. The big way around this is to pick a language that is well-suited for your particular problem.)

If you want an alternative explanation for simplicity, I'd say simplicity implies flexibility.

Designing a simple implementation of something means that it is as close as possible to the essence of what you've designed it for, and by doing so you've made it more universal, and therefore more flexible/adaptable.

This would work if compilers were written in simple languages, and if target languages themselves were simple. In other words, in a parallel universe.
> If we accept that, then simplicity alone is not a desirable goal.

Agreed. Otherwise, Forth and Scheme would've taken over the world.

> Something may well be formally simple but at the same time incompatible with human cognition.

Do you have a concrete example?

> (if we wish to view simplicity as possessing a positive value that implies ease of understanding).

I don't particularly fetishize simplicity. What I want is the least effort path to writing correct programs. The following features help:

0. Simplicity - smaller formal systems have less room for nasty surprises.

1. Using the right tool for resource management - sometimes it's a garbage collector, sometimes it's substructural types.

2. Typeful programming - it's an invaluable tool for navigating the logical structure of the problem domain.

> Do you have a concrete example?

Off the top of my head, and since we're talking about computation, I'd say SK combinator calculus. Or Church numerals.

> Typeful programming

It is, but it can also be a hindrance. Finding the sweet spot is a matter for empirical study.

> I'd say SK combinator calculus. Or Church numerals.

They're a PITA to use, but not because they're hard to understand.

But for writing actual programs, the complexity of use matters as much as the complexity of understanding.

(I recognize that this doesn't invalidate the point you are trying to make in the parent post. They aren't incompatible with human understanding. They're incompatible with writing programs in a reasonable amount of time, though.)

> > Something may well be formally simple but at the same time incompatible with human cognition.

> Do you have a concrete example?


So I sort of agree with you here, but only as a partial converse:

> If all the formal semantic models for a language are unwieldy then you've probably got a non-simple language.

Now, "simplicity" is a mental construct, a language UX construct. To handle this, I think of "unwieldy" as a bit of a technical term. What does it mean to be unwieldy? It means that there is significant non-ignorable complexity.

Significant here must be defined almost probabilistically, too. If there is significant complexity which is ignorable across 99/100 real-world uses of a language then it really should win some significant points.

Ignorable complexity is also an important concept. It asks you to take empirical complexity measures (you mention Kolmogorov complexity; sure why not?) and temper them against the risk of using a significantly simpler "stand-in" semantic model. I accept that the stand-in model will fail to capture what we care about sometimes, but if it does so with an acceptable risk profile then I, pretty much definitionally, don't care.

Now that I've weakened your idea so much, it's clear how to slip in justifications for really terrible languages. Imagine one with a heinous semantics but a "tolerable" companion model which works "most of the time".

From this the obvious counterpoint is that "most of the time" isn't good enough for (a) large projects (b) tricky problems and (c) long support timelines. Small probabilities grow intolerable with increased exposure.


But after all this, we're at an interesting place because we can now talk about real languages as being things with potentially many formally or informally compatible formal or informal semantic models. We can talk about how complexity arises when too few of these models are sufficiently simple. We can also talk about whether or not any of these models are human-intelligible and measure their complexity against that metric instead of something more alien like raw Kolmogorov complexity.

So here's what I'd like to say:

> Languages which hide intolerable complexity in their semantics behind surface simplicity are probably bad long-term investments.


> Languages which have many "workably compatible" semantic models, each of which being human-intelligible, are vastly easier to use since you can pick and choose your mode of analysis with confidence.


> Value-centric semantic models (those ones with that nasty idea of "purity" or whatever) are really great for reasoning and scale very well.

In particular, I'm personally quite happy to reject the assertion made elsewhere that value-centric semantics are not very human intelligible. On the other hand

> Simple operational semantic models are also pretty easy to understand

I just fear that they scale less well.

> Now, "simplicity" is a mental construct, a language UX construct.

My take on “simplicity” is very computational. To me, a programming language is a system of rules of inference, whose judgments are of the form “program is well-formed” (which covers syntax and type checking) and “program does this at runtime” (a reduction relation, a predicate transformer semantics, or whatever fits your language's dynamics best). Then, simplicity is just some measure of the language's size as a collection of rules of inference. Also:

0. Undecidable rules of inference (e.g., type reconstruction for a Curry-style System F-omega) are considered cheating. Undefined behavior (e.g., C and C++) is also considered cheating. Cheating is penalized by considering the entire language infinitely complex.

1. Languages (e.g., ML's module system) are allowed to be defined by elaboration into other languages (e.g., System F-omega). Elaboration into a language that cheats is considered cheating, though.

> To handle this, I think of "unwieldy" as a bit of a technical term. What does it mean to be unwieldy? It means that there is significant non-ignorable complexity.

I don't see any complexity as ignorable at all. I just see some complexity as worth the price - but you, the programmer, need to be aware that you're paying a price. For instance, the ease with which one can reason about Haskell programs (without the totally crazy GHC extensions) justifies the increased complexity w.r.t., say, Scheme.

> Significant here must be defined almost probabilistically, too. If there is significant complexity which is ignorable across 99/100 real-world uses of a language then it really should win some significant points.

This is ease of use, which is subject to statistical analysis; not simplicity, which is not.

I don't want to deny that those "quantitative" measures exist. I want to cast doubt that they're the dominant mechanism for modeling how real people think when they're accomplishing a task in a formal system.
> nothing reflects entanglement better than a formal semantics

A formal semantics is just a way to translate from one formalism to another.

It's rather obvious that choosing the target formalism determines how simple the language will appear; when you talk about "formal semantics" you should specify which one: operational? denotational? axiomatic?

Strictly speaking, a compiler or an interpreter represents a formal semantics for a language: operational semantics rules are often very similar to the code of an AST interpreter, for example.

One could interpret your statement to mean that the smaller the compiler the simpler the language, which means that assembly language was the simplest language all along!

For example, in your reddit post you claim that := is problematic, and indeed its semantics is tricky and often trips up beginner (and even experienced!) programmers. However, the := semantics is not actually that complicated -- "define every variable that isn't defined inside the current scope, otherwise assign to them" -- and the errors stem from the fact that people assume that the scope lookup for := is recursive, which would arguably result in a more complicated formal semantics.

> A formal semantics is just a way to translate from one formalism to another.

Of course, we need to reach a gentleman's agreement regarding which formalism is a good “foundation” for defining everything else. My personal preference would be to define all other formal systems in terms of rules of inference.

> It's rather obvious that choosing the target formalism determines how simple the language will appear, when you talk about "formal semantics" you should specify "which one": operational? denotational? axiomatic?

I am fine with any, as long as the same choice is made for all languages being compared. What ultimately interests me is proving a type safety theorem, that is, a precise sense in which “well typed programs don't go wrong”, so perhaps this makes a structural operational semantics more appropriate than the other choices.

> Stricly speaking a compiler or an interpreter represents a formal semantics for a language: operational semanthics rules are often very very similar to the code of an AST interpreter, for example.

> One could interpret your statement to mean that the smaller the compiler the simpler the language, which means that assembly language was the simplest language all along!

Sure, but the target languages used by most compilers are often themselves very complex. Which means a realistic compiler or interpreter most likely won't be a good benchmark for semantic simplicity.

>Of course, we need to reach a gentleman's agreement regarding which formalism is a good “foundation” for defining everything else. My personal preference would be to define all other formal systems in terms of rules of inference.

If you are interested in defining "low cognitive load" that's a poor choice, in my opinion.

>I am fine with any, as long as the same choice is made for all languages being compared. What ultimately interests me is proving a type safety theorem, that is, a precise sense in which “well typed programs don't go wrong”, so perhaps this makes a structural operational semantics more appropriate than the other choices.

I'm not aware of any such thing; the kinds of formal semantics that academics prefer deal very poorly with the realities of finite execution speed and memory, and the kinds that practitioners use (which usually aren't referred to as "formal semantics" but rather "what does this compile to") deal very poorly with output correctness.

However, this has little to do with cognitive load: even if such a formal semantics existed, it wouldn't necessarily be easy for a human mind.

> Sure, but the target languages used by most compilers are often themselves very complex. Which means a realistic compiler or interpreter most likely won't be a good benchmark for semantic simplicity.

If you agree that formal semantics is just a translation from one formalism to another, you can't claim that a formalism A is semantically more complex than formalism B without picking a formalism C as a reference point.

> If you are interested in defining "low cognitive load" that's a poor choice, in my opinion.

I'm interested in “low cognitive load without sacrificing technical precision.” It's a much harder goal to achieve than “low cognitive load if we hand-wave the tricky details.”

> However, this has little to do with cognitive load: even if such a formal semantics existed, it wouldn't necessarily be easy for a human mind.

Which is exactly my point. I only consider a language simple if its formal description is simple.

> If you agree that formal semantics is just a translation from one formalism to another, you can't claim that a formalism A is semantically more complex than formalism B without picking a formalism C as a reference point.

No disagreement here. I even stated my personal choice of C.

> I'm interested in “low cognitive load without sacrificing technical precision.”

You don't seem to be interested in low cognitive load at all, otherwise:

> No disagreement here. I even stated my personal choice of C.

you would have attempted to motivate your choice of reference point in terms of cognitive load. Even if inductive mathematics were the way the human mind worked (which it isn't), it's very different from CPUs, and there is a cognitive load (and semantic distance) in going from mathematics to CPUs.

> Even if inductive mathematics were the way the human mind worked (which it isn't)

Even if it isn't how the human mind works, it's how computing itself works. Would you take seriously a physicist who denies gravity? I wouldn't take seriously a computer scientist who denies structural induction.

> it's how computing itself works

but it's not the whole story when it comes to computers.

> For example, in your reddit post you claim that := is problematic, and indeed its semantics is tricky and often trips up beginners (and even experienced programmers). However, := semantics is not actually that complicated ("define every variable that isn't already defined in the current scope; otherwise, assign to it"), and the errors stem from the fact that people assume that the scope lookup for := is recursive, which would arguably result in a more complicated formal semantics.

Clearer examples of unnecessary complexity in Go would be the function-scoped nature of "defer" (implicit mutable state is much more complicated than block scoping) and the inconsistent behavior of "nil" with the built-in collections (reading a key from a nil map returns the zero value, but indexing a nil slice panics).
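A small sketch of the two behaviors being criticized, assuming standard Go semantics (function-scoped defer, zero-value reads from a nil map, and a panic on indexing a nil slice):

```go
package main

import "fmt"

// nilReads shows the nil-collection asymmetry: a read from a nil
// map yields the zero value, while indexing a nil slice panics.
func nilReads() (mapVal string, slicePanicked bool) {
	var m map[int]string // nil map
	mapVal = m[42]       // ok: returns "" (the zero value)

	defer func() {
		if recover() != nil {
			slicePanicked = true
		}
	}()
	var s []string // nil slice
	_ = s[0]       // panics: index out of range
	return
}

// deferIsFunctionScoped shows that defers queued inside a loop body
// run only when the FUNCTION returns, not when the block ends.
func deferIsFunctionScoped() (order []string) {
	for i := 0; i < 2; i++ {
		defer func(n int) {
			order = append(order, fmt.Sprintf("defer %d", n))
		}(i)
		order = append(order, fmt.Sprintf("loop %d", i))
	}
	return // the two defers run here, in LIFO order
}

func main() {
	v, panicked := nilReads()
	fmt.Printf("nil map read = %q, nil slice index panicked = %v\n", v, panicked)
	fmt.Println(deferIsFunctionScoped())
}
```

Running this prints `loop 0`, `loop 1`, then `defer 1`, `defer 0`, which is exactly the implicit mutable state the comment objects to: the loop finishes long before its "cleanup" runs.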

Nov 10, 2015 · 2 points, 0 comments · submitted by colinprince
> Programming without pointer indirection seems like cycling without legs

A study of functional programming will demonstrate this to be untrue. The paragraph you quoted from the paper elaborates on specifically why references are complicated and low level: "introducing the concept of reference ... immediately gives rise in a high level language to one of the most notorious confusions of machine code, namely that between an address and its contents ... They cannot be input as data, and they cannot be output as results. If either data or references to data have to be stored on files or backing stores, the problems are immense". Perhaps one reason why people love working in JSON so much is because it only encodes values.

> indeed high level languages often move the other way, abandoning value types altogether

FP languages strongly emphasize programming with values. Rich Hickey, creator of the Clojure programming language, gave an amazing talk, "Simple Made Easy", which is probably the best place to start diving into this:

FP languages are almost exclusively pointer heavy; without pointers they could not do structure sharing, which is what allows persistent data structures with efficient operations.
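As a minimal sketch of what structure sharing means here (a hypothetical cons-list, not any particular FP runtime): prepending allocates a single node that points at the existing list, so the old and new versions share every remaining cell without copying or mutating anything.

```go
package main

import "fmt"

// List is an illustrative persistent (immutable) singly linked list.
// The Tail pointer is the indirection that makes sharing possible.
type List struct {
	Head int
	Tail *List
}

// Cons "prepends" by allocating one node; the rest of the list
// is shared with every older version.
func Cons(v int, tail *List) *List {
	return &List{Head: v, Tail: tail}
}

// Len walks the chain of shared cells.
func (l *List) Len() int {
	n := 0
	for ; l != nil; l = l.Tail {
		n++
	}
	return n
}

func main() {
	xs := Cons(2, Cons(3, nil)) // xs = [2 3]
	ys := Cons(1, xs)           // ys = [1 2 3], sharing all of xs
	fmt.Println(xs.Len(), ys.Len(), ys.Tail == xs)
}
```

The pointer comparison `ys.Tail == xs` is true: both versions are live and valid, and the "new" list cost one allocation regardless of the old list's length.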

FP languages also rely heavily on partial pattern matching, type classes with vtable-style indirection and even GC for cycle collection. Closures in FP languages are boxed, too, almost without exception.

In Haskell, even integers are boxed by default. You don't observe many of the problems of references due to their immutability, but this isn't to say they're not there. The "value-heavy" language closest to FP I know of is Rust, and many functional idioms are plain irritating to use because of it.

Maybe Clojure is different, but I'd be surprised. Perhaps you were in disagreement about the use of the word "value" in "value type", which I meant in the D or Rust sense of a stack-allocated, indirection-free type.

"even integers are boxed by default"

Um... why? For example, 2 is... 2. 2 is not 3. If I "box" 2, can I then make it 3?

Some very old FORTRAN implementations actually allowed this:

    subroutine x(j)
    j = 3
    end

    do 1 i = 1,2
    1 call x(4)

(sorry... it's been years). Note that the reference is immutable (j refers to a single location) -- but the value is boxed (the constant 4 is put into a memory location, and the subroutine overwrites that location with 3). And this is why this can even work.


Your code is quite hard to read, especially as I don't know Fortran. Can I have it with indentation (indent each line 2+ spaces to make a code block)?


Integers are boxed because Haskell's semantics almost exclusively deal with boxed types (e.g. you can't pass unboxed types to most functions). The optimizer might specialize some functions for unboxed types, but this is a transparent optimization and does not affect semantics.

You're talking about implementation now. The text you quoted said "references' introduction into high level languages", not "references' use in the implementation of high level languages". The quote was about languages' conceptual models, not their underlying implementation forced by a particular type of CPU that code written in the language happens to be running on. A language can present value semantics while doing structural sharing using references underneath, as Clojure's persistent data types do.
Aug 22, 2015 · frou_dh on Gopher Tricks

    "map of int to string"
     map   [int]   string
    "map of state to map of int to state"
     map   [state]   map   [int]   state
In Rich Hickey's terminology, it seems people reject that it is simple (non-interwoven) because it does not strike them as easy (familiar / close to hand).

( ...Any excuse to link to this excellent presentation: )

I think you mix the meanings of simple and easy here. Simplicity is an absolute metric and describes the number of dependencies a thing has, while ease is a relative metric describing your understanding of said thing.

For example, a singleton is easy to learn and easy to use, but since every function using it adds a hidden dependency, it quickly grows in complexity to the point where it's impossible to reason about it without forgetting something.

On the other hand, a Promise is simple as it depends on nothing but a producer and a consumer, no matter how much you compose them. Yet I've seen many experienced developers struggle to learn how to use them as they're not easy to understand at first.
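To make the composition point concrete, here is a rough sketch of a promise-like value modeled in Go with a buffered channel (illustrative only; `promise` and `then` are made-up names, not a real library API): each value depends on nothing but one producer and one consumer, no matter how many transforms you chain.

```go
package main

import "fmt"

// promise runs a producer asynchronously and returns a channel
// that will eventually carry its single result.
func promise(producer func() int) <-chan int {
	ch := make(chan int, 1)
	go func() { ch <- producer() }()
	return ch
}

// then composes a promise with a transform, yielding a new promise.
// Neither side knows anything about the other beyond the value.
func then(p <-chan int, f func(int) int) <-chan int {
	return promise(func() int { return f(<-p) })
}

func main() {
	p := promise(func() int { return 21 })
	q := then(p, func(v int) int { return v * 2 })
	fmt.Println(<-q) // blocks until the whole chain resolves
}
```

The chain stays simple in Hickey's sense: composing `then` any number of times never interleaves the stages' internals, unlike a singleton, where each additional caller adds a hidden dependency.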

This is somewhat related to meta ignorance. From my own experience I've seen a tendency in novice programmers to stick with things which are both easy to learn and use. Their projects go well initially but they grow less and less productive over time as complexity creeps in from the composition of all these easy to use things.

I've always said experience in our industry is knowing what not to use in order to stay productive in the long run.

Here's a link to Rich Hickey explaining it in depth:

Speaking of meta, I absolutely loathe how the basic distinction between simplicity and ease of use has since become a meme so persistently associated with Rich Hickey. There is nothing I can really do about it, but it nonetheless annoys me to no end.
I myself learned it from Rich in the very talk I linked to a few years ago and I'm the first to admit I didn't make that distinction beforehand. I've met more developers unaware of the distinction than otherwise, which is why I'm curious as to why you think it has become a meme?

Also note that English isn't my first language (I'm French Canadian), and even here in French the distinction is seldom made.

In colloquial English, no.

The distinction between two main types of simplicity, those of parsimony and elegance, has been a long-standing philosophical topic [1].

In engineering, the so-called KISS principle (first coined as such in the early 20th century) has always had the implication of minimalism and implementation simplicity, in contrast to mere ease of use.

Fred Brooks wrote a famous paper in 1986 [2] perfectly describing the differences between accidental and essential complexity, and of the semantics of complexity management in software projects.

Hickey has said absolutely nothing spectacular, but his name comes up every time from the typing fingers of the historically illiterate whenever simplicity and ease of use are brought up.



I dunno, people use the same kind of argument to say that nobody's really done anything new in philosophy since Kant or even Aristotle. The KISS principle is not the same as a distinction between simplicity and ease. Accidental vs. essential complexity is orthogonal to simplicity vs. ease. And parsimony and elegance are both about simplicity rather than ease. Some people can be a little bit too historically literate for their own good.
Thanks for the clarification, it definitely puts it all into perspective!

I knew about KISS, but almost every time I hear someone mention it they think about ease not simplicity. I will also definitely check out Brooks' paper.

While I understand your position, I believe a lot of this has been lost to the new generations of engineers and what Rich did is remind them of it.

"historically illiterate" are pretty strong words. Actually, everyone is historically illiterate by these standards, because the ideas that any one person is familiar with are a vanishingly small percentage of all the ideas the human race has ever had. Furthermore, the origins of ideas are impossible to trace with any great precision. Is the most famous person the person with the best ideas? Was the person with access to the printing press the person with the best ideas? Frankly, it strikes me as a form of intellectual hipsterism to be bothered so much by this.

Rich Hickey gained fame for this because he stated an idea very clearly and compellingly; this is non-trivial and should not be so flippantly dismissed as just recycling old ideas—all your ideas are recycled too.

"Rich Hickey gained fame for this because he stated an idea very clearly and compellingly; this is non-trivial and should not be so flippantly dismissed as just recycling old ideas."

+100 for this.

Alan Kay has similarly criticized the computer software industry and its "pop culture."

Aug 11, 2015 · dvanduzer on XMPP Myths
> The common theme was seeing complexity and, especially, abstraction as a universal good rather than something with real costs

Rich Hickey did a great service outlining some common problems when thinking about the word complexity itself:

I don't think it's about the engineers wanting to see complexity, so much as the problems you mention stemming from design-by-committee.

Jul 08, 2015 · mattjaynes on Datomic Best Practices
I have a client that is exploring Datomic, so I wonder if some of you can chime in on why this is popular at the moment and what your experiences are with it?

I'm a big Rich Hickey fan. If you don't know who he is, he's the guy behind Clojure and Datomic. I don't use those tools, but his views on simplicity are wonderful.

Here's a great quote of his on the subject:

"Simplicity is hard work. But, there's a huge payoff. The person who has a genuinely simpler system - a system made out of genuinely simple parts, is going to be able to affect the greatest change with the least work. He's going to kick your ass. He's gonna spend more time simplifying things up front and in the long haul he's gonna wipe the plate with you because he'll have that ability to change things when you're struggling to push elephants around."

Here's his classic talk on simplicity if you haven't seen it yet:

Datomic doesn't seem to have had a huge amount of marketing: it's been spreading largely by word of mouth, so a slow build-up makes sense.

It does bring an exceptionally elegant design (well worth reading Nikita Prokopov's "Unofficial guide" if you're curious). Also, the time and transaction-annotation features are unmatched AFAICT -- if you're working with complex data where provenance matters, Datomic can save a HUGE amount of work building tracking systems.

I was very interested, but pretty disappointed that Datomic is completely closed source. Maybe this is a little mean, but what could be more "simple" than being able to read, understand, and modify the database you rely on?

Neo4j, though marketed differently, is a similar approach (but the Community version is GPLv3 and Enterprise is AGPLv3). The Cypher query language is declarative in a similar way to Datomic - the biggest missing feature is transactions.

For sure, I would have played around with it, if it was open source and free to some small number of clients. But with so many FOSS databases, why use Datomic?
Rich Hickey has been criticized for that repeatedly. When asked, he's been transparent that Datomic is closed source so that he can put his kids through college. He also points out that he already gave us the whole Clojure language open source.

It's hard for me not to sympathize with him on this.

We're using Datomic in production. It's had its ups and downs. For one, having raw data available at in-memory speeds really changes the level of expressiveness you have in your code; you are no longer constrained to packing every question about your data into a giant query and sending it off - you can instead pull data naturally and as needed. Many of our operations make multiple queries and still perform well.

The licensing is a huge pain in the ass. If I accidentally launch an extra peer over our license limit, our production environment will stop working until the extra peer comes down. This really butts heads with the growing popularity of abstracting physical servers as clusters, so I think the strategy is kind of a mistake on Cognitect's part.

Part of me wonders why they don't open source datomic and crank up the marketing effort on the consultancy and datomic/clojure/etc support portion of the business. It seems like a much more effective model for DB companies. For direct revenue streams, they can always have tuned/monitored clusters packaged as appliances.
Datomic is probably getting more attention on HN in the wake of David Nolen's EuroClojure talk about Om Next.
I just can't get enough Hickey talks. The guy put on clear words things I always feel.
I can't help but feel the quote ultimately embodies a false belief. Simplicity doesn't build you a rocket that can get to the outer solar system. Understanding and experimentation does.

Sure, this was probably built up using simple experiments and designs. But consider the Mars landing [1]. Simplicity would be to have a single mechanism for landing Curiosity. Not three. With one of them being a crane drop from a hovering rocket!?

I do feel there is an argument for up-front simplicity. However, as systems grow, expect that the simplicity will be harder and harder to maintain while keeping requirements such as performance met. To the point that it becomes a genuine tradeoff, with your standard cost/benefit analysis.

In the end, this falls to the trap of examples. If you are allowed to remove all assumptions from real use down to only a simple problem, you can get a simple solution. Add back in the realities of the problem, and the solution can get complex again. It is a shame that, in studies, so few real programs are actually looked at.


> Simplicity would be to have a single mechanism for landing the Curiosity. Not 3. With one of them being a crane drop from a hovering rocket!?

Why? Simple, in the way Rich Hickey advocates, means the opposite of complex, which means that things are woven together. You can have many landing strategies without them being tightly coupled together. A huge system isn't necessarily complex.

That is the catch: all three landing strategies were coupled together. You couldn't do one without the one before it. Moreover, previous steps had to take into account the (literal) baggage that was necessary to perform later steps.
I thought you were speaking about different strategies, but in this case you're describing three different stages of an overall landing strategy. That doesn't sound complex.
If that's the best they could do and what got the job done, good. It's as simple as was possible and necessary. What exactly does this prove against simplicity, again?
The difference between "simple" and "as simple as possible" is the crux.

Mainly, the problem is that these speeches all talk about keeping things simple. In many problems, this can't be done. Understanding the simple helps. But the actual solution will not be simple. So any newspeak to get around that is just annoying.

Why not?
See my above post. As simple as possible is a far cry from simple. That is all I am saying.

I extend that into saying that people that can understand complicated things, as well, will have an advantage.

A simple system can solve complicated things. When Rich Hickey talks about simple, he is referring to tight coupling, "death by specificity" and hard to understand concurrency. Having a system that does multiple things, isn't necessarily a complicated system. A Mars landing, which in itself is a difficult (though not necessarily complex) problem, can be solved by a simple system. An example of this is Unix. A simple system that does complicated things.
You should watch the talk(s), as your analysis here is entirely missing the context. What you’re talking about is what Rich Hickey and Stu Halloway call “complicated”, which is different from what they call “complex”.
I've seen them. They are nice and very alluring. So are a lot of false things. :) And I should note that I am mainly asserting this as false so that I can further explore the idea.

The idea to generate a new word that is hard to blur from existing ones and depends entirely on context is amusing in this context.

That is, what separates complicated from complex is one of context. Yet... contexts change. And often the first thing you do when building a solution to a problem is to reduce the problem to something easier to solve.

In this angle, I fully agree. Simplify your problem as much as you can. But do not be misled into thinking you can keep it simplified. As you add in more and more of the realities of the problem, they will reflect in the solution. And, often, the worst thing you can do is to try and cling to the "simple" solution that solved a different problem.

That is, understand the simple things well. See how they map onto the complicated things. Don't cling to the idea that they can be merely composed into the complicated solution. Often, several simple solutions can be subsumed by a more complicated one. Much in the same way that higher math can subsume lower maths.

I love datomic. It's a relational, ACID, transactional, non-SQL database.

The upsides:

SQL is a horrible language, yet NoSQL DBs also throw away the relational, transactional, and ACID features that are great in Postgres. Postgres with Datalog syntax would basically be a win by itself. Datomic queries are data, not strings. Queries can be composed without string munging, and with a clear understanding of what that will do to the query planner.

The schema has built-in support for has-one, has-many relationships, so there's no need for join tables.

I've never met a SQL query planner that didn't get in the way at some point. If needed, you can bypass the query planner, and get raw access to the data, and write your own query.

You can run an instance of it in-memory, which is fantastic for unit tests: you avoid the mismatch of running Postgres in production but SQLite when testing.

The downsides:

It's closed source.

Operationally, it's unique. Because it uses immutable data everywhere, its indexing strategy is different. I don't have the experience of what it will do under high load.

The schema is 'weaker' than say, postgres. While you can specify "this column is type Int", you don't have the full power of Postgres constraints, so you can't declare 'column foo is required on all entities of this type', or "if foo is present, bar must not be present", etc. It should be possible to add that using a transactor library, but I don't think anyone has done serious work in that direction yet.

Compound indexing support isn't in the main DB yet. I had to write my own library:

Definitely agree re: datalog/pull syntax for SQL backends. Quite surprised it hasn't happened yet.
If you are using Python code to serve static files, you are probably not seeing a lot of traffic yet; I suggest you reconsider your decision and watch the "Simple Made Easy" [1] talk by Rich Hickey.


Looks like Whitenoise can gzip your assets, add a hash to the filename, serve them with far-future headers, and then selectively serve the gzipped version based on Accept-Encoding headers.

Put that behind Cloudflare and your origin server is only hit when an edge location is warming its cache.

Sounds Hickey-tier simple to me, especially compared to your advice of "just use and configure Nginx".

I completely agree it takes discipline and experience to write clean code in any language.

What I'm saying is that it takes more discipline to cleanly use Java or C++ than it does to use Haskell or Clojure. For the simple reason that most of the abstractions provided by the former languages add to the program's complexity rather than remove it.

There's an excellent explanation by Rich Hickey in Simple Made Easy:

May 29, 2015 · jacobolus on UDP and me
If you haven’t seen them, I recommend the Clojure guys’ talks about the subject of simplicity. They reached into the etymological history of the word “simple” to pull out its early definition, which is quite precise and IMO tremendously useful in this context, unlike the confused muddle of modern definitions.

Rich Hickey, “Simple Made Easy”:

Stu Halloway, “Simplicity Ain’t Easy”:

I too would encourage people seeing the parent comment to definitely watch those videos. The Rich Hickey talk especially has shaped a lot of my thinking in the last few years.
You should both watch Rich Hickey's excellent presentation on the matter and establish whether you agree on the definitions.

A question I'm asking myself more often as I get older: What is the value of changing somebody's mind?

To that end, rather than prove someone else's code is complex, we can emphasize the virtues of simplicity with what we do. Refactoring someone else's code in smaller increments would be the passive aggressive middle ground.

There are more opportunities with code that hasn't been written yet. Maybe suggest watching this lecture as a group and then just discussing it without any additional agenda:

Having worked a couple decades in the trade, occasionally with some very unstable people, I have seen one suicide. I doubt it had anything to do directly with work but it's been a reminder to be nice to people, even when they are wrong.
May 04, 2015 · 1 points, 0 comments · submitted by duggan
> I'd caution against referring to all such explorations as complexity. Complexity is a highly overloaded term in our field.

The difference between complex & hard, easy & simple has been put very elegantly by Rich Hickey in Simple Made Easy [1]. That doesn't mean everyone agrees with his definitions, which is why he revives the word "complected" to mean objective interleaving of concepts, and pulls out "hard" from the way people use complex to mean something one is unfamiliar with. I like his definitions, so I use them. :)

> Sometimes it refers to the number of steps a given algorithm takes to compute

This can still create ambiguity since it could be either time or memory complexity, but still easy to infer, especially if there's a big O.

> depth and breadth of a program's syntax tree

Lisp overloads the parens for different concepts, which is complex. This could also be hard if one's not familiar with the syntax.

> tendency to branch out and create cycles

Sounds like time complexity!

> Sometimes it's mistakenly used to refer to concepts which are in reality simple but merely unfamiliar or non-intuitive.

This is the ambiguity: is he saying Haskell is complex because it has a lot of interleaving within its concepts that other languages do not? Or is it just unfamiliar? I would think it's simpler because it forces one to think about how time interleaves the program, which could make things harder! I'm guessing this is what the grandparent means, since ML is impure. Though either case is empty without examples.


Yeah, I've seen that presentation. Rich's ideas were what I had in mind when I wrote my reply.

In general, use of highly overloaded words is ambiguous in these discussions.

There are some things about software that are objective, such as simplicity. Rich Hickey talks a lot about this.

Simplicity is never simple

And is most often ruined by the real world and its exceptions

Exactly. This is why I like Rich Hickey's Simple Made Easy [1] so much. Basically with easy constructs it becomes harder to build simple systems, even though the simple constructs are harder to learn.


yep, I love that talk and I find myself pointing people towards it all the time :)
I love this talk, and Rich Hickey's talks in general, but I think this goes beyond that.

At one point you want full control of the HW, much like you did with game consoles..

On the other you want security: This model must work in a sandboxed (os, process, vm, threads, sharing etc.) environment, along with security checks (oldest one that I remember was making sure vertex index buffers given to the driver/api must not reference invalid memory, something you would make sure is not the case for a console game through tests, but something that the driver/os/etc. must enforce and stop in a non-console game world - PC/OSX/Linux/etc.)

From little I've read on this API, it seems like security is in the hands of the developer, and there doesn't seem to be much OS protection, so most likely I'm missing something... but whatever protection is to be added, definitely would've not been needed in the console world.

Just a rant, I'm not a graphics programmer so it's easy to rant on topics you just scratched the surface...


(Not sure why I can't reply to jeremiep below), but thanks for the insight. I was only familiar with one I posted above (and that was back in 1999, back then if my memory serves me well, drawing primitives on Windows NT was slower than 95, because NT had to check all index buffers whether they were not referencing out-of-bounds, while nothing like this was on 98).

GPUs these days have MMUs and have address spaces allocated per context. It's implemented internally to the driver though so you don't see it. And it's normally mapped differently, but the point of AMD's HSA stuff is to make the CPU's and GPU's MMU match up.
(To anwser the lack of a reply button:)

This is just HN adding a delay until the reply link appears, related to how deeply nested the comment is. The deeper the comment, the longer the delay. It's a simple but effective way to prevent flame wars and the like.

Security is actually much easier to implement on the GPU than on the CPU. For the simple reason that GPU code has to be pure in order to get this degree of parallelism. A shader is nothing more than a transform applied to inputs (attributes, uniforms and varyings) in order to give outputs (colors, depth, stencil).

Invalid data would simply cause a GPU task to fail while the other tasks happily continue to be executed. Since they are pure and don't interact with one another there is no need for process isolation or virtualization.

Basically, it's easy to sandbox a GPU when the only data it contains are values (no pointers) and pure functions (no shared memory). Even with the simplified model, the driver still has everything it needs to enforce security.

You are describing a GPU from the 1990s. A modern GPU is essentially a general-purpose computer sitting on the PCIe bus, able to do anything the CPU can. It does not have to run pure functions (e.g. see how it can be used for normal graphics tasks in [1]) and can write to any location in the memory it can see. Securing it is as easy/hard as securing a CPU: if you screw up and expose some memory to the GPU, it can be owned just like memory exposed to a CPU task [2].



> I don't think anyone would say that .... Clojure is simple language, or that simplicity is a core goal for it.

Good god you are so wrong.

Watch yourself some of Rich Hickey's trove of excellent presentations, including the one where he breaks down the detailed etymology of the word "simple" and how much he strives for that.

Feb 11, 2015 · mercer on The Duct Tape Programmer
Seems like a good context to recommend the wonderful 'Simple Made Easy' talk by Rich Hickey, the creator of Clojure.

I cannot help but think that the overwhelming desire to support immutability and functional constructs here, as well as in nearly all other modern languages, gives significant evidence that functional programming is finally winning out over OOP.

In the future, I hope that FP will be the default design choice, with objects being used where needed such as for components, plug-ins, and ad-hoc dictionary-passing-style tools.

After all, simplicity is the most important property of any software system -

> I cannot help but think that the overwhelming desire to support immutability and functional constructs here, as well as in nearly all other modern languages, gives significant evidence that functional programming is finally winning out over OOP.

You're making an either/or distinction here without any reason. You could just as well say, "The number of cars that recently added anti-lock brakes gives significant evidence that ABS is winning out over seat belts."

I don't see these languages removing any OOP features, so I think what it shows is that functional features are either useful independent of OOP features, or complement them. (My personal belief is the latter: the languages I enjoy the most have both.)

BTW, I must admit I misspoke on the last sentence - obviously the property of a software system working and doing what the user needs is more important than simplicity.

Too short a road from the obvious to the assumed...

Immutability was never incompatible with OOP, just the opposite in fact. Even Alan Kay often criticized languages like C++ and Java for encouraging the use of setters and, thus, “turning objects back into data structures”.

C# is still one of my favorite languages (even though I use F# most of the time now), but I do admire Java for making it significantly more painful to write mutable rather than immutable classes; it's too bad that fact was lost on so many programmers.

Kudos for sharing the Rich Hickey video; it's one of my favorites of all time.

> but I do admire Java for making it significantly more painful to write mutable rather than immutable classes;

Out of curiosity, how does it do that? As far as I know, everything in Java is mutable by default.

You have to go through the extra ceremony of writing a setter.
The same applies to C# though, correct? Plus, I was thinking more along the lines of something like:

    class Foo {
      private int x = 0;

      public void bar() {
        this.x += 1; // Whoops!
      }
    }

    Foo foo = new Foo();
    foo.bar(); // Mutating call.

Which Java does not prevent.
Yes, the same applies to my beloved C#, but that language was much less hostile to mutability. Indeed, the prettier mutator syntax was even positioned as a feature once upon a time.

To be clear, I'm the guy that insists on defining classes as either abstract or sealed, and almost always marks fields as readonly. But, I'm okay with the kind of bounded mutability that you mentioned; clients of a `Foo` instance have to treat it as immutable.
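The "readonly fields, no setters" style described above can be sketched in Java (a hypothetical example; the `Point` class and its names are invented here for illustration, not taken from the thread):

```java
// Hypothetical immutable class: all fields are final, there are no
// setters, and "modification" returns a new instance instead of mutating.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }

    // Instead of a mutating move(), return a fresh Point.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Clients can never observe a `Point` changing, so instances can be shared freely (including across threads) without defensive copying.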

Here is how I do OOP:

* I make classes to hide state, and hidden state is the same as being stateless.

* As I learn more about the problem, I start subdividing classes into smaller classes (not necessarily via inheritance).

* So, as my understanding of the problem increases, the number of class divisions increases, and by the pigeonhole principle, the amount of state approaches zero.

Very interesting related talk about complecting things - "Simple Made Easy" - by Rich Hickey, inventor of Clojure:

If you're a Ruby person, maybe watch this version instead, since it's almost the same talk but for a Rails Conference, with a few references:

It might do that wildly inefficient thing...

Or, you might do something where you have a list of pointers, and you point at a different value instead of mutating an existing value.

I haven't dug into the details of how immutable data structures can be made to work efficiently, but part of the charm is that in many cases you don't mutate the array at all. What I mean is, there are certain behaviors around mutation that programmers do because they can.

When you take away the ability to mutate data, you design differently and without side effects. All of a sudden testing becomes easier, faster, cheaper for large parts of your codebase. You have simpler solutions that are potentially easier to reason about because the complex (and sometimes elegant) solutions aren't so readily available.
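The "point at a different value instead of mutating" idea above is the heart of structural sharing. A minimal sketch, assuming nothing beyond the JDK (the `PList` class is invented here for illustration): an immutable linked list where prepending allocates one node and reuses the whole existing list as its tail.

```java
// Minimal persistent list sketch: prepend allocates a single node and
// shares the entire existing list as its tail -- nothing is copied or mutated.
final class PList {
    final int head;
    final PList tail; // null marks the empty list

    PList(int head, PList tail) {
        this.head = head;
        this.tail = tail;
    }

    static PList prepend(int value, PList list) {
        return new PList(value, list);
    }

    static int length(PList list) {
        int n = 0;
        for (PList p = list; p != null; p = p.tail) n++;
        return n;
    }
}
```

Both the old and the new list stay valid after a "change", which is what makes testing and reasoning easier; real persistent collections (e.g. Clojure's vectors and maps) generalize this sharing with wide trees.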

A few talks that are around this style of thinking:

Boundaries are good, values are good, simple things that work together are good. The more we can take the good parts and form them together into a cohesive language/framework/platform, the better our software will be.

Oct 28, 2014 · gooseus on Meteor hits 1.0
What I'm hearing is that Meteor doesn't play well with others and that you should make the decision to go with Meteor carefully since changing your mind later will require a ground-up refactor.

This is pretty much my experience as someone who started working on a project where the lead dev had decided to use Meteor and then quit leaving a wonky prototype with "reactive data", poor performance and missing functionality.

Now, some would say "it's not Meteor's fault the UI wasn't made well!" and then I'd reply "sure, but if Meteor didn't encourage (and, it seems, require) tight coupling of the data access and presentation layers, then maybe we wouldn't have spent the last 3 weeks rebuilding the entire app from the ground up just to add some missing functionality and fix UI bugs".

Honestly, I really can't figure out the lack of criticism I see of Meteor around here. All these comments congratulating an arbitrary step in version number? I see other articles of accomplishment with a fraction of the positive encouragement and many times the criticism. Is there a silent majority, or did I spend the last few months being underwhelmed by Meteor because I'm missing something?

Meteor embodies, for me, a tool that makes things 'easy', rather than one that makes things 'simple'.

Anyways, that's just one developer's experience and opinion; take it for whatever you feel it's worth.

Meteor doesn't require tight coupling between data access and presentation layers. Personally I use meteor with react.
A lot of criticism of "new shiny tech" gets downvoted/flagged on HN so people don't even bother anymore, while another useless library in Go/Javascript gets pushed to the top of the front page.
Meteor does make things easier, by making things simpler.

It is much simpler dealing with Meteor's APIs than working with documentation from 3 or 4 different frameworks to accomplish the same kind of stuff Meteor does.

Meteor gives you a set of clean, coherent APIs to work with to get stuff done.

Whenever the topic of "simplicity" in software comes up, I feel obligated to point to the superb "Simple Made Easy" talk:

From my personal perspective, I do not see Go achieving the kind of simplicity Rich talks so eloquently about. Instead, Go seems much more like an "easy" language.

An example of easy versus simple in the OP's article is the point about onboarding: sure, your onboarding of new engineers may be /easier/ because Go is an ostensibly "simple" (they actually mean small) language with familiar syntax. But that does not imply any correlation with writing simple software. I would argue the difficulty of writing abstractions in Go (especially around channels) actually tends to yield the opposite!

Much like ORMs are a trap because they seem simple, so too are technologies which have such a specious quality of simplicity. It is important to establish how a given technology actually achieves simplicity in practice and I do not see how this article argues that successfully--that is not to say Go cannot achieve simplicity, but merely that this article does not seem to make a solid case, in my opinion.

> Much like ORMs are a trap

If that is the case, the same can be said with JavaScript, Rails, Ruby yes? (all of them looked simple yet you can screw up really bad, like awfully bad, like worse than Java complexity bad).

I use ORM to do simple-to-medium complex queries enough to avoid N+1.

My ORM also has tools around it to help me generate DDL from code as part of my build (of course one still has to ensure the generated DDL is correct, with proper relationships and constraints and all that jazz, but my point stands).

My ORM gives me the ability to write in either JPQL or SQL to do certain tasks like deleting a bunch of rows based on conditions. Those are handy enough.

My ORM also helps protect me against SQL injection attacks.

How are these abilities "traps" for me in the same way that C++'s complexity is a trap?

I'd rather deploy go than clojure, but I don't know if the go authors achieved what Socrates and Rich Hickey had in mind.
I agree with you. There is a difference between simple and simplistic. Easy is not always simple.
I think most people assume that when people say Go is simple, they mean easy. I think it's exactly the opposite. Go is simple, but it's not always easy. It's like the difference between building a house using pre-fab walls, and building a house using studs and nails. Which one is easier? Probably pre-fab walls. Which one is simpler? Probably studs & nails. You don't need a crane to put the walls in place, you can do it with just a hammer and 1-2 guys. It might take a little longer, but you'll have exactly the house you want.

Your simple/easy comparison with an ORM is a very valid one, I think. ORMs seem easy, but they're not simple, and often times their easyness at the outset causes complexity once you have to do anything that goes off the rails they've laid out for you.

But I think Go is the opposite of an ORM. There's very little magic, nothing gets done "for you". The code does what you tell it to do, no more, no less. Which means people reading the code can immediately tell what it does - it does what it says it does in plain terms.

Sep 18, 2014 · 3 points, 1 comments · submitted by ashish01
immutable goodness
Interesting question.

One nice feature is that markdown makes text annotations explicit and obvious. There's no hidden styling. Empty lines don't have a font size. It's obvious when a bolded region doesn't bold the spaces between words. In the Rich Hickey[1] sense, markdown is much simpler than rich text editing because all you have to worry about is the semantics of your text (this is a heading) and not how it's actually styled.

Weirdly, it's kind of a huge throwback to LaTeX. Thinking of markdown as a "modern, simplified LaTeX for the web" seriously hits the mark.


Agreed. I would summarize that as: it doesn't have hidden state. Which is nice.
> It is interesting how purity has a very strong allure - maybe our brains are naturally drawn to a reduced state of complexity, and thus energy consumption?

Or maybe "complicated", more often than not, is just not a "carefully balanced mix of grey" but more of a clusterfuck... and we learned to be wary of it.

Have a look at this:

I've seen most of his talks, actually I'm a fan. I wouldn't consider Clojure to be a good example of purity though - it has both LISP purists as well as FP purists (Haskell) against it. Actually it is quite pragmatic for running on the JVM and even has optional typing.

If elegance and simplicity are achievable without making too many sacrifices, great! I'd choose Clojure over C++ any day.

This "manifesto", for lack of a better word, neatly exhibits the main problem I have with so many efforts to "improve programming" of this style: they focus on ease of learning as the be-all and end-all of usability. Coupled with the unfortunate rhetoric¹, it left me with a negative impression even though I probably agree with most of their principles!

Probably the largest disconnect is that while I heartily endorse simplicity and fighting complexity—even if it increases costs elsewhere in the system—I worry that we do not have the same definition of "simplicity". Rich Hickey's "Simple Made Easy"² talk lays out a great framework for thinking about this. I fear that they really mean "easy" and not "simple" and, for all that I agree with their goals, that is not the way we should accomplish them.

How "easy" something is—and how easy it is to learn—is a relative measure. It depends on the person, their way of thinking, their background... Simplicity, on the other hand, is a property of the system itself. The two are not always the same: it's quite possible for something simple to still be difficult to learn.

The problem is that (greatly simplifying) you learn something once, but you use it continuously. It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it! We should not cripple tools, or make them more complex, in an effort to make them easier to learn, but that's exactly what many people seem to advocate! (Not in those words, of course.)

So yes, incidental complexity is a problem. It needs addressing. But it's all too easy to mistake "different" for "difficult" and "difficult" for "complex". In trying to eliminate incidental complexity, we have to be careful to maintain actual simplicity and not introduce complexity in other places just to make life easier for beginners.

At the same time, we have to remember that while incidental complexity is a problem, it isn't "the" problem. (Is there ever really one problem?) Expressiveness, flexibility and power are all important... even if they make things harder to learn. Even performance still matters, although I agree it's over-prioritized 99% of the time.

Focusing solely on making things "easy" is not the way forward.

¹ Perhaps it's supposed to be amusingly over the top, but for me it just sets off my internal salesman alarm. It feels like they're trying to guilt me into something instead of presenting a logical case. Politics rather than reason.


You think that Edwards doesn't know the difference between simple and easy to learn/use?

> It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it!

Why is that important? Why can't a tool be simple, expressive, easy to learn and easy to use? What studies do you cite for your viewpoint? There has been a lot of research in this area. Please reference the research that supports your claim.

Reason has been tried by Edwards and many others for decades. It hasn't worked.

"Why can't a tool be simple, expressive, easy to learn and easy to use? What studies do you cite for your viewpoint?"

Perhaps it can be. But they are all design choices that are often at odds with one another. E.g. I've frequently used software that was easy to learn but hard to use.

Likewise I've used tools that were hard to learn because they had new abstractions but once you understood the new abstractions they were really easy to use. Etc etc etc.

> ...they focus on ease of learning as the be-all and end-all of usability.

I see people jump to this conclusion on pretty much every post of this type. In this case it is clear from the author's work that his focus is not on making programming familiar/easy to non-technical users but rather on having the computer help manage cognitively expensive tasks such as navigating nested conditionals or keeping various representations of the same state in sync.

> learn something once, but you use it continuously.

Empirically speaking, the vast majority of people do not learn to program at all. In our research we have interviewed a number of people in highly skilled jobs who would benefit hugely from basic automation skills but can't spare the years of training necessary to get there with current tools. There does come a point where the finiteness of human life has to come into the simple vs easy tradeoff.

You also assume that the tradeoff is currently tight. I believe, based on the research I've posted elsewhere in this discussion and on the months of reading we've done for our work, that there is still plenty of space to make things both simpler and easier. I've talked about this before -

I explicitly advocate crippling tools and making them more complex if it results in them being easier to learn.

The cost of a barrier to entry is multiplied by everyone it keeps out who could have been productive / creative / or found their passion.

The cost of a limited set of tool features is, arguably, that people will exhaust the tool and be limited. However I have never found this argument convincing given what was achieved with 64kb of memory, or even paper and pencil.

The typewriter, the polaroid camera, the word processor, email. All are increases in complexity and massive decreases in effort to learn and they all resulted in massive increases in the production of culture and exchange of ideas. Some inventions are both easier to learn and less complex (Feynman diagrams) but if I had to pick one, I pick easy to learn, every single time.

>> It's important for a tool to be simple and expressive even if that makes it harder to learn at first, since it will mostly be used by people who have already learned it!

Not sure if I agree. Steep learning curves significantly hurt user adoption. This is especially true for tools that have lots of alternatives.

I've observed a definite correlation that people who like Hickey's simple/easy framework don't agree with mine. Personally I don't find it useful because it tries to separate knowing from doing.

I also seem to disagree with people who emphasize "expressiveness, flexibility, and power". I think they are mostly a selection effect: talented programmers tend to be attracted to those features, especially when they are young and haven't yet been burned by them too often.

With such fundamental differences we can probably only agree to disagree.

"Personally I don't find it useful because it tries to separate knowing from doing."

What do you mean? Learning and doing are quite different.

From a professional programmer point of view: If it takes me 6 months to learn a tool, and then the tool allows me to complete future work twice as fast (or with half as many defects etc) that is a great trade off.

Rather than just agreeing to disagree, you could defend your beliefs with the best arguments and examples you have. You're opposed to expressiveness, flexibility and power? That's a somewhat surprising view. I'm interested in why.
I don't think this manifesto and the simple/easy framework are even really talking about the same things beyond the basic point around avoidance of incidental complexity. I think both viewpoints outline worthy goals with staggeringly different levels of scope. In the case of the manifesto there's hardly anything actionable beyond doing lots of mostly messy research. I think people find this frustrating, but so what? Lofty goals often arise out of the conviction that there's far too much momentum in the wrong direction. In contrast I think the simple/easy framework is something a working programmer can apply to everyday tasks, and while it's unlikely to result in a radical shift, it may perhaps bring some of us closer to seeing that even larger goals may be possible.
Nice to see another post addressing the biggest issue in Software Engineering head-on.

This of course is nothing new - it's something Alan Kay has been telling us for more than 3 decades [1], who also has an enlightening talk addressing the biggest problem facing software engineering [2].

Before vanishing from the Internet, node's Ryan Dahl left a poetic piece on how "utterly fucked the whole thing is" [3].

Steve Yegge also has dedicated one of his epic blog posts to "Code's worst enemy" [4].

More recently Clojure's Rich Hickey has taken the helm on the issue producing his quintessential "Simple Made Easy" [5] presentation, explaining the key differences between something that is "Easy", to something that is truly "Simple".

I should have said "more modular" but I definitely don't mean that modularity comes for free in FP languages. Programmers are capable of writing rigid programs in any language but I do feel in my little experience of using FP languages it is harder to do so, or more obvious when you are doing so. I'll give it a try anyway.

I think the modularity comes from most FP languages having fewer building blocks to work with than most OO languages. It's the same reason why users of OO languages with a ton of different building blocks (Java, C#, etc.) find more "minimalist" OO languages like Ruby refreshing. FP languages tend to take this simplicity even further. You essentially have just functions and modules (a place to group related functions). FP languages also usually don't have state, unless you want to emulate that in your program somehow.

To me it is about ditching the OO way of creating some representation of the circle of life or Kingdom of Classes hierarchy in your applications for just treating your program as data that goes through a sequence of transformations. Linear programs are always easier for me to understand than hierarchies.

Rich Hickey's Simple Made Easy[0] talk is a great overview of the subject. Now his talk isn't about modularity per se, but I think modularity is one of the many things that fall out of simplicity.

0 -
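The "sequence of transformations" style described above reads linearly in most languages; here is a small sketch in Java (hypothetical data and names, using java.util.stream):

```java
import java.util.List;
import java.util.stream.Collectors;

class Pipeline {
    // Data flows through a straight line of transformations,
    // with no class hierarchy and no shared mutable state.
    static List<String> shout(List<String> words) {
        return words.stream()
                .filter(w -> !w.isEmpty())     // drop blanks
                .map(String::toUpperCase)      // transform each value
                .map(w -> w + "!")             // transform again
                .collect(Collectors.toList()); // materialize the result
    }
}
```

Each step is a pure transformation of the data from the previous step, so the whole pipeline can be read top to bottom, which is the "linear program" quality the comment describes.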

I think also the real complaint people have against mainstream OO languages, particularly Java, aren't around "inflexibility" but rather around total ecosystem complexity.

For instance, in PHP there are JSON serialization and de-serialization tools built into the language and people just use those.

In Java on the other hand you have to pick a third-party library, find it in Maven Central, cut and paste it into the POM file, which is a gawdawful mess because it is all cut-and-pasted, so every edit involves a tab war and it's hard to view the diffs, etc.

Then you find out that the other guys working on the system already imported five different JSON libraries, but worse than that, some of the sub-projects depend on different versions of the same JSON libraries which occasionally causes strange failures to happen at run-time, etc...

Ironically these problems are caused by the success of the Java ecosystem. When you've got access to hundreds of thousands of well-packaged libraries that are (generally) worth reusing, you can get in a lot more trouble than you can in the dialect of FORTH you invented yourself.

This is a great point. Just look at the logging situation in Java.
Aug 08, 2014 · 4 points, 0 comments · submitted by vvijay03
I think it's simple in the "simple made easy" kind of way that Rich Hickey has spoken about [1]. I think something can be deep, refined, and simple. I'd also say that those are my favorite concepts. I guess it's what I think of when I use the word 'elegant'.

It's really clean and straight-forward to use, but the simple components provide a lot of flexibility and power, while being easy to teach someone.

I use trello for just about everything, and I would have dropped it a long time ago if it took more than five minutes to show someone how to use the fundamental concepts. I can get them up and running in no time, and the users (even non technical users) tend to find all the interesting bits on their own as they go.


"Trello at its core is just a list of lists. Very simple concept."

So it's a Lisp!

Jul 23, 2014 · chipsy on Norris numbers
I disagree. Simple things are _necessarily_ dense in their implementation because they're so exacting. Recall Simple Made Easy[0].


Jul 10, 2014 · munro on When REST Gets Messy
It's hard for me to submit to a philosophy for reasons like "it's beautiful" or "you will reach zen". Level 3 enlightenment sounds very cultish to me. :) I've dropped the notion of REST and been very happy with simple RPC, instead of contorting my mental model into resources or to align with the HTTP spec.

I personally have found zen in applying simpler concepts to software development. Such as composition over inheritance in my API design, mixing in certain aspects like content negotiation or caching when those complexities become necessary. Or separation of concerns, making sure endpoints don't do too much, and the realization of concerns vs technology [1]. Really thinking about the notion of simplicity as described by Rich Hickey in Simple Made Easy [2]. Or "There are only two hard problems in Computer Science: cache invalidation and naming things" -- putting off caching until an endpoint becomes a problem, and not worrying if my URL structure is RESTful.

Here's an example of an API that I find beautiful [3].

[1] [2] [3]

What you mean to say is that Lisp is simple but not easy, but that's true of a lot of things.

You might enjoy this:

fwiw, clojure supports polymorphism.

I encountered this recently but I'll try to give a (bad) explanation.

You'll call `function(thing, arg1, arg2, arg3);`. Another function will run on the arguments and return a dispatch value. For example, it will check what `thing` is and return `struct`. The `struct` version of that function is then run on the args and gives you your value.

In this way you can define several `close` functions based on dispatch value instead of one monolithic, nested if/else `close`.
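The dispatch described above can be mimicked in Java with an explicit dispatch table; this is only a rough analogue for illustration, not how Clojure's multimethods are actually implemented (the `MultiFn` class and all names here are invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class MultiFn {
    // The dispatch function computes a dispatch value from the argument.
    private final Function<Object, String> dispatch;
    // One handler per dispatch value, registered separately,
    // instead of one monolithic if/else chain.
    private final Map<String, Function<Object, String>> methods = new HashMap<>();

    MultiFn(Function<Object, String> dispatch) {
        this.dispatch = dispatch;
    }

    void defMethod(String value, Function<Object, String> fn) {
        methods.put(value, fn);
    }

    String call(Object arg) {
        // First compute the dispatch value, then run the matching handler.
        return methods.get(dispatch.apply(arg)).apply(arg);
    }
}
```

Adding support for a new kind of argument is just another `defMethod` call; no existing handler has to change, which is the openness the monolithic if/else version lacks.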

I'm on the opposite side of the fence, OOP has never ever appealed to me. "Why would anyone want to use this crazy mess?" kind of thing. I'm sure I can learn to appreciate it with time but it's not a native paradigm to my mind.

I only have experience with lisps when it comes to functional programming but the reason I enjoy it is that it's simple. You have functions and you have data. Functions transform data to more data and the two aren't tightly bound. If you pass a function a value it will always return the same result if you pass it the same value.

THE clojure video: (I don't think he even says the word clojure in the entire hour presentation.)

Jun 30, 2014 · frou_dh on Why Go Is Not Good
> I wish developers would stop equating "complicated" to things "I don't understand".

Rich Hickey's presentation on this topic should be required viewing for everyone:

It's easy cause it's familiar. But it's not simple.

This excellent talk by Rich Hickey explains the difference

I like many of the author's points. Pragmatism, thinking instead of blindly following principles, pushing back against size as a metric for measuring responsibility. I think Robert Martin's work absolutely deserves examination and critique. However, I don't share the author's definitions of simple and complex.

Stating that "binding business rules to persistence is asking for trouble" is flatly wrong. Au contraire, it's the simplest thing to do, and in most cases any other solution is just adding complexity without justification.

I don't feel that increasing the class count necessarily increases complexity, nor do I feel that putting several things into one class reduces it. A dozen components with simple interactions is a simpler system than a single component with which clients have a complex relationship. My views align more closely with those expressed [1] by Rich Hickey in Simple Made Easy.

Classes as namespaces for pure functions can be structured in any way; they don't have any tangible effect on complexity. "Coupling" is irrelevant if the classes are all just namespaces for pure functions. I also find that most data can be plain old data objects with no hidden state and no attached behavior. If most of your code base is pure functions and plain data, the amount of complexity will be fairly small. As for the rest, I think that the author's example of maximizing cohesion and the SRP are functionally identical. They both recommend splitting up classes based on responsibility, spatial or temporal coupling, or whatever other metric you want to use. Personally I prefer reducing the mingling of state, but I think there are many roads to the same place. Gary Bernhardt's talk Boundaries[2] covers this pretty well.



I too identify strongly with Rich Hickey's view on this. That's not to say Uncle Bob is wrong, but I don't think he is as clear a communicator. I see Uncle Bob as having a lot of wisdom that he is able to apply based on his experience but which becomes very hand-wavy when he tries to explain it.
UB happens to be flatly wrong. UB says that docucomments re-stating what the simple function does are excessive and bad. This is totally wrong when one looks at the generated documentation, but UB doesn't seem to use the documentation much. He seems to be one of the people who prefer digging through the code, even if presented with sensible API documentation.
I understand he's a polarizing figure and is overly prescriptive of things that are a matter of style, but his stance on documentation doesn't seem germane here.
I'll add this to your links:

He talks about his definition of "simple" (by digging into what the original English definition was) and what that means for code.

Unfortunately here, Rails encourages putting each class into a separate file, so you have 10 classes spread over 10 files, which does increase complexity.

I dislike having a class/module per file.

Why do you say it increases complexity?

If I'm in extreme mode I take the view that each file should be a single screen. That means a tangible reduction in the complexity of working on them (no more scrolling - each class is just in its own tab).

This can be solved with standard IDEs. Putting two modules or classes into a single file pretty much guarantees a level of coupling. This does not reduce complexity.
By that definition, I could just as easily argue that requiring different files for every class reduces cohesion. The idea that class definitions and file definitions are in any way related is a leaky abstraction.
I've never been a fan of the class-file coupling. It pulls me out of the mental model I'm trying to build in my head and forces me to think about file organization which is almost always inconsistent with the language semantics I'm dealing with.

I've used IDEs that make this more or less painful, but none that actually solved it. If anyone has any suggestions for one that does, I'd be interested to try it out. I don't really care what language. I can pick up enough to see what it feels like.

I also want to say that Rich Hickey talked about a file as a unit of code not being very good, but I don't recall where, or if he really said it. I want to say it was in a Datomic podcast right around when details about it were coming out.

I think it's this podcast, where Rich Hickey explains codeq:

That is standard practice in many languages.
In Django (the closest thing Python has to Rails) the convention is to put all your models in one file. I also prefer it this way.
Interesting. In CommonJS modules, a file can only export one thing. You could namespace multiple things into one exported object, though I find that granular dependencies can lead to insights about how reusable your modules really are.
Having worked with both, there's a trade-off. Given that in Django you're (mostly) explicitly importing classes and modules rather than autoloading, it's handy to have them all in one place. OTOH, when your project grows, you end up with enormous model files (especially if you follow the fat models/thin views pattern). So you then have to split them into different apps, so fragmentation slips in eventually anyway. (In a rails project, unless you're bolting on engines and such, all your models are at least in one folder).

Where I definitely do prefer Django in this regard is that models declare their data fields, rather than them being in a completely different part of the source as in AR (not Mongoid, I now realise). Do I remember the exact spelling I gave to every column when I migrated them months ago? No. It's good to be able to see it all in one place rather than having an extra tab to cycle through. I don't see any practical benefit from decoupling here.

Especially since the Rails way is not "decoupling" in any real sense. Splitting tightly coupled code into multiple files != decoupling.

I also like that in Django, you declare the fields on the models first and then create the db migrations from them, rather than writing a db migration first to determine what fields the models have.

Indeed, decoupling is probably the wrong word here: I haven't seen an ORM implementation that was not tightly coupled to the database layer, which in the end is surely the point of an ORM - to represent stuff from the database in application code. (I know some people consider this a bad abstraction, but whatever.)

South/1.7 migrations is definitely the best way of the two to manage that coupling. Rails's charms lie elsewhere.

Right, and the debate raging in the Rails community now is whether your business logic should be in your models at all, or whether it should be extracted into plain old ruby objects, separating your domain model from your data model. Reason being, the OOP purists see it as a violation of the Single Responsibility Principle--an object should only have one reason to change, and the models are tightly coupled to the database schema so they have to change if the schema changes, plus you need to start up a database just to test their business logic, if you put business logic in them.

Meanwhile a lot of the practically minded developers like DHH just accept that their objects will be tightly coupled to the database and just deal with it, claiming that anything else would be adding unnecessary layers of indirection.

I am pretty new to Django, but I get the impression that it's not so hard to just not put your business logic in, and put it in separate classes of plain old python objects instead. Maybe that's why I haven't heard about this debate playing out in the Django community the way it is in the RoR community...

If you haven't seen it before, check out Rich Hickey's talk on the topic:
Thanks! Never had seen it, really interesting.
You're in for a treat if this is the first time you are seeing this talk!
And in case you missed any of the others, this is a great list:

Everything in life is a tradeoff. You should watch this video:

The parens are annoying, until:

a) You build that fully composable library that you always wished you could have written in X language, but it neeeeever quite worked the way you wanted.

b) You realize that by keeping your data immutable, you can write fewer tests, be more confident in your code, and stop worrying whether that value is what you think it is.

c) By building on top of the JVM, you are able to use java interop to save yourself a day of coding a custom library for something that exists and is well tested.

d) Deployment becomes a breeze because you just export a jar/war file and load it up into any of the existing app servers.

e) You get phenomenal speed increases for "free" if you're coming from dynamic languages like ruby/python/PHP

f) When you need to dip into async code, you can write your async code in a synchronous fashion, which (for me) is much easier to think about than keeping track of callbacks in my head.

Good luck, if you decide to give it a shot, I think you might realize the parens aren't such a big deal in the long run!
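
Point (b) is the same principle Hickey pushes in the talk, and it isn't Clojure-specific. A small sketch of it in Python (`Point` is a hypothetical example, using the standard `dataclasses` module): an immutable value can be handed around freely, because "changing" it produces a new value instead of mutating the one other code may be holding.

```python
from dataclasses import dataclass, replace

# frozen=True makes instances immutable: attribute assignment raises.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
moved = replace(p, x=5)   # "changing" a value yields a new value
# p.x = 9 would raise FrozenInstanceError: code holding p can trust
# that it still is what it was when it was created.
```
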

That's almost the Clojure motto...

See Rich Hickey's talk "Simple Made Easy" (

That's very true and I think that it's related to the topic of Rich Hickey's talk Simple Made Easy [1].

Maybe what we need is to study the economics of software and come up with a system in which market outcome is promotion of good libraries. I think that the social/economic dynamics of software development play a huge role in building a successful product, both free and commercial. Has anyone studied the subject in greater detail?


Apr 05, 2014 · nnq on Amazon Dash
> Ultimately we shouldn't assume consumers value convergence

Yep, indeed. And the frustrating part is that they choose "easy" over "simple" and end up drowning themselves in "complexity". And they go like "I have so many devices already, and I've already gone through the pain of learning to use them, I'm not going to bother learning the mobile app you talk about too, even if you say it can replace them all and save me money; it's just too much for my brain, this I already know, go away!". Big win for the sellers of these devices that are first to market. Amazon will win big with these!

The interesting question is how we can educate consumers to value what we call "convergence", because their current way of thinking hurts both themselves (they end up spending more and being too "overloaded" to make the best shopping decisions, or, at the other extreme, having access only to "curated slices of the market", with the same consequences) and the tech sector as a whole (yes, more devices mean more innovation at the start, but since convergence will happen anyway at some point, all we end up doing is reinventing wheels and generating tons of needless complexity that we drown ourselves in...).

(for a definition of how I use 'simple', 'easy' and 'complex' refer to - it's about programming but I think the metaphors also apply to UI/X)

Mar 25, 2014 · bad_user on Why I like Java
The "worse is better" argument is in the context of Unix and C and cannot be separated from that context, otherwise it is meaningless.

And a lot of thought went into Unix, as evidenced by its longevity and the long-lasting tradition of its philosophy. To date it's the oldest family of operating systems and, at the same time, the most popular. Anybody who thinks the "worse" in the "worse is better" argument is about not caring is in for a surprise:

Even in the original comparison to CLOS/Lisp Machines outlined by Richard Gabriel, he mentions this important difference (versus the MIT/Stanford style): It is slightly better to be simple than correct.

But again, simplicity is not about not caring about design or the implementation; in fact the "worse is better" approach strongly emphasizes readable/understandable implementations. And simplicity is actually freaking hard to achieve, because simplicity doesn't refer to "easy" — it's the opposite of entanglement/interweaving:

"Worse is better" can easily be separated from that context, though I would admit that most people do it incorrectly.

"Worse is better" is, ultimately, an argument against perfectionism. Many of the features of Unix could have been implemented in a "better" way, and these ways were known to people working at the time. But it turns out that those "better" options are much more difficult to implement, harder to get right and are ultimately counter-productive to the goal of delivering software that works. We can set up clear, logical arguments as to why doing things the Unix way is worse than doing things another way (e.g. how Lisp Machines would do it), but it turns out that the Unix approach is just more effective. Basically, although we can invent aesthetic or philosophical standards of correctness for programs, actually trying to follow these in the real world is dangerous (beyond a certain point, anyway).

I think that's pretty similar to the OP's argument that, whilst Haskell is clearly a superior language to Java in many respects, writing code properly in Haskell is much harder than doing so in Java because, probably for entirely cultural reasons, a programmer working with Haskell feels a greater need to write the "correct" program rather than the one that just works. Java gives the programmer an excuse to abandon perfectionism, producing code that is "worse" but an outcome that is "better".

I think I know what you're getting at, which is that a comparison between Unix and the monstrous IDE-generated Java bloatware described in the OP is insulting to Unix. On this you are correct. But for "worse is better" to be meaningful, there still has to be some recognition that, yes, Unix really is worse than the ideal. Unix isn't the best thing that could ever possibly exist, it's just the best thing that the people at the time could build, and nobody has ever come up with a better alternative.

I think Worse is Better can be used by either side. You seem to be on the "Worse" side, i.e. the UNIX/C/Java side, and claim the moral of WIB to be that perfect is the enemy of good. That's a perfectly fair argument.

However, on the "Better" side, i.e. the LISP/Haskell side, the moral of WIB is that time-to-market is hugely important. It's not that the "Better" side was bogged down in philosophical nuance and was chasing an unattainable perfectionism; it's that their solutions took a bit longer to implement. For example, according to Wikipedia, C came out in '72 and Scheme came out in '75. Scheme is clearly influenced by philosophy and perfectionism, but it's also a solid language with clear goals.

The problem is that Scheme and C were both trying to solve the 'decent high-level language' problem, but since C came out first, fewer people cared about Scheme when it eventually came out. In the mean time they'd moved on to tackling the 'null pointer dereference in C problem', the 'buffer overflow in C' problem, the 'unterminated strings in C' problem, and so on. Even though Scheme doesn't have these problems, it also doesn't solve them "in C", so it was too difficult to switch to.

Of course, this is a massive simplification and there have been many other high level languages before and since, but it illustrates the other side of the argument: if your system solves a problem, people will work around far more crappiness than you might think.

More modern examples are Web apps (especially in the early days), Flash, Silverlight, etc. and possibly the Web itself.

My understanding was that C did not have tremendous adoption by '75.
> The problem is that Scheme and C were both trying to solve the 'decent high-level language' problem, but since C came out first, fewer people cared about Scheme when it eventually came out. In the mean time they'd moved on to tackling the 'null pointer dereference in C problem', the 'buffer overflow in C' problem, the 'unterminated strings in C' problem, and so on. Even though Scheme doesn't have these problems, it also doesn't solve them "in C", so it was too difficult to switch to.

C is quite odd in that the programmer is expected to pay dearly for their mistakes, rather than be protected from them. BTW it wouldn't be as much fun if they were protected.

Regarding Scheme, it has withstood the test of nearly forty years very well.

C is unique because it's really easy to mentally compile C code into assembler. Scheme is more "magical".

The more I learn about assembler, the more I appreciate how C deals with dirty work like calling conventions, register allocation, and computing struct member offsets, while still giving you control of the machine.

On the other hand, some processor primitives like carry bits are annoyingly absent from the C language.

I do not agree. "Worse is better" emphasizes simplicity — as an example, the emphasis on separation of concerns by building components that do one thing and do it well. It's actually easier to design monolithic systems than it is to build independent components that are interconnected. Unix itself suffered because in places it made compromises to its philosophy - it's a good thing that Plan9 exists, with some of its concepts ending up in Unix anyway (e.g. the procfs comes from Plan9). And again, simplicity is not the same thing as easiness.

> Haskell is clearly a superior language to Java in many respects, writing code properly in Haskell is much harder than doing so in Java

I do not agree on your assessment. Haskell is harder to write because ALL the concepts involved are extremely unfamiliar to everybody. Java is learned in school. Java is everywhere. Developers are exposed to Java or Java-like languages.

OOP and class-based design, including all the design patterns in the gang of four, seem easy to you or to most people, because we've been exposed to them ever since we started to learn programming.

Haskell is also great, but it is not clearly superior to Java. That's another point I disagree on; the jury is still out on that one - language choice is important, but it's less important than everything else combined (libraries, tools, ecosystem and so on).

These are some notes on Rich Hickey's amazing simple made easy presentation.

I've desperately been needing something to link to when trying