HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
"Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022)

Strange Loop Conference · YouTube · 255 HN points · 8 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Strange Loop Conference's video "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022).
YouTube Summary
Most new programming languages are accidentally designed to be backwards compatible with punchcards. This talk argues that it would be better to focus on building new live programming environments that can help us solve the problems of the future.

Talk transcript and links: https://jackrusher.com/strange-loop-2022/

Jack Rusher
Applied Science Studio
@jackrusher

Jack Rusher's long career as a computer scientist includes time at Bell Labs/AT&T Research and a number of successful startups. His current work focuses on the deep relationship between art and technology.


Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Oh my...

Does anyone remember COBOL? Pascal? How about Prolog (the 5th-gen language)? Standard ML? And now Rust.

Compilers are a dying technology. "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022) (https://www.youtube.com/watch?v=8Ab3ArE8W3s&ab_channel=Stran...)

Meanwhile, the hardware world is moving to field programmable gate arrays (FPGAs), Neurocomputing, in-memory logic, dynamic soft-core processors, analog memory, etc.

What does Rust do with memory that self-modifies or has no defined logic values, or with processors that dynamically add or remove instructions?

What does Rust do with programs that self-modify?

How long will it be until C++ (2090?) adds borrow checking and Result handling?

What's so "Secure" about Rust? Does it handle row-hammer attacks? DMA overloads? Heartbleed?

Rust is reaching "peak hype".

Sigh.

To quote "Stop Writing Dead Programs" [1]: "If what you care about is systems that are highly fault tolerant, you should be using something like Erlang over something like Haskell because the facilities Erlang provides are more likely to give you working programs."

[1] https://www.youtube.com/watch?v=8Ab3ArE8W3s

forgotpwd16
Cloud Haskell brings ideas from Erlang to Haskell, but I don't know how they compare.
throwaway1492
That quote is absurd because the vast majority of applications on the planet are not written in Erlang and work just fine. Working and fault tolerance are in no way related. Being generous the majority of applications with very high uptime are also not written in Erlang.
feoren
> the vast majority of applications on the planet are not written in Erlang and work just fine

The vast majority of applications on the planet are bug-ridden, fragile, over-budget, under-thought, mark-missing, user-hostile garbage. They most certainly do not "work just fine."

throwawaymaths
And also the vast majority of "working" applications have a full devops team, legions of highly paid senior developers, etc. "You" do not.
fallat
Is this actually true? I feel like the majority of software that exists is non-business software simply because there's zero cost...
throwaway1492
Ok so we’re adding “developer productivity” onto the list. Aside from moving the goalposts, one does not have a lot of faith in the knowledge of people making these claims. Where’s the proof?

Hint read Joe Armstrong’s dissertation.

fulafel
I think this would be relevant only if most applications were written in Haskell:

> should be using something like Erlang over something like Haskell

Working programs and fault tolerance are related in that "working" is a necessary property of a useful fault-tolerant program. If it's hard to create a working program, it also hinders creating fault-tolerant programs.

dmitriid
This quote isn't about "vast majority of software".

It's about tools we use and waste our time on.

raffraffraff
This. I came across this quote:

> If somebody came to me and wanted to pay me a lot of money to build a large scale message handling system that really had to be up all the time, could never afford to go down for years at a time, I would unhesitatingly choose Erlang to build it in

RabbitMQ might be the most famous example of a product written in Erlang. It's great, but I've seen it fall over. In my experience cluster failures are typically caused by a handful of root causes: hardware error, resource exhaustion, network partitioning, or operator error. Whether a system is built in Erlang or Go, I'd imagine that these same root causes would exist.

I'd love to read in depth why RabbitMQ's Erlang underpinnings make it better than, say, ActiveMQ or Kafka. Assuming 3 perfectly built clusters that aren't mishandled, will RabbitMQ somehow "win" over the other two because of some particular greatness in Erlang?

Jack Rusher mentioned the Glamorous Toolkit in his talk "Stop Writing Dead Programs" https://www.youtube.com/watch?v=8Ab3ArE8W3s

  And here we have the Glamorous toolkit. This is Tudor Gîrba and feenk's thing. They embrace this philosophy completely. They have built an enormous suite of visualizations that allow you to find out things about your program while it's running. We should all take inspiration from this. This is an ancient tradition, and they have kind of taken this old thing of Smalltalkers and Lispers building their own tools as they go to understand their own codebases, and they have sort of pushed it – they've pushed the pedal all the way to the floor, and they're rushing forward into the future and we should follow them.
Are you talking about this? https://youtu.be/8Ab3ArE8W3s?t=2190

It's not inserting images into the code. It's a structured editor presenting an alternative view of the code. It's just code. That's all there is to this. Man, frustrating.

I was inspired to post this by watching the following talk: "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022) https://youtu.be/8Ab3ArE8W3s
Oct 18, 2022 · 208 points, 230 comments · submitted by grzm
1letterunixname
Yep. Erlang/BEAM PIDs, processes, supervision trees, supervisor strategies, and supervisors are underrated. They should be built into the OS. Honestly, every "program" should be a function, return structured data, and be callable from anywhere. Logs should also be structured rather than flat text lines. Lines require parsing, and parsing is a waste of electricity and time because the boundaries between values have already been thrown away. The current paradigm is unwise because it's slow and inefficient unless you actually need lines, and rarely do you need just lines. Look at how nasty bash, grep, awk, sed, cut, head, tail, tr scripts become.
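To make the structured-output idea concrete, here is a minimal Python sketch (an illustration of the idea only, not anything from the comment or the talk): records are emitted as JSON objects, one per line, so a consumer can filter on named fields instead of recovering value boundaries with grep/awk/sed.

    # Hypothetical example: emit structured records, then select them by field.
    # No regex parsing anywhere; the field boundaries are never thrown away.
    import json
    import sys

    def emit(stream, **fields):
        """Write one log record as a JSON object on its own line."""
        stream.write(json.dumps(fields) + "\n")

    def select(lines, **criteria):
        """Yield records whose fields match all given criteria."""
        for line in lines:
            record = json.loads(line)
            if all(record.get(k) == v for k, v in criteria.items()):
                yield record

    if __name__ == "__main__":
        emit(sys.stdout, level="info", service="billing", msg="charge ok")
        emit(sys.stdout, level="error", service="billing", msg="card declined")

        sample = ['{"level": "error", "service": "billing", "msg": "card declined"}']
        for rec in select(sample, level="error"):
            print(rec["service"], rec["msg"])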

Also, on debugging and profiling: I still can't believe many programming environments/IDEs lack the features and debugger integration that ancient Borland Pascal and MSVC had.

trasz
Logging is a good example of how the opposite of what you say tends to be true. Have you ever wondered why Windows logs are so useless? It’s not Windows-specific; when you look at journald you’ll see plenty of structured junk; the actually useful parts are plaintext.

“Every program as a function” would be a disaster for reliability and security. There’s a reason no mature operating system does that, apart from tiny embedded ones.

koyanisqatsi
OP is right. Functional programming is more secure and reliable than imperative programming. There are no buffer overflows when code is formally specified and verified. It's next to impossible to do this for imperative code but very easy to do for functional code [0].

0: https://www.microsoft.com/en-us/research/project/project-eve...

theamk
I think you want to choose a better example. There are no buffer overflows in Java or Python either, and they are pretty imperative.
koyanisqatsi
Are you sure about that?
1letterunixname
Good grief, strawman... Windows is a pile of UX horrors and uncomposability.

Mandatory capabilities like seL4 combined with MAC tags like SELinux.

Duh.

koyanisqatsi
I'd settle for software that could reasonably save and restore its own state. Half the time when I close the laptop lid I have no idea what will happen when I open it again. More often than not some application and its state just disappears into the computational ether.

One of Alan Kay's slogans is "The computer revolution hasn't happened yet" and he's mostly right. What we have are glorified calculators and media players instead of programmable devices that can augment intelligence, and it's because the main runtimes/operating systems on these devices are not dynamic enough. A great deal of effort is required to extend and modify them to fit personal use cases. If you're not a programmer then you might as well just give up because the technical barrier is too high. So it's not surprising that most people have a negative view of personal computers and would rather let Apple and friends manage things for them, even if that means giving up a great deal of control and privacy to a third party which is mostly interested in making as much money as possible.

The larger implication of all this is that a great many potentially innovative use cases are not feasible because the required effort is too high. Every innovative application has to essentially re-invent its own dynamic runtime and shoehorn it into the existing non-dynamic setup.

1letterunixname
macOS and iOS handle these fairly well with the defaults system and prefs, but software developers have to understand how to save and restore state properly, and not fight it.

Also, I think it should be possible to save and restore complete process state (secured by a kernel decryption key). Open I/O file descriptors would probably be dropped if they represent remote resources, but code should be made resilient enough to reconnect and retry in the event of errors.

dang
I've changed the top URL to the video, in the hope of reducing the tedious complaints about webpage formatting that the transcript was generating.

The transcript is here: https://jackrusher.com/strange-loop-2022/. Please let's talk about the ideas now.

kragen
I'm glad I happened across the item before you made the change! The transcript is the best transcript I've ever seen of a talk.

I think the ideas are very interesting. I don't agree with his condemnation of Docker and single-threaded programming, but he's certainly right about the value of being able to kill threads in Erlang, and about the importance of being able to fix things that are broken, and about our computers cosplaying as PDP-11s (and the consequent teletype fetishism).

I hadn't made the connection between Sussman's propagators and VisiCalc before. I mean I don't think Bricklin and Frankston were exposed to Sussman, were they? They were business students? But if not, it's certainly a hell of a coincidence.

My defense of single-threaded code and aborting is that the simplest way we've found so far to write highly concurrent systems is with transactions. A transaction executes and makes some changes to mutable state based on, ideally, a snapshot of mutable state, and if it has any error in the middle, none of those changes happen. So it executes from beginning to end, starting from a blank (internal) state, and runs through to termination, unless halted by a failure, just like the "dead programs" Rusher is complaining about. You put a lot of these transactions together, executing concurrently, and you have a magnificent live system, and one that's much easier to reason about than RCU stuff or even Erlang stuff. This is what the STM he praises in Clojure is doing, and it's also how OLTP systems have been built for half a century. Its biggest problem is that it has a huge impedance mismatch with the rest of the software world.
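To make the transaction pattern above concrete, here is a minimal Python sketch of that retry loop (my illustration, not kragen's code and not Clojure's actual STM): each transaction is a pure function of a snapshot, and its writes are applied only if nothing else committed in the meantime; otherwise it runs again from a blank internal state.

    # Optimistic, retry-based transactions over a tiny versioned store.
    import threading

    class Store:
        def __init__(self):
            self._lock = threading.Lock()
            self._data = {}
            self._version = 0

        def snapshot(self):
            with self._lock:
                return dict(self._data), self._version

        def commit(self, updates, read_version):
            """Apply updates only if no other commit happened since the snapshot."""
            with self._lock:
                if self._version != read_version:
                    return False          # conflict: the caller must retry
                self._data.update(updates)
                self._version += 1
                return True

    def atomically(store, tx_fn, max_retries=100):
        """Run tx_fn(snapshot) -> updates until it commits cleanly."""
        for _ in range(max_retries):
            snap, version = store.snapshot()
            updates = tx_fn(snap)         # pure function of the snapshot
            if store.commit(updates, version):
                return
        raise RuntimeError("too much contention")

    store = Store()
    atomically(store, lambda s: {"balance": s.get("balance", 0) + 10})
    print(store.snapshot()[0])            # {'balance': 10}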

I've said before that to get anything done you need some Dijkstra and some Alan Kay. If you don't have any Dijkstra in you, you'll thrash around making changes that empirically seem to work, your progress will be slow, and your code will be too buggy to use for anything crucial. If you don't have any Alan Kay in you, you'll never actually put any code into a computer, and so you won't get anything done either except to prove theorems. Alan Kay always had a fair bit of Dijkstra in him, and Dijkstra had some Kay in him in his early years before he gave up programming.

Ideologically, Rusher is way over on the Kay end of the spectrum, but he may not be aware of the degree to which the inner Dijkstra he developed in his keypunch days allows him to get away with that. The number of programmers who are ridiculously unproductive with Forth (i.e., almost all of us) is some kind of evidence of that.

Interestingly he doesn't talk about observability at all, and I suspect that observability may be a more useful kind of liveness for today's systems than setting breakpoints and inspecting variables, even with Clouseau.

Data Rabbit, Maria.cloud, Hazel, livelits, and Clerk sound really interesting.

I think it's unfortunate that you switched the URL; even for people without hearing impairment, transcripts are far preferable to videos, and this is a really excellent transcript. With a couple of screenshots, it would be better than the video in almost every way, though a few of the demos would lose something. (The demos start at 14'40".) The sort of people who were making worthless comments because they were confronted with a webpage formatted in an unfamiliar way won't suddenly start making insightful comments because there's a video link; they won't make any comments at all. So it's a mistake to cater to them and damage the experience for people who might have something to contribute. Video links make for shallow conversations.

Jtsummers
https://en.wikipedia.org/wiki/Bob_Frankston - CS at MIT, apparently. So he at least may have been.
kragen
Oh thanks! I didn't realize. Surely he had contact with Sussman's ideas then.
jackrusher
The reference for this is the Brad Myers paper referenced in this tweet:

https://twitter.com/jackrusher/status/1553392739986952194

(The paper is very good.)

kragen
Hey, thanks!
7thaccount
Interesting thought on Forth. I'm also unproductive in it, but I think through no fault of the language. I simply haven't had the time to build a true Forth to solve a problem. I usually have some data and some transformations and maybe some API calls to make to an application and a database. Not really a good use for Forth, at least not time-wise.
kragen
I got to being about 25% as productive in Forth as in C once I learned to stop trying to use the stack for local variables. Maybe with enough practice I might get to being as productive as in C, or even more so. I doubt I'd get to being as productive as in Python, which I think is about 3× as productive as C for me.

I think that if I were, say, trying to get some piece of buggy hardware working, so that most of the complexity of my system was poking various I/O ports and memory locations to see what happened, Forth would already be more productive than C for me. Similar to what Yossi Kreinin said about Tcl:

https://yosefk.com/blog/i-cant-believe-im-praising-tcl.html

Tcl is also good for that kind of thing, but Tcl is 1.2 megabytes, and Forth is 4 kilobytes. You can run Forth on computers that are two orders of magnitude too small for Tcl.

So I think we shouldn't evaluate Forth as a programming language. We should think of it as an embedded operating system. It has a command prompt, multitasking, virtual memory, an inspector for variables (and arbitrary memory locations), and a sort of debugger: at any time, at the command prompt, you can type the name of any line of code to execute it and see what the effect is, so you can sort of step through a program by typing the names of its lines in order. Like Tcl and bash, you can also program in its command-prompt language, and in fact build quite big systems that way, but the language isn't really its strength.

But there is an awful lot of software out there that doesn't really need much complicated logic: some data, some transformations, and maybe some API calls to make to some motors or sensors (or an application and a database). So it doesn't really matter if you're using a weak language like Tcl or Forth because the program logic isn't the hard part of what you're doing.

And it's in that spirit that Frank Sergeant's "three instruction Forth" isn't a programming language at all; it's a 66-byte monitor program that gives you PEEK, POKE, and CALL.

https://pages.cs.wisc.edu/~bolo/shipyard/3ins4th.html

On the other hand, if the computer you're programming has megabytes of RAM rather than kilobytes, and megabits of bandwidth to your face rather than kilobits, you can probably do better than Forth. You can get more powerful forms of what Rusher is calling "liveness" than Forth's interactive procedure definition and testing at the command prompt and textual inspection of variables and other memory locations on demand; you can plot metrics over time and record performance traces for later evaluation. You can afford infix syntax, array bounds checking (at least most of the time), and dynamic type checking.

jdougan
I always find this [1] article to be the most trenchant regarding Forth:

> "Forth is about the freedom to change the language, the compiler, the OS or even the hardware design".

> …And the freedom to change the problem.

In my employed work life it has been fairly rare that I can make more than tiny changes to the problem, making Forth not useful.

[1] https://yosefk.com/blog/my-history-with-forth-stack-machines...

kragen
Well, Yossi is probably a better programmer than I am, but I think I'm probably better at Forth than he was, and I think he was Doing It Wrong. And he sort of admits this: he was doing big-bang compilation rather than poking interactively at the hardware, he left out all the metaprogramming stuff, and he was trying to use the stack for local variables because he designed the hardware with a two-cycle memory fetch and no registers for local variables:

    : mean_std ( sum2 sum inv_len -- mean std )
      \ precise_mean = sum * inv_len;
      tuck u* \ sum2 inv_len precise_mean
      \ mean = precise_mean >> FRAC;
      dup FRAC rshift -rot3 \ mean sum2 inv_len precise_mean
      \ var = (((unsigned long long)sum2 * inv_len) >> FRAC) - (precise_mean * precise_mean >> (FRAC*2));
      dup um* nip FRAC 2 * 32 - rshift -rot \ mean precise_mean^2 sum2 inv_len
      um* 32 FRAC - lshift swap FRAC rshift or \ mean precise_mean^2 sum*inv_len
      swap - isqrt \ mean std
    ;
I've done all these things (except designing the hardware) and I agree that it can be very painful. I did some of them in 02008, for example: https://github.com/kragen/stoneknifeforth

The thing is, though, you can also not do all those things. You can use variables, and they don't even have to be allocated on a stack (unless you're writing a recursive function, which you usually aren't), and all the NIP TUCK ROT goes away, and with it all the Memory Championship tricks. You can test each definition interactively as you write it, and then the fact that the language is absurdly error-prone hardly matters. You can use metaprogramming so that your code is as DRY as a nun's pochola. You can use the interactivity of Forth to quickly validate your hypotheses about not just your code but also the hardware in a way you can't do with C. You can do it with GDB, but Forth is a lot faster than GDBscript, but that's not saying much because even Bash is a lot faster than GDBscript.

But Yossi was just using Forth as a programming language, like a C without local variables or type checking, not an embedded operating system. And, as I said, that's really not Forth's strength. Bash and Tcl aren't good programming languages, either. If you try to use Tcl as a substitute for C you will also be very sad. But the way they're used, that isn't that important.

I explained a more limited version of this 12 years ago: https://yosefk.com/blog/my-history-with-forth-stack-machines...

So, I don't think Forth is only useful when you have the freedom to change the problem, though programs in any language do become an awful lot easier when you have that freedom.

akkartik
"The sort of people who were making worthless comments because they were confronted with a webpage formatted in an unfamiliar way won't suddenly start making insightful comments because there's a video link; they won't make any comments at all."

I think "not making a comment if they don't have anything to say about the content" was the goal of the change. Reduces the noise on the page (I seldom go to page 2) and probably also the moderation burden.

(Hi Kragen!)

kragen
Hi! Glad to see you!

I understand that reducing the worthless drive-by dismissals based on unfamiliar formatting was the objective of the change. My complaint is that there are a lot of comments like mine, which I hope is not worthless and which is based on reading the entire transcript as well as watching parts of the talk, which will never be made because you can't write those comments if you just watch the video. Also, watching the video takes a long time, so many people won't bother.

I would have liked to read those comments, even if there were worthless drive-by dismissals underneath them. I think it's bad to eliminate thoughtful discussion because the conditions necessary to produce it also cause discomfort in our less thoughtful brethren.

akkartik
Can you elaborate on why some comments won't be made if people watched the video rather than read the transcript?

I certainly agree with your second point that some people prefer reading transcripts to watching videos. I used to be one of them until a year ago!

jholman
I would imagine it's because the transcript contains strictly more information than the video?
kragen
Each contains information that the other lacks.
kragen
You can't scroll back and forth through the video to compare different points where he's talking about related themes, you can't text-search to find where or whether he mentioned a particular theme, and when you're reading the transcript, it pauses automatically when you stop to think, so thinking is the default. With the video pausing requires effort, so the default is to not think.

There are things that work better in some sort of video. GUI demos, mechanical movements, some kinds of data visualizations, and facial expressions, for example.

akkartik
Thank you!

Now that you mention it, I tend to pause and rewind a lot. But I likely do miss connections that are more than a few minutes apart.

fermigier
There is a strawman argument in the talk about the limitation (or now, the default) of 80-character-wide consoles, which is presented as "proof" that we're still living in the past.

80 characters (plus or minus 10) has a justification outside the history of teletypes and VT100s: that's the optimum for readability.

Otherwise, there are good ideas in the talk, but this particular one rubbed me the wrong way.

jwdunne
Yes, I find it harder to read code at lengths much longer than this. 120 is quite difficult. It also makes it even harder to read if you split the screen vertically, which I do all the time.

That said, I don’t think it should be a hard limit and it’s fine if a line’s a bit over, +/- 10 like you said. Certainly not something that we should contort into multiple lines just to keep under a hard limit. Unfortunately, a few auto formatters only do hard limits - it’d be interesting to see how an acceptable interval around the limit would work.

Plus, I’ve noticed the limit makes more of a difference for comments than code so I try to keep comments under that. The written word appears more sensitive to line length.

bruce343434
How do you envision an interval around a limit? The fact is that you have to draw the line somewhere. If your interval is +-20, then setting a "limit" of 80 is really just a hard limit of 100.
alpaca128
By letting the code formatter exceed the limit if it allows for more readable formatting in certain cases. Going for a 100% hard limit means sometimes it'll shuffle chunks of code around because of 1-2 characters and that just doesn't make a lot of sense.

Or in other words, by formatting the code more like a human than dumping the source tree with a blind set of rules. If Copilot is possible then so is an AI model able to consider how code actually looks on the screen.
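As a toy illustration of that idea (mine, not any real formatter), a checker could treat the span between the limit and the limit plus a tolerance as a soft zone that is reported but never reflowed:

    # Hypothetical soft-limit check: lines inside the tolerance band are left
    # alone; only lines beyond limit + tolerance get flagged for breaking.
    def classify_line(line, limit=80, tolerance=10):
        n = len(line)
        if n <= limit:
            return "ok"
        if n <= limit + tolerance:
            return "soft"    # over the limit, but not worth mangling the line
        return "hard"        # genuinely too long: break or refactor

    def check(source, limit=80, tolerance=10):
        return [
            (lineno, classify_line(line, limit, tolerance))
            for lineno, line in enumerate(source.splitlines(), start=1)
            if classify_line(line, limit, tolerance) != "ok"
        ]

    print(check("x = 1\ny = " + "a" * 100))   # [(2, 'hard')]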

bruce343434
How do you quantify the tipping point where a longer but continuous line is prettier than a broken line? (broken where?)
atq2119
They already answered the question: you use statistical methods (aka AI).
pizzaburek
For my personal Python projects I set a hard limit in the 94-96 range. That's wide enough that I actually adhere to it instead of just ignoring it.

PEP8's and Google style guide's limits of 79 and 80 are way too narrow for a language with 4-space indentation. However, PEP 8 says that "it is okay to increase the line length limit up to 99 characters", while Google's 80 is just a soft limit that can be broken in certain cases like long URLs.

hoseja
Optimum for whom? Nineteenth century teletype operators? I never got that argument.
goto11
Optimal for reading prose texts. I believe experiments have shown the optimal line width for reading speed and comprehension to be about 50-60 characters.

But as far as I know this has only been researched for reading prose. I doubt the result will translate directly to reading source code, which is read in a completely different way.

fermigier
Yes, experiments are lacking for code, but I believe the physiological principles (eye fixation, etc) still stand.

Also: the optimal line width for prose seems to be 50-75 characters (including whitespace). Source: https://baymard.com/blog/line-length-readability

Assuming you're using a language where indentation is either mandated (e.g. Python) or recommended (e.g. C with "One True Brace" style), with 4-space indentation (here again, no experimental evidence, but this seems to be the norm nowadays), and you don't abuse cyclomatic complexity in your functions/methods, you have, let's say, between 2 and 4 indents on the left, so 80 is close to the optimum.

Note: up to 100 is probably still OK from a readability POW. But this assumes that all readers are able to increase their window size from the default 80. That's probably true for people using GUIs, but may still be problematic for people with sight issues (e.g. people over 50 or with more serious medical conditions...).

jstimpfle
s/POW/POV/
lcnPylGDnU4H9OF
Reminiscent of tabs vs. spaces, this seems to be a new battle in an old war and readability is (allegedly) being held captive by both sides.
jstimpfle
I don't know what you're talking about. You sure you replied to the right comment?
debugnik
They're making a POW (prisoner of war) joke about the topic, I think.
lou1306
It's related to how reading and the human eye work. For each eye fixation, you take in about 5-10 characters. The optimal line length for books is about 60-75 characters, which takes somewhere between 6-15 fixations to scan.

Code is not like books: most lines will be shorter than the "limit", there's indentation, you're typically reading monospaced characters on a screen, and you have syntax highlighting, etc. So, bumping the length up to 80 characters is still okay, as is the occasional outlier. But regularly writing 100-120 characters-long LoCs will definitely impact readability.

faeriechangling
In a book, line breaks don't necessarily have any meaning. In poetry they might, but in a book they probably don't. In programming, line breaks OFTEN have a meaning.

Breaking up a sentence over multiple lines can TOTALLY change how it's interpreted by the computer.

Using methods to circumvent this which allow you to have a computer interpret two lines as if it's a single line can confuse and change how it's interpreted by the programmer, at least momentarily.

Refactoring your code to ensure it never exceeds 80 characters can ALSO make code harder to read, especially in modern languages that tend to be more verbose than what was seen in the TTY days.

Expanding the limit to even 120 characters and aiming the average line to be significantly shorter allows you to have a more consistent readable style across languages where you aren't doing nearly as much weird crap to force source code to fit an arbitrary character limit. You STILL have to do quite a bit of rewriting to force code to fit 120 characters, but this may be worth the readability tradeoff.

alpaca128
Quite the opposite, limiting myself to ~80 characters per line improved the readability of my code. Shorter lines mean less complexity per line and more labelled values, which then again reduces the need for comments.

Expanding to 120 character lines means I can only have one column of code on my laptop screen at the same time and only if I maximise the window. It also means I can't have 3 columns on my desktop screen. And no, horizontal scrolling or line wrapping is not an option, it's a nightmare.

hrbf
This is an argument I’ve heard over and over again throughout my coding career, especially from JavaScript and PHP developers. Interestingly enough, there was a noticeable overlap between this mindset and “clever” code, comprised of endless chaining, nested statements and/or internal recursion. They also never wrote code comments – at all. The defense being: “I think this way and it’s easier for me!” Regarding comments: “The code IS the documentation!”

Of course, they were never around 6 months later to prove if they still understood their gobbledygook then. They never had to refactor anything.

I have very rarely encountered a piece of code that would be hard to fit into 80 characters width while being readable. In fact, if done correctly, it forces you to break stuff up into manageable pieces and simple statements, to be explicit and often verbose.

But if this runs counter to a messy, ego-centric style a developer is used to and there’s no one to rein them in, it’s what you get.

lupire
That's a bizarre statement, because chaining leads to shorter lines: each call can get its own line and there are fewer assignments.
theamk
In many (most?) languages chaining leads to longer lines unless the programmer makes an extra effort to break them. Compare:

     user = manager.getUser()
     mobile = user.getPhone(PhoneType.mobile)
     area = mobile.getAreaCode()
vs

     area = manager.getUser().getPhone(PhoneType.mobile).getAreaCode()
hrbf
Good example and exactly what I meant. Having each call on its own line doesn’t solve anything if you also nest the calls.
Sohcahtoa82
what about...

    area = manager
           .getUser()
           .getPhone(PhoneType.mobile)
           .getAreaCode();
I see this form a lot in Java and Kotlin code, especially in Kotlin where a single function is just assigned to a chain of functions like the one above.
aidenn0
I have spent much of my career reading and understanding code written by others. Comments have helped me twice in that time.

Many other times the comments have made understanding the code harder.

My motto is that comments are the only part of the code you can be sure are not tested. At best they record the intent of an author at some unknown point in the past.

hrbf
> At best they record the intent of an author at some unknown point in the past.

This probably says something about the priority the teams you worked with put on documentation. Write once and never update?

aidenn0
I mean the unknown point in the past could be yesterday, but it could also be 25 years ago.
marcus_holmes
This. So much.

I can empathise - when I started out I was all about clever code. But the older I get, and the more code I have to deal with, the more I value simplicity.

The 80 character limit is a really good signal that my code is too complex and needs simplifying. The most common occurrence these days is when I have too many arguments to a function, and either need to curry it or create an options data struct. My code is cleaner for this.

And there have been many, many, times that a comment from past me to 3-months-later me has saved me an hour of reading. Code, even intentionally simple code, is not as self-documenting as it appears to be when we're writing it.

hrbf
Indeed. Comments should document intent and – wherever applicable – approaches taken that did NOT work and why. Even awkward code is sometimes okay, if there’s a comment with a clear justification. Saves hours of pointless busywork following down all the paths in a medium to large code base.
sfink
I used to be clever, and bumped up against the 80 character limit. Now it's the opposite: my avoidance of cleverness is what is causing me to bump up against the 80 character limit.

There are two reasons for lines to get long: (1) cleverness, or (2) long names.

I used to hate long names, and still can't stomach most Java code. But I've learned that if you want understandable code, you can either have short names and long comments, or long names and hardly any comments. (But please, no longer than is necessary to communicate what needs to be communicated. addKeyAndValueToTable() tells you nothing of use over add() or put(). setForwardingPointerWhileTenuring() does communicate some important things that set() or setPtr() would not.)

marcus_holmes
yeh, another signal that you're over-complicating things.

I stick to VerbNoun function naming, and usually code in Go where short (1-letter short) variable names are idiomatic.

If I can't VerbNoun a function, then I probably need to rethink it.

Part of the reason I don't do anonymous/lambda functions too happily - it's actually harder to read a stack of anonymous functions than a stack of VerbNoun named functions.

DeathArrow
I will probably get downvoted, but I feel like I wasted 45 minutes, watching the video and waiting for something interesting.

I think that a two-minute video about live coding and visual representation, for those that don't know such things exist, would have been enough.

bilekas
I have to agree; a lot of his points were just pointing out bad/inefficient practices and calling it 'bizarre' behavior.
marcus_holmes
video is a really bad format for this kind of content, imho. I can read faster than the video can explain. Even at 2x or 3x speed, it's still faster to read, especially if I'm only skim-reading to find interesting bits.
50
Like Wittgenstein, I sometimes think of our [scientific and] technological age as a bedazzlement. Jack Rusher, on the other hand, seems to point toward making technology transparent (an end in itself) and is incredibly inspiring for it. I love his Vector Field III (2017)[1] generative art piece.

1. https://ello.co/jackrusher/post/t1n-uy-wghymnohgpygxhw

agentultra
I think his take on “debug-ability > correct by construction” has to assume that you can understand the problem in the first place. In my experience this works fine when you have a mostly procedural process on a single machine. Set some watchers, step through execution, pause to inspect state. It is much harder to do with concurrency. And damn near impossible with distributed systems.

As someone who has claimed to use these kinds of tools to verify protocols, I’m curious in which cases he would break this preference for debugging over reasoning. Or is the contradiction intentional?

I was big into Common Lisp for years and mostly into Haskell and Lean now. I still think debugging is fine and useful to do but I get the most bang out of being able to reason about my program before I run it. Once you have the definitions and proofs in place running the code is fine I guess but all the work is done.

Whereas with more dynamic systems the work has barely begun: now I need to bump into all of the errors in my code while it is running.

There are times when I want both and I don’t think the computing world needs to pick one and only one.

MBCook
What is a dead program? The talk never says.
grzm
While the exact phrase "dead program" isn't used outside the title, his use of "live" and "dead" in the talk points to what he means:

At about 22 minutes:

> I want to talk about interactive programming, which is I think the only kind of programming we should really be doing. Some people call it live coding, mainly in the art community, and this is when you code with what Dan Ingalls refers to as liveness. It is the opposite of batch processing. Instead, there is a programming environment, and the environment and the program are combined during development. So what does this do for us? Well, there's no compile and run cycle. You're compiling inside your running program, so you no longer have that feedback loop. It doesn't start with a blank slate and run to termination. Instead, all of your program state is still there while you're working on it. This means that you can debug. You can add things to it. You can find out what's going on, all while your program is running.

And a few minutes later:

> So, for example, in a dead coding language, I will have to run a separate debugger, load in the program, and run it, set a break point, and get it here. Now, if I've had a fault in production, this is not actually so helpful to me. Maybe I have a core dump, and the core dump has some information that I could use, but it doesn't show me the state of things while it's running.

jussamouse
The name of the talk is a reference to "Stop Drawing Dead Fish" by Bret Victor https://www.youtube.com/watch?v=ZfytHvgHybA
revskill
dead program to me is a program without TDD
peterkelly
A dead program is one that can't be changed after it's running.

In the talk he discusses what are called "live programming" environments, as exemplified by Smalltalk and some implementations of Lisp. Instead of restricting programmers to writing code and then compiling/running it, live programming environments let you modify the program after it has started running.

If you want to make a change, you just modify the relevant bits of code and the program will immediately start using them. There is no need for a build step and no need to restart the program (which would involve losing any state that's in memory).
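A very rough approximation of this in plain Python, just to make the property concrete (a hedged sketch of mine, far short of the Smalltalk/Lisp experience described above; the handlers.py module and its handle(state) function are hypothetical): a long-running loop keeps its in-memory state but reloads the handler module whenever its file changes, so edits take effect without restarting the process.

    # Keep state alive across code changes by reloading an edited module.
    import importlib
    import os
    import time

    import handlers                           # hypothetical: defines handle(state)

    def watch_and_run(path="handlers.py"):
        state = {"ticks": 0}                  # survives code changes
        last_mtime = os.path.getmtime(path)
        while True:
            mtime = os.path.getmtime(path)
            if mtime != last_mtime:
                importlib.reload(handlers)    # pick up the edited definitions
                last_mtime = mtime
            handlers.handle(state)            # current definition, current state
            time.sleep(1)

    if __name__ == "__main__":
        watch_and_run()

Real live environments go much further (redefine anything, inspect any object, fix a fault and resume), but even this toy captures the two properties named above: no restart, and no lost state.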

worthless-trash
By this definition, Erlang and BEAM hot reloading are the gold standard of live programming.
0x445442
Or Gemstone.
worthless-trash
Do you have a link for Gemstone? Google is failing me. Is this GemStone/S?
ouid
Any property of the possible states of the program that you could establish by looking at the program itself has now been completely eliminated. Losing the state that is in memory is not universally desirable, but it is almost universally desirable.
bgm1975
And who needs version control anyway.
vanjajaja1
Version control can be built into the system. One click to deploy the change, one click to revert it. (You could even have automatic one-box/A-B deployment of a change within code)
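As a toy sketch of what "version control built into the system" could mean (an illustration of the idea only): every deploy of a named function keeps the previous definitions around, so deploying is a push and reverting is a pop.

    # Hypothetical live registry: one call to deploy, one call to revert.
    class LiveRegistry:
        def __init__(self):
            self._history = {}                # name -> list of function versions

        def deploy(self, name, fn):
            self._history.setdefault(name, []).append(fn)

        def revert(self, name):
            versions = self._history[name]
            if len(versions) > 1:
                versions.pop()                # previous version is live again

        def call(self, name, *args, **kwargs):
            return self._history[name][-1](*args, **kwargs)

    registry = LiveRegistry()
    registry.deploy("greet", lambda who: "hello, " + who)
    registry.deploy("greet", lambda who: "hi, " + who + "!")
    print(registry.call("greet", "world"))    # hi, world!
    registry.revert("greet")
    print(registry.call("greet", "world"))    # hello, world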
lmm
You end up building an ad-hoc, informally specified, slow implementation of half of a regular development environment "inside" your system. These "closed world" systems can do some amazing things, but it's very rarely enough to make up for not being able to work with standard tools from outside the ecosystem.
bdamm
Sounds like a reliability and auditability nightmare. And, in securing software systems, full bills of materials and even reproducible builds are used to lock the software to known good versions. How could software that can change at the whim of a dev in the ops environment ever be considered reproducible? Wouldn't the result be just a hugely fragile code state with no known source?
jrockway
Clearly some people just don't care about that sort of thing. Look at the popularity of Jupyter notebooks, Julia, etc.

There just isn't much forcing people to engage in good software development and operational strategies. Sometimes there are regulations in the field (PCI, HIPAA), sometimes adversaries force care (state sponsored attacks on Google), but that's kind of rare. The usual forcing function is a competitor that delivers features faster than you. In that case, maybe "live programming" is worth a look. I've never seen "less bugs" on a product comparison sheet, for example.

(It's a contentious view, and I'm personally not a fan. I like it for software I'm sitting in front of, like Emacs, but hate it for things that need to run unattended on their own, which is like everything except text editors. And "less bugs" is a big selling point for me personally. Every bug I run into consumes time that I'd rather spend elsewhere. But, not everyone sees it that way.)

sundarurfriend
> Look at the popularity of Jupyter notebooks, Julia, etc.

Neither of these (inherently) have the kind of "live programming" environment that the original comment was talking about.

Jupyter notebooks have a different reproducibility problem though, with how easy it is to create hidden state. But they're not intended to be used in production at all anyway, so the problem mainly affects pedagogical and information sharing use cases.

Julia with the Revise.jl package installed and loaded comes kinda close to the live programming model - but the state is always preserved in the source file necessarily with Revise (and it's a development tool too, not one loaded in production). Since you mentioned it alongside Jupyter, maybe you meant the Pluto notebook for Julia? There too the state is always preserved, both in terms of packages (with Julia's usual Manifest file) and in terms of your code (no hidden state like Jupyter, since it's a reactive notebook).

fardo
>I've never seen "less bugs" on a product comparison sheet, for example.

Those show up, your sales department just uses the more formal names for the actual customer demands that “less bugs” encodes.

That will look like some combination of

• “We have an SLA for an annual 99.9xx% uptime and average sub x00 latency!” (so if a bug causes significant annual reduction in service, your business gets penalized),

• ”We guarantee regulatory compliance!” (so if a bug or business use causes regulatory issues, it’s your ass on the line),

• “We guarantee 24/7 same day chat or email support!” (so if a bug causes an outage, they have a warm body they can yell at, demand an explanation for their customers, and ask questions)

• “We guarantee backups, data redundancy, and worldwide data replication!” (so if a bug blows up one of your data centers somewhere, or the intern fat-fingers an accidental deletion of the production user database, the customer doesn’t even notice something went wrong)

• “We guarantee API backwards compatibility or a service maintenance guarantee until 20xx!” (So you’re forced to fix bugs, and the bug isn’t ‘your PM team might try to #KilledByGoogle’ the service you invested in building infrastructure on.)

Teams that believe in the health, care, and security of their design are far better equipped to offer the above valuable terms to customers to gain competitive business advantage, and teams with bad engineering hygiene are likely to be scared off.

bcrosby95
Production environments aren't the only environment software is run in.

Pretty much by definition, more code touches non-production environments than is ever run in a production one: everything that touches production likely passed through at least 1 other environment beforehand. Then you have the code that never made it to production due to it being buggy, etc.

williamcotton
If all you have is a hammer…
Supermancho
> Sounds like a reliability and auditability nightmare.

Why? Reliability, auditability, and testing aren't affected. This is about development methodology and how language syntax and runtimes can enable it.

cgarvis
Smalltalk has a change file and the ability to be source controlled. My understanding is that the source-controlled output is not human-friendly.
0x445442
Actually there are serializable forms like TOML that are very readable.
Jtsummers
https://github.com/pharo-project/pharo/blob/Pharo11/src/Anno...

(random class selected)

It's not that human-unfriendly, if you know Smalltalk (which is not a terribly hard thing to learn). You also wouldn't normally interact with it in that form, but rather inside of Pharo using its class browser.

mst
I've never actually -written- any Smalltalk and can read that code just fine.
peterkelly
Excellent question.

The way I see it, there are two scenarios where this approach can be useful, which I'll address separately.

The first is for local development. When iterating on a solution to a problem, it can be a hassle to restart the program from the beginning and incur the time penalty to get to the part you're modifying in order to test it. This can include not just compile times, but also program startup, loading in necessary data, and going through a series of steps either in a UI or via API calls to get the system into a state where it's about to trigger the functionality you're working on and want to test.

For example, I'm currently working on a business workflow that has a bunch of steps that require someone to fill in several forms, and things happen after each of those. When I'm working on stage 5 of that process, testing it involves repeating all the actions necessary to go through steps 1, 2, 3, and 4. Yes, it's possible with extra effort to automate this, or to create mock state that can be used to jump directly to step 5 on launch, but that's just a manual substitute for what a live programming environment gives you automatically. A similar situation occurs in game development; you don't want to play through half the game every time you want to get to a particular part where you've changed how a particular enemy acts.

The second scenario is production. You're correct that it's important to only deploy code that's undergone proper testing and review, has a known version, and is reproducible. However this is orthogonal to the use of a live programming environment. With the latter, you wouldn't just have developers interacting with it in an ad-hoc fashion (just as you wouldn't have developers directly modifying python scripts on a production server). Instead, the ability to modify running programs can be used as a deployment mechanism. Once a set of changes has been made in a local development environment, undergone testing, and is deemed ready for deployment, the same mechanisms for updating running code that are used for development purposes can be used as a way of deploying code. And in production, the fact that you don't need to restart the process means you can avoid downtime.

As far as deployment and versioning is concerned, you can think of it as being similar to how you would do those things for a server based on CGI scripts or similar (e.g. PHP), where every time a request is served, the file is read from disk and executed. The difference there is that all state has to live in a database, so if you have long-running processes, e.g. business workflows that span days or weeks, all state transitions/control flow has to be managed manually, rather than using language features like sequencing, iteration, and function calls. With a live programming environment that supports persistence (meaning that execution state is stored in a file or database in a manner transparent to the programmer), deployment consists of adding/updating a set of objects in the data store, rather than copying a set of files to a particular location.

An example of a system that supports runtime code updates is Erlang, though it is not persistent. An example of a system that is persistent is Ethereum, though that doesn't support runtime code updates. Smalltalk and its variants support both.

Auditability can be supported by ensuring all changes to code or runtime state are made through transactions.

skissane
Why can't a "live coding" interface have auditing, access control, etc, just like any other interface? In production, disable it completely or limit its access to highly privileged users – but developers can still enjoy it in their local environments.

A good implementation of the idea would incorporate version control – when a service starts, it knows which version of the code it is running (commit hash, etc), and it has an introspection interface to report that version. If any "live coding" changes are made, it knows exactly which classes/methods/functions/etc have been changed compared to that initial version, and reports that through that interface too. You can then have a centralised configuration management system, which polls all services to find out which version they are running (and whether there are any additional changes beyond that version), and alerts if any production system is running different code from what it is supposed to be. Since your "live coding" interface is audited, you know the exact timestamp/username/etc of the change, and so anyone making inappropriate changes to production systems can be caught and dealt with.
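A minimal sketch of what that introspection could report (hypothetical names, not any real service's API): the process remembers the commit it started from and records every live patch applied to it, so a central system can see both the base version and any drift.

    # Hypothetical version/drift report for a live-patchable service.
    import json
    import time

    class VersionInfo:
        def __init__(self, base_commit):
            self.base_commit = base_commit
            self.live_patches = []            # audit trail of live-coding changes

        def record_patch(self, target, user):
            self.live_patches.append({
                "target": target,             # e.g. "billing.charge_card"
                "user": user,
                "timestamp": time.time(),
            })

        def report(self):
            """What a /version or JMX-style introspection call might return."""
            return json.dumps({
                "base_commit": self.base_commit,
                "dirty": bool(self.live_patches),
                "live_patches": self.live_patches,
            })

    info = VersionInfo(base_commit="0a1b2c3")             # hypothetical hash
    info.record_patch(target="billing.charge_card", user="jdoe")
    print(info.report())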

In production, a "live coding" interface can be used to enable live patching – so you can apply a patch to a running service, without even having to restart it. Of course, the patch would be tested first in lower environments, and the interface would be invoked by some patching tool / deployment pipeline, not an individual developer.

robocat
I assume SQL stored procedures on some databases have some of those features? It seems like the sort of thing that database engine developers would consider, and the atomic semantics of committing stored procedures to a database lend themselves to that model more closely (for databases with atomic DDL).

I worked with an Object Oriented database that had some cool features because the code was stored as highly structured data within the database itself, i.e. the code you saved was not text at all (except when dumped/reloaded to external file - similar to exporting/importing data of a database to/from a file as a textual representation).

lmm
Stored procedures are very poorly controlled in my experience, and tend to work on a YOLO deployment model, even at e.g. big financial institutions.
pjmlp
Only in FOSS databases; Oracle, SQL Server, DB2, and Informix all have the same kind of tooling as any common programming language.
lmm
Not my experience at all. Maybe the tooling notionally exists, but it's certainly not in use.
pjmlp
When people don't bother to master their tools they only have themselves to blame.

Even back in the DDJ days there were articles about them.

lmm
Maybe. Or maybe the tool has the capabilities in some notional box-ticky sense but not in a way that's actually effective in practice.
hoseja
next stop, self-modifying code, yay!
lightedman
That sounds like a shitstorm of a security nightmare waiting to happen. Someone hacks in, compromises the program, re-writes it to do whatever they want.

No thanks.

Supermancho
Docker containers, any scripting language file, etc. are not a "shitstorm of a security nightmare". Security is generally not focused at the artifact level (whatever artifacts might exist).
lightedman
I don't think you fully grasped what I was implying. Let me give you an example, since I have real-world experience with this. I once made a little 2D game that would allow for live modifications to the game world and code while active, from your character avatar to the tiles (assuming you 'owned' the area) to adding code live to make things (one person made a live arcade!) - it was basically a 2D Second Life.

And it was a complete security nightmare. It had to shut down after about two weeks, due to rampant abuse. One person managed to escape the game world, then the VM which contained it, then wreak havoc on the host machine running several other instances of the game (for linked worlds.)

If you aren't focusing on security at every level, you're asking to get wrecked. You think things are secure, man can make it, man can and will break it, eventually.

Supermancho
> I don't think you fully grasped what I was implying.

I understood you. This example doesn't illustrate the same concept. You're talking about allowing RPC into a running program. The talk is about RPC to your IDE as you develop the program. These are very different situations.

The colloquialism "thinking about security at every level" doesn't mean that shipping a program on air-gapped faraday-caged hardware is the only security. Any machine running a website can have the program modified by changing the HTML (or HTML generating code) at any time. Ruby, PHP, Perl, or even Tomcat (which will reload artifacts in realtime, without some tweaks) are hobbled versions of the same concept. Elixir/Erlang and LISP coding (et al) is live coding due to the nature of the runtime.

This idea of having an interactive program, as you develop, does not preclude a hardened artifact (which has never been the problem, since swapping out a replacement that's hardened would make that pointless) but that's partly the point. Making new toys and features, the talk is about the important elements to keep focusing on and why to move development forward.

peterkelly
It's no different to someone hacking in and then copying over a modified executable, changing entries in the database, or attaching to a running process using gdb to inject code.

You still need proper access control.

spc476
How would that work in real life? It sounds nice in theory, but as the saying goes, "in theory, there is no difference, in practice, there is."

One programmer fixing a bug, one adding a new feature, are both working on the same program at the same time? Or is each one working on their own copy? And how does the merge happen if they're working on separate copies? How is the updated code moved to production? How is this "live coding" supposed to work?

nequo
> It sounds nice in theory, but as the saying goes

Listen to Ron Garret’s interview on Corecursive.[1] He sent verified Lisp code to Mars at NASA. The code failed but the debugger popped up and they were able to recover from it. Look for “Debugging Code in Space” and “Sending S-Expressions” in the time stamps.

[1] https://corecursive.com/lisp-in-space-with-ron-garret/

brundolf
I'm not sure mainstream development practices would automatically benefit from mimicking the ones used for such an extreme scenario
nequo
You're right, possibly not, but it has its place.
grzm
Here's another account of the same story: https://flownet.com/gat/jpl-lisp.html

Ron Garret (https://news.ycombinator.com/user?id=lisper) is a frequent contributor on HN.

Jtsummers
By using source code and version control. It's not magic, check out Pharo + Iceberg (their integration with git). You still end up with a live environment for much of your work, but in a way that still works well for collaboration. Just don't be foolish and think, "Oh, I can redefine this live and never commit it anywhere and that'll be fine." That's not very smart, and you don't want people to think you're an idiot, so don't do it.
ahtihn
> That's not very smart, and you don't want people to think you're an idiot, so don't do it.

Yeah, that works about as well as saying you don't need tests because it's not very smart to write bugs. You need processes to enforce these things at scale.

Jtsummers
I mean, was it that hard to get your team to adopt git or another version control system? I've only had one team that struggled with that, and they struggled with a lot of things. Everyone else adapted quickly to it. Working in Smalltalk would be no different in that regard. It's just as easy and just as hard as working with git in C# or another language. Someone forgets to check in a new source file, everyone else has a broken build. It's obvious quickly and gets addressed.
skydhash
À la Emacs, I'd say. Emacs is basically a live environment where you can edit text. While you're running it, you can add new features on the go, then save the modifications once you're happy with them. Imagine if you opened Apple Notes and wanted to change a few things, maybe adding an interface to some services. You'd just open the live environment, code your changes, and voilà.
pjmlp
On your development environment mostly.

Smalltalk, Lisp, Scheme, Erlang, Haskell, OCaml, Java, .NET, C++ (via VS, Live++, Unreal) allow for this kind of interactivity, where Smalltalk and Lisp variants win in tooling.

Some Lisp variants, Erlang, and Java (via JMX beans / JVMTI) also allow connecting to a running application in production and changing its behaviour, although in Java's case only a subset of possible changes is supported.

lokedhs
The sound in this video is pretty bad, but several years ago I did a talk showing how this development style works on a real project, and I think it may answer some of these questions. https://www.youtube.com/watch?v=bl8jQ2wRh6k
singularity2001
Try irb (the interactive Ruby console) with a running Ruby on Rails server to get a feeling for what magic is possible
akkartik
As I understand it:

* There's still a notion of production. Most people work on dev environments just like today. You do need the ability to merge code in some reliable form from one environment to another. The Clerk project mentioned in the talk does this.

* Dev environments are more fluid. You can debug issues faster because you spend less time getting to arcane parts of your program after every restart.

* It is possible to live-edit production for important incidents. It's very much a weapon of last resort, you have to be super careful, and you probably want to rehearse in staging the way NASA does with their rovers. But it has the promise to reduce the amount of time your customers are impacted in major incidents.

This talk inspired me to go build a live programming environment in my current stack of choice: https://spectra.video/w/wkDB5fsjBNBbsqKXGhGzwT

axilmar
While I understand that an interactive environment could be useful in a lot of cases, in other cases it might just be in the way...for example, when I am designing an API, usually I don't need any sort of visualization...on the other hand, if I want to explore data, then I do need visualization.

Also, his critique of my favorite language, C++, is unfair. C++ was born out of the necessity to take advantage of the hardware in the best way possible. And while it might be a mix of different ideas, it does work well in a lot of cases; testament to that is the software we use to communicate, the browsers, the web, etc., the infrastructure of which is written almost exclusively in C++.

Furthermore, static typing helps a lot in complex programs. There are concrete examples around where static typing greatly helps solve complex problems. And complexity is not about algorithms only, it's also about change over time. A piece of code that does not have type annotations can become an ugly spaghetti mess really quickly, and that must be multiplied by ten each time a new developer is added to a team.

Very entertaining talk, by the way. At no point was it a drag. The presenter has a real talent for it.

alexeldeib
anyone know how the timestamps in this article were produced? format isn't great on desktop but the timestamps are interesting and I can't imagine those were done by hand...I guess the line width must be fixed so they are always correct (?) which probably also borks desktop viewing
duncan-donuts
Zoom does a similar thing with its closed captioning feature. Not sure if this was generated from Zoom or something else, but I’ve seen this sort of script come from automated closed captioning features.
jackrusher
I downloaded the automatic transcript from the YouTube video and wrote some code to reformat it in this way to make referencing the position in the video easier. I should probably have linked each time code to open the video at that point, but I'm a bit time constrained this week.
sfink
Yep, I've done this too. I had a video with a live youtube transcript for a talk, but in addition I had a manually written transcript from one of the attendees. She wasn't trying to make it word-for-word perfect, but it was reasonably close and obviously had better formatting.

The automatic transcript was fairly poor quality but had fairly precise timestamps. The manual transcript lacked timestamps but was high quality. So I used an approximate matching algorithm to combine them and produce a clickable version of the manual transcript where every group of words was a direct link to that portion of the youtube video. It all worked out surprisingly well. (The other piece was that I hand-inserted annotations to produce an index of various topics and concepts that I thought were significant.)
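
The core of that kind of approximate matching can be sketched in a few lines of Python (made-up data here; the real tool needed much more cleanup and word-group handling than this):

    # Align an auto transcript (timestamps, sloppy words) against a manual
    # transcript (good words, no timestamps) and borrow the timestamps.
    import difflib

    auto = [(0.0, "hello every one"), (3.2, "to day we talk about"), (6.8, "live programming")]
    manual = "Hello, everyone. Today we'll talk about live programming."

    auto_words = [w for _, chunk in auto for w in chunk.split()]
    word_times = [t for t, chunk in auto for _ in chunk.split()]
    manual_words = manual.split()

    def norm(w):
        return w.lower().strip(".,")

    matcher = difflib.SequenceMatcher(None, [norm(w) for w in auto_words],
                                      [norm(w) for w in manual_words])

    # Every matched word in the manual transcript gets the timestamp of the
    # auto-transcript chunk it lines up with.
    for a, b, size in matcher.get_matching_blocks():
        for k in range(size):
            print(f"{word_times[a + k]:6.1f}s  {manual_words[b + k]}")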

I don't know how common of a situation this is, since it requires having a high-fidelity human-created transcript. I could clean up the tools and release them, I suppose. I did this for a birthday present.

(I don't have a demo because it's a private video, sadly, and I have rights to neither the video nor the manual transcript.)

globalreset
I enjoyed watching it, but plenty of this talk is completely misguided. Had some of the ideas from there been so much better, surely by now the people sharing them would have shown the people not sharing them how much better they can do, and would have overtaken the industry.

Not to waste my time - most of the ideas expressed there lack composability, and composability trumps almost everything else in the long run. That's something enthusiasts of VMs, dynamic typing and runtimes don't want to understand, and it's why stuff like live coding and interactive programming is only ever employed for small-scope, throw-away things.

Visual programming is harder to automate, operate on, reproduce, compose, and so on.

HankB99
My first exposure to computers was using punched cards to write Fortran. I wonder if the author was a classmate (at Purdue Calumet.)

Otherwise I did not have the patience to read much past that.

williamcotton
The problem is that most people love their tools more than they love to build things.

“You’ll have to pry vim from my cold, dead hands!”

Ologn
I started using vi in 1989. I still use vi, almost every day. I've had to learn my way around I don't know how many text editors, IDEs and so forth over time, often to throw that away when the new thing comes out. Not so with vi - most of what I was doing in 1989 in vi I am still doing in vi.
galoisscobi
These two things don’t necessarily have to be conflated.

I rarely touch my vim config, but at the same time, during development I write mini-programs (using macros, regexes and buffer operations) in vim on the fly that can write code for me or perform refactorings in ways that IDEs typically can't. I spend little to no time updating my vim tooling.

williamcotton
Jack is looking towards the inevitable paradigm shift away from primarily text-based programming to whatever the future may hold. It's to be expected that most people will say they are perfectly productive with how things are, thank you very much.
kibwen
I'll be happy to use a better paradigm than text when someone comes up with one. In the meantime, all the attempts to replace programs-as-text have failed to result in any actual improvement, save for niches like e.g. StarLogo that cater to beginners.
qu4z-2
I'm sure it's inevitable, but I'd like to point out it's been inevitable for at least 50 years.
AnimalMuppet
Yeah, well, "whatever the future may hold" isn't enough to get people to switch. I'm at least somewhat productive with how things are. You want me to switch? Show me how to be more productive with something that is concretely available today. You have something that may be revolutionary sometime in the future? Then I'll care sometime in the future, if it turns out to be actually revolutionary.

I can't be productive on vaporware and dreams.

yazzku
The "it works on my machine" meme killed me.
noobermin
I made my way through part of the talk on youtube, it was interesting but far too much for me to stomach. It really drove it home when he flashed slides of graphic organizations of complex data. While some parts might be better than a textual description, graphic representation of data needs to be useful, that is, you need to be able to look at a graph and glean information from it. At the section where he flashes an image of a re-imagined periodic table[0], he immediately exclaimed "look how beautiful it is!" and I was immediately instead confused. What does this image show? If it is a periodic table, can I see the periodicity? For example, can I tell two elements have electrons in the same shell by just looking at it? Can I see if two elements have the same filling of the subshell, and thus will covalently bind in similar ways? Can I tell which elements are more electronegative? I can glean all that from the "boring periodic table," which yes is a graphic representation and cannot easily be written in sequential text, but this new representation doesn't give me any of that at least as far as I can tell. The desire to represent an already existing set of data in a new graphic representation that seems to give no value apart from being novel is not helpful.

I can extrapolate this down. A lot of things in life are the result of historical evolution; that is just how things are. While that history can lead to problems, it is just the way things are. And yet, it should be beyond obvious that not everything new is good. For example, while sure, the examples he gave were clunky, I can say with absolute certainty that there are times when playing games on my NES is more enjoyable than playing games on my "supercomputer," because on my NES I can play until I'm blue in the face, whereas on my phone I have to stop every 2 minutes and watch an ad that makes my phone even hotter than the console. When what I value is "having fun" instead of "newness" or "shiny graphics with anime girls," I can see that an older device is better. Should I conclude that "advancement in technology is bad for enjoyment and for my sanity"?

No, because I am not that shallow in my reasoning here. It's clear that "old" vs "new" for my NES vs. mobile game comparison is really a complaint about the change in monetization models, and thus a different user experience for the gamer. In fact, the "old" vs. "new" argument obscures the real difference worth consideration, substituting the argument for a fight between nostalgia and novelty.

"Old" and "new" should never be what anyone focuses on because more often than not, such labels obscure the true conflict. Really, certain people like the presenter value things other than "familiarity," things like "introspection" and "graphical representation" and "concurrency," and he's mad people resist what he likes. Thus, he chalks it up to others stuck in their ways, clinging to "history." The thing is, "familiarity" isn't the only argument people have for why they don't use Clojure or reactive graphical displays: "familiarity" is often a stand-in for other things that the people he critiques value, things like their time and latitude to learn new things, or the fact that indeed, some things are actually better expressed in text and as a sequential program than visually or concurrently, or that there are cases where those models are still infeasible on a computer even today (none of my electromagnetic simulations other than the simplest will work in a reactive notebook because while computers are much faster, being much faster means I just do larger simulations that can no longer actually run well in a reactive setting). That to me is the more (pun not intended) valuable discussion, a discussion of the actual "innovations" and the values you have in mind when you evaluate them. Like, sell me on why I even care about introspection in a computational physics simulation code when I think I'm doing fine without it. That to me is the more interesting discussion, a discussion of values.

But that's the problem, just making another talk about how modern lispy introspection or something is cool is just another technical talk, and it certainly feels much less cathartic than painting everyone else who hasn't adopted your reactive notebooks as being luddites clinging to their VT100 emulating terminal windows. But that to me feels like where the actual meat is, a discussion of boring technical topics because there I can respond with actual concerns or reasons I can't use this or that, and such a discussion would actually be more interesting and productive for me and for people who want to sell newer paradigms.

[0] https://youtu.be/8Ab3ArE8W3s?t=1359

peterkelly
The ideas he's promoting aren't new, and in fact predate even the NES.

They date back to Smalltalk, which was created at Xerox PARC in the 1970s: https://www.youtube.com/watch?v=uknEhXyZgsg

bradrn
> At the section where he flashes an image of a re-imagined periodic table, he immediately exclaimed "look how beautiful it is!" and I was immediately instead confused …

As it happens, this particular version of the periodic table is my own favourite. It’s perhaps easier to see why it’s so nice if you look at a larger version [0] — the periodicity is extremely obvious, not just in the elements themselves but also in the arrangement of the d- and f-blocks (which are in fact obscured in the usual periodic table). On the other hand, I suppose it’s true that trends in electronegativity etc. become somewhat less obvious. As with everything, it’s a trade-off.

[0] https://upload.wikimedia.org/wikipedia/commons/c/ce/Elements...

drewcoo
The problem is not the programs.

The problem is not the tools.

The problem is that the tradespeople and craftsfolk who make them are, whether intentionally or not, wage slaves.

They do not make for the joy and art of making, but are constrained by monetary need.

They do not seek to grow as makers and share what it is to make because . . . no one would pay them for that.

If we want better tech and better ways of doing tech, we need to start being paid like smart people who can not only do what management can't, but who also insist on doing it in a way that's creative. In a way not managed as a commodity.

Here on HN, we see the hacker (at best) as a commodity. This is not the right community for the message, imo.

sfink
It's a persuasive argument, but most of the "dead" tools mentioned in the talk were created in an era when their creators were inspired more by the beauty of their creations than by their relevance to some bottom line. C, C++, Pascal, Python, Go, .NET, ML, Docker, etc. were all made by people who saw a niche and sculpted something new for the joy of it. (Not just for the joy, but still.)

The current "...but does it monetize?" culture is a recent thing. For most of the history of computing, nearly everything was done by starry-eyed idealists and outcasts who were in love with exploring the new conceptual frontiers opened up by technology. Sure, even then there were Larry Ellisons out to skin the selkies and sell their pelts to the highest bidder, but they weren't driving the field. The vast majority of developments came from people on BBSes or MUDs or just in their private basement rooms, who delighted in coming up with working demonstrations of crazy new ideas and showing them off.

Silicon Valley was the Kingdom of the Geek, before the Geek had been tempted to crawl into the gilt cage and be locked inside by the army of MBAs and VCs. But we did it to ourselves—where once we might show off a live coding hack or a self-modifying set of scenes in a MUSH, now we show off our new Tesla or whatever the latest variant of a giant cell phone on wheels is.

(I don't disagree with what we should be moving towards.)

pie_flavor
People are still using the same screwdrivers and screws they used a hundred years ago. Phillips head isn't even that good - you can easily strip the head over a few uses, or even one, if you don't know what you're doing. There have been innovations in big machines (the analogy here would be Hadoop or whatever). But at the small level, people are still using the basic hand implements they've always used because they always work, they work everywhere, everyone else knows them, and it's pretty hard to get the design for new ones wrong.

Like the author says, his ideas don't mean you should use Smalltalk or Lisp. Just that you should demand features, like how it took until Rust for sum types to escape functional. But the reason you shouldn't use Smalltalk is also the reason why languages like Smalltalk aren't going to get made: because when you're making a general-purpose language, it is extraordinarily hard to paradigmatically improve on what came before, and it's very easy to just get it all wrong and make a pile of trash that makes developers at the few companies that adopt it mad at you. Even Rust is not that amazing in this regard; its wild and new data model is in fact the exact same one you were supposed to be using in C++, just minus the ability to refuse.

Everyone smart who was able to make these things has been snapped up by big data where their talents produce the most direct value.

robocat
And flathead screws often work when somewhat rusty (the slot is easy to clean). Good luck with many other screw head designs, which are easy to use only when in new condition.
selcuka
Square (Robertson) heads are fine, even when fairly rusty.
gumby
Screws have evolved a lot over the years, in particular over the last century. Instead of precision screws being made from a pattern, screws like metric ones have specifically designed pitch and thread shape.

We still use wheels but they don’t look like ones on old waggons either.

kragen
The Whitworth thread form (55° thread angle, a thread depth of 0.640327p, radius of 0.137329p, rounded roots and crests, and a set of standard pitches p) was designed in 01841; that's when screws started being made with specifically designed pitch and thread shape. The Unified Thread Standard used today is virtually identical to the Sellers thread from 01864, though the metric pitches were added later, and standardized internationally in 01898, and some problems in the UTS were ironed out in 01949 in the wake of wartime Whitworth/Sellers incompatibilities.

https://en.wikipedia.org/wiki/Screw_thread#History_of_standa...

So I would say screw threads have only evolved very slightly over the last century. Screw heads evolved quite a lot during that time, though: Robertson heads are from 01907 (just outside the century!), Phillips got his patent in 01932, hex-key screws are from 01936, and then we have hex head, Pozidriv, Torx, external Torx, and literally dozens of others. Hex heads pretty much replaced square head screws sometime around 01960.

(Also, bolts replaced rivets for structural steel about 100 years ago, due to improvements in heat treatment that couldn't be applied to rivets.)

More interesting to me is the increasing number of snap-fit fasteners, which can often replace screws with greatly improved convenience at lower cost. These aren't always applicable, and sometimes they're designed badly (Ramagon and USB plugs come to mind) but when they're designed well they often have much longer life than screw fasteners. Also, they don't vibrate free the way screws do (without lockwire or loctite, anyway).

froh
for metal screw threads and nuts I'm with you.

for other materials there has been more recent innovation in threads, and for special uses of metal screws as well, for example self-tapping screws and screw coatings.

side question: why do you prefix years with a leading zero?

edit "long now", saw it in a sibling comment. TIL

kragen
Oh interesting! I didn't know that! What were the key innovations and when did they happen?
froh
while I know they exist and can point to an interesting collection, I don't know the history of how they were invented and when:

https://www.celofixings.com/1663-self-tapping-and-self-drill...

kragen
Thanks! But this just seems to be an empty page with a cookie clickwrap extortion box on it?
froh
this is what it says to me:

> Self-tapping and self-drilling screws

> CELO is leading the self-tapping and self-drilling screw market.

> CELO's self-tapping and self-drilling screws offer the widest selection for installations of joining metals, PVC profiles and aluminium sheets.

> In this section you will find our screws in all sizes, recesses, head types and coatings.

and then pages over pages of a catalogue.

it was the first catalogue link when searching for "self-tapping and self-drilling screws"

sneak
This is my first encounter with Long Now year formatting in the wild. Bravo.

Do you use it when not speaking of historical topics, too?

kragen
Sometimes!
koyanisqatsi
What kind of direct value are you thinking of? I don't think most data scientists and ML engineers could write compilers or tensor algebra frameworks and gradient based optimizers.
lasfter
I think OP is saying the people who can write compilers or tensor algebra frameworks and gradient-based optimizers get snapped up by big companies, not that everyone who works at big companies is so capable.
koyanisqatsi
That makes sense. Thanks.
nrclark
FWIW, the patent for Phillips Head screwdrivers states that the pattern is intentionally designed to cam out/ strip instead of allowing over-tightening.

It’s an obnoxious feature, but it’s intentional. Not some kind of “this is old and therefore bad” thing.

alwayslikethis
This seems to be a misconception. It actually doesn't say that.

"[..] and in such a way that there will be no tendency of the driver to cam out of the recess when united in operative engagement with each other."[1]

Wikipedia says:

"The design is often criticized for its tendency to cam out at lower torque levels than other "cross head" designs. There has long been a popular belief that this was a deliberate feature of the design, to assemble aluminium aircraft without overtightening the fasteners.[14]: 85 [15] Extensive evidence is lacking for this specific narrative, and the feature is not mentioned in the original patents.[16] However, a 1949 refinement to the original design described in US Patent #2,474,994[17][18][19] describes this feature. "

1. https://worldwide.espacenet.com/patent/search/family/0216985...

lupire
Your last sentence contradicts the first.
alpaca128
Not necessarily. It only being mentioned in a patent refinement could also be a case of "it's not a bug, it's a feature".
jeremysalwen
Wanted to add a reference here to Wikipedia, which claims this is a myth[1]:

> Despite popular belief,[2] there is no clear evidence that this was a deliberate design feature. When the original patent application was filed in 1933, the inventors described the key objectives as providing a screw head recess that (a) may be produced by a simple punching operation and which (b) is adapted for firm engagement with a driving tool with "no tendency of the driver to cam out".[3]

> Nevertheless, the property of the Phillips screw to easily cam out was found to be an advantage when driven by power tools of that time that had relatively unreliable slipping clutches, as cam-out protected the screw, threads, and driving bit from damage due to excessive torque. A follow-up patent refining the Phillips screw design in 1942 describes this feature

[1] https://en.wikipedia.org/wiki/Cam_out

the_only_law
Funny I tend to strip them when they’re already over tightened.
axiolite
An impact driver does a much better job than a drill or hand tools at loosening them without cam-out, whether previously damaged or not.
daveguy
This is true.

Also note: the opposite is not true. Tightening using an impact driver is a great way to cam out and damage a Phillips head screw. That's why the Torx style is much more popular with impact drills.

cvarrick
It was an engineering response to the introduction of power tools in manufacturing.

Slot head tools/fasteners transmit too much torque, which often resulted in the fastener heads shearing off when used with power tools.

Rather than build a complex torque limiter into the tool (which would need adjustment based on use case), they built it into the fastener.

bartread
Would definitely have made sense back in the day. Nowadays most cordless drills, even cheap models, have that exact torque limiter mechanism built in so you can dial in the torque you need with a simple twisting collar - use it all the time.

I hate Phillips head screws, second only to flat head screws. They're "fine I guess" for some low-torque applications, but even there they're not great. Was at my GF's house the other day trying to screw through 18mm ply with some Phillips screws, which were all she had in the length I needed. Couldn't even get them through one sheet with predrilled pilot holes without them camming out and stripping the heads. Awful. Ended up making a trip to Screwfix to get some Pozidrivs.

grzm
Video from Strangeloop 2022: https://www.youtube.com/watch?v=8Ab3ArE8W3s
apienx
"Docker shouldn't exist. It exists only because everything else is so terribly complicated that they added another layer of complexity to make it work. It's like they thought: if deployment is bad, we should make development bad too. It's just... it's not good."

Jack's not wrong. But better to look at it as throwing (increasingly cheaper) compute at the it-runs-on-my-machine problem.

SoftTalker
Docker is a response to the rejoinder of "works on my machine" when there is a production problem. Rather than working out why there is a problem in production, it's basically a way to just run the dev machine in production. Never been sure why anyone thought that was a great idea, but I guess it does solve that one issue.
dqpb
Docker is an interesting thing. I really love what I'm able to do with it. But I despise when people make it a necessary component of the development environment.
esjeon
It's not really about Docker. If the software were easy, we wouldn't need Docker this much in the first place. But software sucks, so we wrap it in Docker, but now Docker sucks, because we simply gathered all the shit together so that it spills everywhere now. Also, some of the shit is from Docker itself.
doctor_eval
When I lived in the Java world, Docker solved a heap of problems because Java has so many external dependencies - you needed the JVM, all its libraries, and all sorts of configuration files for those libraries.

But how is a docker image different from, for example, a statically compiled single-binary Go executable? Because when I work with Go, I tend to think that if what I'm working on requires something defined from outside the binary, I'm doing it wrong.

So is Docker a solution to problems that are inevitable, or is it just a solution to problems caused by other solutions?

These days I tend to think that something like Firecracker is more likely to be a solution to deployment problems than Docker is, but I haven't tried it yet...

lmm
> When I lived in the Java world, Docker solved a heap of problems because Java has so many external dependencies - you needed the JVM, all its libraries, and all sorts of configuration files for those libraries.

Lol, no you don't. You need a fat jar and you need to install the JVM on your servers (and, sure, maybe upgrade it once every three years). In the early days people actually used Java to do the same thing that docker does, by having an "application server" that you would deploy individual java apps into, before realising what a bad idea that was.

> So is Docker a solution to problems that are inevitable, or is it just a solution to problems caused by other solutions?

It's a solution to the problem that Python dependency management sucks. Unfortunately the last 6 or 7 iterations of "no, Python dependency management is good now, we've fixed it this time, honest" suggest that that's inevitable.

doctor_eval
> It's a solution to the problem that Python dependency management sucks. Unfortunately the last 6 or 7 iterations of "no, Python dependency management is good now, we've fixed it this time, honest" suggest that that's inevitable.

I didn't mean to imply that I think Docker exists solely for Java; I didn't realise that Python has the same problems, but it's unsurprising, and there are plenty of other languages/platforms that probably have the same problem. My point was that I have started to think that Docker is a solution to a problem that probably shouldn't exist.

> Lol, no you don't. You need a fat jar and you need to install the JVM on your servers (and, sure, maybe upgrade it once every three years). In the early days people actually used Java to do the same thing that docker does, by having an "application server" that you would deploy individual java apps into, before realising what a bad idea that was.

I'm surprised that you advocate for deploying JVMs to individual production servers; given that Docker exists, per-server JVM installs are quite possibly the worst possible way to manage Java deployment in a production environment. Give me Docker over this any day. Docker is great for Java apps.

As someone else pointed out, one of the benefits of Docker is that you get rid of the "it-runs-on-my-machine" problem: if you want your application to run reliably throughout dev, test and prod, then you must include the JVM in the distribution, because you can't otherwise guarantee that the JVM you're running on is the one you developed and/or tested on.

Don't even get me started on JavaEE: an operating system built by consultants, for consultants.

pjmlp
The Docker, Kubernetes, WebAssembly folks are now having their go at re-inventing JavaEE, poorly.
kuramitropolis
Kubernetes mostly.
lmm
> I'm surprised that you advocate for deploying JVMs to individual production servers; given that Docker exists, per-server JVM installs are quite possibly the worst possible way to manage Java deployment in a production environment. Give me Docker over this any day. Docker is great for Java apps.

Using Docker doesn't solve any problems though - instead of installing the right version of the JVM on all your servers, now you have to install the right version of Docker on all your servers.

(In practice if I had more than a couple of servers I'd use Puppet or something to get the right version of the JVM on all of them, sure. But you have the exact same problem when using Docker too).

> As someone else pointed out, one of the benefits of Docker is that you get rid of the "it-runs-on-my-machine" problem: if you want your application to run reliably throughout dev, test and prod, then you must include the JVM in the distribution, because you can't otherwise guarantee that the JVM you're running on is the one you developed and/or tested on.

In theory, sure. In practice, the JVM is tested and backwards compatible enough that those problems don't happen often enough to matter. You can still have "it works on this machine but not that machine" problems with Docker too - different versions of Docker have different bugs that may affect your application.

> Don't even get me started on JavaEE: an operating system built by consultants, for consultants.

I already said it was doing the same thing Docker does :).

doctor_eval
> In theory, sure. In practice, the JVM is tested and backwards compatible enough that those problems don't happen often enough to matter.

In my experience, JVM upgrades can and did cause us all sorts of problems, both subtle and not-so-subtle, depending on how big the upgrade was. Docker resolved a lot of that pain for us by making it possible to upgrade piecemeal, on a per-service basis, when we were ready.

> You can still have "it works on this machine but not that machine" problems with Docker too - different versions of Docker have different bugs that may affect your application.

I haven't experienced this, but in any case, the joy of Docker in a JVM environment with many applications is that you can pin the JVM for each individual application; you aren't forced to use whatever JVM is installed on the machine. This gave developers more freedom, because they could deploy whatever JVM environment they needed.

You can get away with more ad-hoc solutions if you're just one person, but once you have a team of people and a long list of JVM services, you need to be able to delegate control of as much of the operating environment as possible to the developers.

As I said originally, this problem doesn't exist with Go because it emits static binaries that don't generally need additional support files, which is why I'd love to see more discussion on dynamic deployment of binaries instead of docker containers.

lmm
> the joy of Docker in a JVM environment with many applications is that you can pin the JVM for each individual application; you aren't forced to use whatever JVM is installed on the machine. This gave developers more freedom, because they could deploy whatever JVM environment they needed.

If you ever find yourself needing to do this, something's gone very wrong. (Maybe you're using a dodgy framework that mucks around with JVM internals?) You can download jars from 25 years ago and run them on today's JVM no problem, to the point I have a lot more faith in JVM backward compatibility than in Docker backward compatibility.

> As I said originally, this problem doesn't exist with Go because it emits static binaries that don't generally need additional support files, which is why I'd love to see more discussion on dynamic deployment of binaries instead of docker containers.

I actually think the way forward in the long term is unikernels. Given that people are mostly going for a one-application-per-VM/container model, most of what a multiuser OS is designed for is unnecessary. For the cases where you do need isolation, containers aren't really good enough, VM-level isolation is better. And for the cases where you don't need isolation, you might as well go full serverless.

doctor_eval
> If you ever find yourself needing to do this, something's gone very wrong.

Not at all. We simply didn’t want to upgrade, test and redeploy 100+ applications every time a developer wanted to use the latest JVM on just one of them. It made the upgrade process much more incremental, predictable and safe than if we had just upgraded JVMs for everyone all at once.

> You can download jars from 25 years ago and run them on today's JVM no problem

Yeah but you may not be able to compile the source code for them. Especially if you have old code that uses things like xjc or SOAP.

EDIT: Quekid5 below points out that this wasn’t true for Java 8->11 upgrades, and it was actually the desire to upgrade to 11 and to get on the faster Java release train that really made Docker images our deployment system of choice.

> I actually think the way forward in the long term is unikernels.

On this we appear to agree, which is why I mentioned firecracker in my original post:

> These days I tend to think that something like Firecracker is more likely to be a solution to deployment problems than Docker is, but I haven't tried it yet...

https://firecracker-microvm.github.io/

Quekid5
> now you have to install the right version of Docker on all your servers.

No, you don't. You might not even have to install Docker, e.g. Podman will probably do just fine. "All" docker is doing is calling some kernel code to set up namespaces, etc.

With very few exceptions the only thing you might have to worry about is the kernel version, but given Linux's historical compatibility story there and the fact that the JVM doesn't really rely on any esoteric kernel features, you'll also be fine there.

With the JVM, the changes around modularization from 8->11 were hugely disruptive, such that you couldn't just run any old 8 program on 11, so you couldn't upgrade the JVM unless all the JVM-based stuff running on the server was upgraded in one go, etc. etc.

doctor_eval
Yes, I was going to say that Docker is just a wrapper around exec, so the list of things that can go wrong with Docker seems a lot smaller than the list of things that can go wrong with Java.

That said, Docker networking… :(

0x445442
Sorry, but from a Java perspective I fail to see Docker as an improvement over deploying a WAR.
doctor_eval
Well for one thing Docker tends to keep running for a long time while the several JEE App servers I used all needed rebooting after a few deploys just to keep them running. I mean ... don't even get me started, Java app servers were a nightmare and I was so very happy when we were able to ditch ours and go back to a sane architecture.
0x445442
Hmm, WebSphere was solid as a rock when I used it in production.
doctor_eval
History hasn’t really been kind to JavaEE though.
vbezhenar
I live in Java world and docker solves exactly 0 problems for me.

Maven or Gradle deals with assembling all the necessary jars into a single directory (or uberjar if you want, I don't like it).

JVM is another directory.

Running application is a shell script of 2-3 lines.

So my Java application is: directory with JVM, directory with jars and start.sh.

It works almost everywhere. It's simpler than Docker. I can replace the JVM with a Windows version and start.sh with start.bat and it'll run on Windows. Natively. Can't do that with Docker.

To build my application one would need to have maven (one directory), JVM (another directory) and project sources. Set up two env vars (JAVA_HOME, PATH) and run mvn verify. That's about it. Windows, Linux, macOS, doesn't matter.

Single binary is simpler than a directory with some files. But not much simpler.

I use Docker, because that's the way things are done nowadays. I think that for a personal project which does not need Kubernetes, I wouldn't use it.

bjconlan
I totally agree, and I think we've all accepted its (Docker/OCI's) place regarding deployability on infrastructure not managed by Java developers.

Recently I've done a few migration projects and the biggest pain point (not really a pain, but not k8s-friendly) is the containerization of JEE servers/services, as these solved most of what containers provide (deployment-wise), albeit only for Java. 'DevOps' generally killed this (and the related tech debt), but it's hard to validate the utility of any of this, as the 'generalized' solution feels like it's been compromised down to a point of convenience.

In saying that, I do enjoy having a deployment platform that is always going to be Linux(-like).

I don't think I would use it either for a personal project; but I would perhaps investigate how hard it would be to add some level of "kubelet" (CRI)-like JVM integration.

verisimilitudes
It's a shame he doesn't mention Ada, which is static but oh so nice about it.

Right, but the problem is – as Peter Harkins mentions here – that programmers have this tendency to, once they master something hard (often pointlessly hard), rather than then making it easy they feel proud of themselves for having done it and just perpetuate the hard nonsense.

Yes, see the entire history of UNIX. I convinced someone programming was a bad choice for his major because of this stupid attitude.

I'm still reading, but I like how he takes issue with what currently passes for machine text. Some of my work covers the same problem. I should send him an e-mail.

moonchild
In what respect is Ada nicer about its staticness than, say, an ML?
bioemerl
If you ever want a fun time, suggest a common sense usability improvement for Linux in a Linux community and watch them explode.
verisimilitudes
I like pointing out that millions of lines of code in the Linux kernel have no real memory exhaustion strategy beyond randomly killing a process. Those are millions of lines of code, few of which are reusable, and yet they do so very little.
theamk
Well, there are lots of replacements for common commands like "ls", "grep", "find", even "cd", and some of them are pretty popular. And there is a wide variety of shells and terminals. No one is complaining about them, and the worst you get is an "I don't care / not my cup of tea" attitude.

Of course, the key idea is to keep compatibility with the existing world, for example by choosing a new command name ("rg"? "ack"?). If you just take over an existing name that has been used for years and break existing scripts, people will be unhappy.

mwcampbell
> The reason I disagree with this position is because the visual cortex exists. [...] There's no reason to eschew [graphics] when it comes to program representation.

Not everyone can take advantage of the visual cortex. Please don't take away one area where we're on a fairly level playing field.

akkartik
https://en.wikipedia.org/wiki/Harrison_Bergeron
mwcampbell
That story is clearly absurd. But in the real world, it might not always be such a bad thing to hold back the runaway feedback loop where the people with the most advantages gain even more.

If non-textual program representations do catch on, based on the unchecked assumption (as in this talk) that everyone has normal vision, then figuring out accessibility for future non-textual program representations will be good job security for someone, if that work gets funded. I just hope that no blind programmers lose their jobs in the meantime. (Personally, I could probably muddle through with my limited vision, albeit possibly with lower productivity.)

I should be used to people like me and some of my friends being routinely overlooked, as if we don't exist or can be relegated to a footnote, but sometimes it gets to me.

akkartik
I'm sorry to hear it :/

I've been trying harder to provide alt text for images, for what that's worth...

nigerianbrince
The easier it is to do engineering, the quicker we will be able to stream a camera into your brain.
switchbak
So don't leverage a powerful capability because a small number of people can't use it?

That's like saying don't build sidewalks because some people can't walk.

We already lean heavily on the language centers, I don't see why we shouldn't lean on the visual and spatial ones.

rini17
I think the visual cortex did not evolve to process dense symbolic information, with the emphasis on symbolic. Dense diagrams are much more difficult for most people than plain text arranged in lines or in a grid.
skybrian
It's not very skimmable. Some headings would help a lot here.
williamcotton
The basic gist is that our tooling has stagnated (and even regressed) but that there is some great work happening outside of the mainstream.
grzm
The presentation is great. Jack Rusher has a lot of energy. I recommend the video if you have time. The notes in the transcript are useful, too; both expansions of what he said and references to further material.
tomrod
Fantastic talk. Loved it! Thanks for the post and the commenters who recommended it.

Formatting was terrible -- even when viewing source!

This made it somewhat readable

    # Fetch the transcript page, drop the "00:..."-style timestamp strings,
    # and join the remaining text into one readable blob.
    import requests
    from bs4 import BeautifulSoup

    url = 'https://jackrusher.com/strange-loop-2022/'
    bs = BeautifulSoup(requests.get(url).text, 'html.parser')  # explicit parser avoids the warning
    muh_text = ' '.join(x for x in bs.stripped_strings if not x.startswith('00'))
    print(muh_text)
remram
I went with this JavaScript:

    // Merge the transcript's per-timestamp <p> elements into full paragraphs,
    // dropping the timestamp spans; "aside" blocks reset the accumulator.
    let c;
    for (let p of document.querySelectorAll('body>p, body>div')) {
      if (p.classList.contains('aside')) { c = undefined; continue; }
      p.querySelector('span.time')?.remove();  // tolerate paragraphs without a timestamp
      if (c) {
        c.innerHTML += ' ' + p.innerHTML;
        p.remove();
      } else {
        c = p;
      }
    }
And this CSS:

    body p { width: 100%; }
    body div.aside { width: 100%; border: 1px solid black; }
I'm thinking maybe that page was supposed to be embedded somewhere, next to a video maybe? It wasn't meant to be read like this right?
tomrod
Really shoddy DRM maybe?
jackrusher
Definitely not!

Sorry you hated the formatting. The transcript is meant to be an assistive technology for the video, and a place to put extra notes I couldn't fit into the time I had. Ideally, the transcript would scroll as the video advances and the timestamps would move the playhead to that part of the talk, but I haven't time this week to do as much hacking on that as I'd like.

tomrod
Absolutely no issue! I've only just come across your work and am astounded at both its breadth and depth.

I thought perhaps the page wasn't owned by you, and that someone had ripped it from subtitles or something similar.

The link for the article looks to have changed; before it was to a website that had text next to timestamps.

jmartin2683
Watched the video of this on YouTube. Overall a great talk, though I don’t agree with many of his conclusions. He seems to argue that ‘move fast and break things’ is the way forward, and that big dynamic runtimes allow for this with great runtime debugging etc. I prefer to catch my bugs at compile time, and have found this to be a far more reliable path to actually finishing a software project.
jrvarela56
I didn't get that from the talk. What the author meant (I'm also extrapolating from advocates of dynamic languages) is that quality increases with iteration. By changing and running your program, you can find edge cases you hadn't thought about and modify it to make it testable, modifiable, easy to inspect/visualize, etc. An environment that reduces the friction required to tinker with a program allows you to make it robust too - if you so wish. If you just want to play, then you're free to do that too (and the creative process benefits from being able to do so).

I'm not advocating for only using this workflow, ideally we could add types too. Compilers can enable an iterative workflow (Elm comes to mind), but I find myself sprinkling types as I go exploring how my program will accomplish the task (TS without strict).

The pendulum has swung to type-everything-first and I'm not sure it's the silver bullet we're looking for.

joe-user
I watched the talk last week, so perhaps my memory's a bit off, but "move fast and break things" was not the takeaway that I got. I thought of it more as "problems are going to happen, being able to debug them is important, and there are better tools available for dealing with that than what's common".

Additionally, I don't recall if he said it in the talk, but it's been my experience that type-based bugs often surface early and are generally incredibly cheap compared to other classes of bugs (functional bugs, logic bugs, security bugs, etc.).

kibwen
It's also just the wrong place in the stack to add these features. It's not a language concern, it's a platform concern. If I'm running a program in a web browser, it doesn't matter what language it's written in, I can pause the program and interactively explore it via the browser console. We should have the same thing for native apps on operating systems in general, and they should be native to the OS (provided by the OS vendor themselves) and not require any modification to the program (or any special programming language) in order to use.
jmartin2683
I agree this would be very cool. I can’t count the number of times I wish I had runtime interactive debugging in a repl along the lines of pry in Ruby in just about every language that doesn’t have it.

That said, gdb etc are pretty awesome too if it’s an option

mwcampbell
To what extent can native-code debuggers, like lldb, fill this need?
pjmlp
History has proven it only works on language platforms, and as Dan Ingalls asserts, the OS shouldn't be there.

Anyway, a modern OS where this is partially there is Android.

kibwen
History has proven the exact opposite, as the failure of Smalltalk and the ascendancy of the web browser demonstrate. And as I assert, Dan Ingalls is incorrect.
pjmlp
Last time I looked into it, the Web browser is an OS-agnostic platform, and JavaScript's influence in tooling traces back to Smalltalk via Self.

There are even two Smalltalk-like development experiences that are Web browser based, Amber Smalltalk and the Lively Kernel, the latter from Dan Ingalls.

The OS should be an implementation detail of language runtimes, as proven by serverless computing and cloud-native development; who cares whether those runtimes run on top of an OS, bare metal or a type-1 hypervisor.

none_to_remain
This pretty much means debugging a running process' assembly code

You can already do this but it is a difficult way to play

peterkelly
Agreed on static type checking - I also consider it extremely important. However I don't see it as being incompatible with the philosophy of live programming. Smalltalk relies a great deal on dynamism, but I believe it would be possible to create a language and environment that both enforces static typing and supports live programming + persistence. It's an open research problem though.
dclowd9901
In web, I’d kill for a way to debug an error by replicating the user’s state and playing back actions and requests while stepping through an uncompiled version of the running application. I think I could solve bugs into infinity if I had that kind of power.

This kind of thing shouldn’t even be difficult, yet I have never been anywhere that has this kind of live code retrospection.
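
For what it's worth, the core of that record/replay idea is basically event sourcing. A minimal Python sketch (hypothetical action names, not tied to any particular tool or framework):

    # Record every action that touches app state; replaying the log later
    # reproduces the exact state a user saw, one step at a time.
    log = []

    def reduce(state, action):
        if action["type"] == "add_item":
            return {**state, "items": state["items"] + [action["item"]]}
        if action["type"] == "clear":
            return {**state, "items": []}
        return state

    def dispatch(state, action):
        log.append(action)                 # what gets shipped off for debugging
        return reduce(state, action)

    def replay(actions):
        state = {"items": []}
        for action in actions:             # step through as slowly as you like
            state = reduce(state, action)
        return state

    s = {"items": []}
    s = dispatch(s, {"type": "add_item", "item": "a"})
    s = dispatch(s, {"type": "clear"})
    s = dispatch(s, {"type": "add_item", "item": "b"})
    assert replay(log) == s                # the log alone rebuilds the state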

sfink
https://replay.io ?
grzm
Fulcro (a library for Clojure/ClojureScript web applications ) has a way of doing this, and I'm sure it's not unique in this aspect: I just don't do enough work in this space to be familiar with the other offerings. This type of feature is valued in the Clojure community, so I wouldn't be surprised if reagent has something like this as well. And this type of thing isn't unique to Clojure, either.

* https://book.fulcrologic.com/#_install_fulcro_inspect

* https://reagent-project.github.io

geokon
Ironically one of my major annoyances in debugging Clojure is that stack traces don't come with a program state that can be inspected (as you get in ELisp or GDB)
grzm
Agreed. Having a proper condition system in Clojure would be really great.
jackrusher
Sadly, the JVM unwinds the stack before returning the exception, which makes doing the right thing very hard.
geokon
I'm probably missing some subtlety. I'd think you could have some "debug mode" layer where the Clojure runtime catches exceptions. Basically wrapping every Clojure call with a try/catch, and doing a try/catch on every interop call

It's not ideal having two different modes (like a C++ Release/Debug) - but it'd be better than the current situation
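
A rough analogue of that wrapper idea in Python rather than Clojure (pdb's post-mortem standing in for a proper condition system), just to show the shape of it:

    # In "debug mode", instead of letting the stack unwind into a bare trace,
    # drop into an interactive post-mortem with the failing frame's state intact.
    import functools
    import os
    import pdb

    def debuggable(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if os.environ.get("DEBUG_MODE"):
                    pdb.post_mortem()      # inspect locals where it blew up
                raise
        return wrapper

    @debuggable
    def divide(a, b):
        return a / b

    if __name__ == "__main__":
        divide(1, 0)                       # DEBUG_MODE=1 opens the debugger here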

Maybe this is what CIDER's debug macro is actually doing - I always forget to play around with it :) I'll need to try it in the future.

btw, thanks for your work. I really appreciate the stuff you've shared and it's nice to know someone else also uses thing/geom :))

Has the highly decoupled "mini-library" thing/geom architecture influenced Clerk? I've been meaning to try it out - but notebooks always feel like they come with some ecosystem lock-in (esp if it's a company trying to make money - ie. Nextjournal). I'd guess it's part of why everyone reverts back to plain text. With thing/geom I just pick and choose and tweak the pieces I need - and then swap them out when I want to change to something else entirely (mostly for building GUI applications in CLJFX)

bioemerl
I think he was saying more that types don't help you not break things, your preparation is worthless, you're not actually preventing bugs, and when something does break you're far slower at fixing it.

So not "move fast and break things".

More "Don't over prepare, fix things fast."

The only thing I don't like about all these people talking about types is when you need to make a program change and that change has large impacts on other areas, like in a core API.

I do not want to make a change in a core API and have my program build without me addressing all of the places that would be affected by that.

This is where types shine, and this is where I think types are important.

If a language abandons types that is fine, the dynamic possibilities are amazing and I'd love to have them.

But I will not give up the ability to make a refactor in core code and then, in turn, be able to address all of the places that that change breaks things.

Am I wrong in that?

Is my desire to do this just rooted in my own bad programming habits?

I'm curious if anyone has any thoughts.

jmartin2683
You’re not at all, and it’s one of the main reasons that I feel more confident hopping into a codebase that I’m not as familiar with in something like Rust than in, say, Ruby or JavaScript. The lack of strict types makes understanding how a program works very difficult, and being told this by the compiler or interpreter all but impossible.
joe-user
> I do not want to make a change in a core API and have my program build without me addressing all of the places that would be affected by that.

> This is where types shine, and this is where I think types are important.

This is also where tests shine, which are far more expressive than the type systems we have today. Tests are usually not as convenient as types though, but it's another parameter to consider when choosing the right solution for a given scenario.

> But I will not give up that ability to make a refactor in core code and then turn be able to address all of the places that that change breaks things.

This will be less satisfying since it's anecdotal, but I'll offer up my experience anyhow: I rarely find myself refactoring. When I do refactor, it's almost always in the "changing the factoring" sense, in that callers are none the wiser to changes since the interface is the same, which limits the fear of breakage. That's not to say that it always turns out this way, but churning regularly on interface boundaries would be a "smell" to me.

To further beat the drum from above, I'd additionally expect the tests to help prevent breakage whether the program's dynamically or statically-typed. I review plenty of code, much of it in Scala, which puts a heavy emphasis on its strong typing. When there aren't tests, I request them or write them myself, and that uncovers bugs more often than not despite the programs passing the type checker.

helf
Ok, I want to read this. I hate watching videos of talks. But holy fuck, what the hell is with this transcription output?

I do not think

I could

read this post

like this because

it would

make me brain

hurt and also this is

the worst layout

I've ever seen from

even an automagic

transcription jeez

Arainach
Amen. If the timestamps were hotlinked to the video for reference, that might be clever, but the overall format is awful. For reference, this is what it looks like on my screen: https://i.imgur.com/0VZEldv.png
helf
Yep. Same.

But let's not complain any more. Apparently that is a faux pas

cyberbanjo
https://pastebin.com/U8SWHWnG

document.querySelectorAll(".time").forEach(el => { el.style.visibility="hidden" })

paste into emacs text-mode buffer

M-x flush-lines ^$

(optionally set fill-column)

M-q (for fill-paragraph)

dang
"Please don't complain about tangential annoyances—things like article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

https://news.ycombinator.com/newsguidelines.html

tomcam
My sincere compliments to anyone who managed to read past the first paragraph or two. And no, reader mode did not help.
fallat
Glad I'm not the only one who thought this was extremely difficult to read. There's too much horizontal eye movement.
systematical
unreadable.
groos
"There are only two kinds of languages: the ones people bitch about and the ones nobody uses" -- Bjarne Stroustrup, whom Jack disses in the video.
nigerianbrince
Rust
hayley-patton
Well, Bjarne did make C++.
beefman
Some interesting links from this talk:

Maria https://www.maria.cloud/

Glamorous Toolkit https://gtoolkit.com/

Data Rabbit https://datarabbit.com/

Nextjournal https://nextjournal.com/

Clerk https://github.com/nextjournal/clerk

Enso https://enso.org/

nudpiedo
I wonder how many of them are in use by people beyond their creators/contributors, and in the case of those with actual users and customers, who those users are (not theoretical profiles, but existing ones).

This is a genuine question; if someone has an answer I would gladly learn more about it.

0x445442
The enso link is interesting because there was another Enso project over a decade ago by Aza Raskin. That project, ironically, asserted the superiority of natural language and thereby text as a user interface.
d0mine
Text is superior (as in programming languages, not natural language) in the general case, but there may be exceptions: GUI builders, mock-ups, diagrams, tricking non-programmers into programming (iOS Shortcuts, Yahoo Pipes, Scratch/Alice).
Oct 12, 2022 · 32 points, 5 comments · submitted by austinbirch
leobg
So what language/environment can you use to write real world software and web services today that allow the kind of alive, exploratory programming described in this talk?

I know Jack gave some examples in his talk. But I'm interested in HN users who are currently using such systems, either for work or for side projects.

Thanks.

zhxshen
HN is written in Arc, which is a lisp derivative that runs on top of Racket, which has the same runtime inspectability as almost every other lisp, and for PG's public relations board, it was a good choice.

Rusher is full of s** though. It isn't really a case of, "this one was ahead of its time, this one's behind". It is a case of trade-offs. I can dash off all kinds of cool stuff in Racket or Clojure, in record time w/ minimal lines of code. Good luck getting someone else to understand it though, and good luck getting me to understand it after a year away from it. With a larger code base, built-in compile-time type enforcements are huge--and as an afterthought, with macros, on an opt-in basis does not count. Aaron Swartz had Reddit re-written from Lisp to Python for similar reasons--not strict types, obviously, but having more syntax than parentheses is a big win when new hires are trying to understand the system.

Same story with APL: brilliant as a desk calculator on steroids, clumsy for nearly everything else. Or C, the indispensable bit-twiddling language; only a sadist would want to write an application in it, but it's still pretty darn indispensable for the tight spots where performance matters (drivers, AAA video games, AI maths, crypto, etc). I'd say that all the way up until the late 80s, all of the spots were "tight spots."

It's like the Bret Victor cult--a demo of a game-maker program that used javascript as its intermediate representation. It had been done hundreds of times before with different IRs, and the model is entirely appropriate for making 2D video games. Raise the dimensionality, or change the problem, and the model breaks down, but he conveniently ignored that, because it is more fun to get up on the podium and play programming prophet than to do the work of making a language that actually has legs.

keturakis
Right, I read your point that readability is as important to a language as other things.

But Rusher isn’t saying “use Clojure because it’s ahead of time as opposed to Rust/Go”. He’s making a point that some (often more niche) languages have built-in features that are better suited to what most of the development work is - debugging - and other languages should look to adopt them.

zhxshen
Not just readability; trade-offs in every conceivable dimension: time efficiency, memory efficiency, developer efficiency, readability, maintainability, correctness, etc, etc...

Adopting the features he values is not free. A toll would be paid along a different axis.

dusted
This is really good. Even if it kind of pokes my tender nerve for "good old days of computing" it really stimulates my nerve for "the bright magic future of computing"
Oct 11, 2022 · 15 points, 1 comments · submitted by brodo
sterlind
I get his point on type systems vs. live coding, but type checkers actually provide a form of dialog between programmer and compiler. You get a red squiggly line when you mess up! Even faster than hitting the REPL. It's like a reactive dataflow visualizing half of your program (the half that consists of type annotations).

I think this is why I enjoy Haskell despite how annoying it is to debug. The compiler asks me questions, I know what's left to define. My dream would be some marriage of dependent types with automatically checked assertions, and a reactive, live coding environment.
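
A tiny illustration of that dialog, assuming a checker such as mypy or Pyright is watching the file; the mismatch is flagged as you type, before anything runs:

    def total_ms(seconds: float) -> int:
        return seconds * 1000          # flagged: returns float, declared int

    def total_ms_fixed(seconds: float) -> int:
        return int(seconds * 1000)     # the "answer" to the checker's question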

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.