# HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

### Hacker News Comments on Bret Victor — The Future of Programming

HN Theater has aggregated all Hacker News stories and comments that mention Joey Reid's video "Bret Victor The Future of Programming".
"The most dangerous thought you can have as a creative person is to think you know what you're doing."
Presented at Dropbox's DBX conference on July 9, 2013.
All of the slides are available at: http://worrydream.com/dbx/

For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'

#### Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.

But there was one earlier presentation I cannot find, where a guy was showing live debugging of a video game. Not sure if it was TED or one of the other conferences ...

Edit:

This is it:

Bred Victor: https://youtu.be/EGqwXt90ZqA?t=1006

mkl
*Bret Victor: http://worrydream.com/

Many past threads on HN: https://hn.algolia.com/?q=worrydream

I think you might be ok on compute but bottleneck on bandwidth. Who knows though. Fun question.

If you like exploring these kinds of ideas you might enjoy https://youtu.be/8pTEmbeENF4

Jul 15, 2021 · melling on Pharo 9
“Pharo is a … and immediate feedback.”

The key feature that we should provide more often.

Bret Victor has been discussing this for over a decade. I can't find the demo that I'm thinking of, but here's an introduction to Bret:

https://youtu.be/ef2jpjTEB5U

https://youtu.be/8pTEmbeENF4

His ideas go beyond “immediate feedback” …

Jul 14, 2021 · 1 points, 0 comments · submitted by funkaster
The problem is we stop pursuing answers on this topic, thus stop making progress. It's basically like what Bret Victor has described in 'The Future of Programming'. [0]

There were a lot of language zealots at the end of the last century, especially evangelizing Object-Oriented Programming. Nowadays everybody can easily counter those arguments with 'No Silver Bullet' without further thinking; it's arrogance in the disguise of humility. There is still a huge amount of accidental complexity to deal with in most tech stacks. Most businesses would die fast and leave nothing behind anyway, while progress in the field accumulates and benefits the whole industry.

Java looks slightly better for creating software at scale than C. C looks slightly better than FORTRAN. FORTRAN looks slightly better than machine code. Say there's a language that looks like Haskell but has tooling and ecosystems as good as Java's — I believe it would also be slightly better than Java.

Bret Victor has an inspiring talk on this theme: https://www.youtube.com/watch?v=8pTEmbeENF4
Bret Victor's, "The Future of Programming" is illuminating. He walks through what "programming" means and how the concept of "programming" has shifted in little evolutionary leaps.

"There can be a lot of resistance to new ways of working that require you to unlearn what you've already learned and think in new ways. ... And there can even be outright hostility."

Programming Baduk used to involve expert systems. Now convolutional neural networks (CNNs) can hoist a computer to superhuman performance, even doing so without pre-programmed rules (see MuZero). We no longer "program" computers to play Go, chess, shogi, or even Atari games.

Some people have difficulty keeping code structures in their mind's eye. Here's a conceptual development environment for navigating code visually, ending with a dual text editor:

Is that programming?

What's the difference between _typing_ instructions into a computer to place a graphical user interface widget on a screen and _telling_ the computer you'd like to put a toroid on the screen? Computers can use CNNs to fill in knowledge gaps. Even though the computer wasn't told the colour, size, shading, material, or location of the toroid, it can still show us the ring.

Is telling a holodeck that you'd like to replay a scene from a novel a form of programming?

Each evolutionary step in programming has given us more powerful ways to express ideas in ever terser forms. Few people code in binary anymore. Did Picard need to tell the computer where to put every chair, table, glass, and machine gun?

What is programming?

Lots of cynic-cynics in here :) I'll stand up for the author. The lists in the article are not great, but I still agree with the sentiment.

I recommend everyone watch Bret Victor's classic "The Future of Programming" https://www.youtube.com/watch?v=8pTEmbeENF4

Yes, we've had a trillion dollars invested in "How to run database servers at scale". And, we've had some incremental improvements to the C++ish language ecosystem. We've effectively replaced Perl with Python. That's nice. Deep Learning has been a major invention. Probably the most revolutionary I can think of in the past couple decades.

But, what do I do in Visual Studio 2019 that is fundamentally different than what I was doing in Borland Turbo Pascal back on my old 286? C++20 is more powerful. Intellisense is nice. Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days.

There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different." And so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different." So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm. A lot of "GDB is terrible, don't even try," so many programmers are in a kind of Stockholm syndrome where they have convinced themselves debuggers are unnecessary and debugging is just fundamentally slow and painful. So we don't have in-process information flow visualization. And so on.

I agree.

I also agree with the rough time delineation. Starting with the dotcom bubble, the industry was flooded with people. So we should have seen amazing progress in every direction.

Most of those programmers were non-geeks interested in making an easy buck, rather than geeks who were into computers and happily shocked that we could make a living at it. And many of the desirable jobs turned out to be making people click on things to drive revenue.

Who can blame any of those people? They were just chasing the incentives presented to them.

Check out the Unison programming language (https://www.unisonweb.org/). The codebase exists as a database instead of raw text. It has the clever ideas of having code be content, and be immutable. From these 2 properties, most aspects of programming, version control, and deployment can be re-thought. I've been following its development for a few years, I can't wait for it to blossom more!
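The two properties the comment names — code as content, and code as immutable — can be sketched in a few lines. This is a hypothetical toy model, not Unison's actual implementation (Unison hashes the normalized syntax tree, not raw text, and its codebase format is more involved):

```python
import hashlib

def content_address(definition: str) -> str:
    # Hash the definition's source text. Real Unison hashes the syntax
    # tree with names normalized away; hashing raw text just sketches the idea.
    return hashlib.sha256(definition.encode()).hexdigest()[:12]

# Code is content: the codebase maps hashes to immutable definitions.
codebase = {}
h = content_address("increment x = x + 1")
codebase[h] = "increment x = x + 1"

# Names are a separate, mutable layer: "renaming" a function is just an edit
# to this mapping, and callers that refer to the hash are never invalidated.
names = {"increment": h, "succ": h}
```

Because callers depend on the hash rather than the name, refactors like renames become metadata edits, which is where the version-control and deployment rethinking comes from.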
What problem does this solve?
> Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days.

Or use a debugger at all. Or write their code in a way that's easy to debug.

> "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different"

Just try sneaking a Unicode character into a non-trivial project somewhere.

> So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm

C did something right. It's still readable and simple enough it doesn't take too long to learn (memory management is the hardest thing about it).

> There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" And, so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm.

We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages.

“Superior” is meaningless out of context. There are domains where visual programming prevails, like shader design in computer graphics. Visual programming is a spectrum: on one end you’re trading the raw power of textual languages for a visual abstraction, and on the other end you just have GUI apps. UI design and prototyping programs like Sketch are arguably visual programming environments, and you’d have a hard time convincing me that working in text would be more efficient.
>We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages.

A lot of the time I spent doing the Advent of Code last month was wishing I could just highlight a chunk of text and tell the computer "this is an X"... instead of typing out all the syntactic sugar.

Now, there is nothing that this approach could do that you can't do typing things out... except for the impedance mismatch between all that text, and my brain wanting to get something done. If you look at it terms of expressiveness, there is nothing to gain... but if you consider speed, ease of use, and avoiding errors, there might be an order of magnitude improvement in productivity.

Yet... in the end, it could always translate the visual markup back to syntactic sugar.

Low code environments as shipped today are actually quite impressive, and I'm saying that as a very long term skeptic about that field. This time around they're here to stay.
I've come to the opinion that "graphical" vs "non-graphical" is a red herring. I don't think it actually matters much when it comes to mainstream adoption. Is Excel graphical? I mean, partly, and partly not, but it's the closest we've gotten to a "programming language for the masses". Next up would probably be Visual Basic, which isn't graphical at all. Bash is arguably in the vicinity too, and again, not graphical.

Here's my theory (train of thought here); the key traits of a successful mainstream programming solution are:

1) A simple conceptual model. Syntax errors are a barrier but a small one, and one that fades with just a little bit of practice. You can also overlay a graphical environment on a text-based language fairly easily. The real barrier, IMO, is concepts. Even today's "more accessible" languages require you to learn not only variables and statements and procedures, but functions with arguments and return values, call stacks, objects and classes and arrays (oh my!). And that's just to get in the door. To be productive you then have to learn APIs, and tooling, and frameworks, and patterns, etc. Excel has variables and functions (sort of), but that's all you really need to get going. Bash builds on the basic concepts of files, and text piping from one thing to another.

2) Ease of interfacing with things people care about: making GUIs, making HTTP requests, working with files, etc. Regular people's programs aren't about domain modeling or doing complex computations. They're mostly about hooking up IO between different things, sending commands to a person's digital life. Bash and Visual Basic were fantastic at this. It's trickier today because most of what people care about exists in the form of services that have (or lack) web APIs, but it's not insurmountable.
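The "hooking up IO between different things" style described above is exactly what a Bash pipeline does. A minimal sketch (`input.txt` is a hypothetical file; any text file works):

```shell
# The five most common words in a file — classic "glue small IO tools
# together" Bash: split on whitespace, sort, count, rank.
tr -s '[:space:]' '\n' < input.txt | sort | uniq -c | sort -rn | head -5
```

Each stage only needs the basic concepts the comment mentions — files and text piping from one thing to another — which is arguably why this style stayed accessible.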

I think iOS Shortcuts is actually an extremely compelling low-key exploration of this space. They're off to a slow start for a number of reasons, but I think they're on exactly the right track.

You're missing that the programming environment needs to be scalable to 50 or even 500 collaborators. Arguably Bash and Excel struggle to scale non-trivial problems to even 15 collaborators. A surprising number of programming environments do even worse — notably visual or pseudo-visual ones, but even some textual ones.
I don't think it needs to, since none of the above do, but that would definitely help, at least in the enterprise.
Bret Victor has captured this really well.
Dynamicland is the best form of 'AR' right now. Apple is taking note via App Clips and the physical version of that - whatever they call it.
Sep 01, 2020 · 3 points, 0 comments · submitted by rbanffy
Aug 01, 2020 · 12 points, 2 comments · submitted by r2b2
sxp
This needs either a (2013) or (1973) tag in the title. It's a good video and worth watching like many other Bret Victor videos.
He probably wasn’t born in 1973.
PeerJ has actually quite a few publications, one of them is CS ;) https://peerj.com/computer-science/ the dropdown on the top left allows you to switch between them.

I think MathJax is certainly a step in the right direction; they even support rendering to MathML.

But I agree that there is a certain lack there in terms of full semantic representations. MathJax is more accessible than TeX but it's still describing visual layout, instead of semantic meaning.

Pushing HTML to arxiv is also a step in the right direction.

I think the most important thing we can do is not be complacent with the state of the art. We need to go back to an age of computing where we didn't think we had it all figured out. We need to experiment, and not be afraid to take a step back in some aspects, like layout and kerning, in exchange for other advances like semantic representations and knowledge representation.

I think Bret Victor has a great talk on this: https://www.youtube.com/watch?v=8pTEmbeENF4

I think we need to experiment with things like observablehq.com or nextjournal.com or the many other that are coming into existence.

re: PeerJ: I missed that, nice!

re: semantics vs visual layout of math... Wikipedia says OpenMath is a thing, but... that only solves half the problem. Once you have a format that encodes what you want, someone has to actually use it.

Like, if someone writes x^{-1} and f^{-1}, it's hard for a computer to figure out that the first one means "the number you get when you divide 1 by x", whereas the second one means "the function you get when you compute the inverse of f".

And if the author can't be bothered to slow down and say which is which, then the reader will have to guess.
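The ambiguity described above is a presentation-vs-content problem: a content-oriented encoding can keep the two meanings of "^{-1}" distinct, while rendering to presentation markup collapses them. A hypothetical mini-encoding (not actual OpenMath syntax) to illustrate:

```python
# Two semantically distinct expressions that present identically.
reciprocal = ("apply", "power", "x", -1)  # 1/x: the number x raised to -1
inverse = ("apply", "inverse", "f")       # f^{-1}: the inverse of function f

def present(expr):
    # Rendering to presentation form loses the distinction:
    # both become a base with a superscript -1.
    if expr[1] == "power" and expr[-1] == -1:
        return f"{expr[2]}^{{-1}}"
    if expr[1] == "inverse":
        return f"{expr[2]}^{{-1}}"
```

Going from the structured form to the superscript is trivial; going back requires exactly the guessing the comment complains about.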

re: kerning: TeX's advantage here is not fundamental, I think. Just need a good font, as far as I know. (Actually that's not far; I know almost nothing here.)

re: layout: CSS is finally getting good at this from what I hear.

re: talk: looks familiar; maybe I should re-watch it.

>> They continuously run every time you make any change to them.

>Which is very much unlike what a program does.

You're saying "Program that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it?

Speaking of programs that run continuously, have you ever seen Bret Victor's talks "The Future of Programming" and "Inventing on Principle", or heard of Doug Engelbart's work?

The Future of Programming

Inventing on Principle

HN discussion:

https://news.ycombinator.com/item?id=16315328

"I'm totally confident that in 40 years we won't be writing code in text files. We've been shown the way [by Doug Engelbart NLS, Grail, Smalltalk, and Plato]." -Bret Victor

Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.

Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.

I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:

>But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet.

Text programming languages are one-dimensional streams of characters.

Visual programming languages are two-dimensional and graph structured instead of sequential (or possibly 3d, but that makes them much harder to use and visualize).

The fact that you can serialize the graph representation of a visual programming language into a one-dimensional array of bytes to save it to a file does not make it a text programming language.
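The distinction argued here — a graph of interconnected cells and formulas versus a one-dimensional character stream — can be made concrete with a toy model. This is an illustrative sketch, not Excel's actual representation:

```python
# A toy spreadsheet as a graph of cells: each cell holds either a literal
# value or a formula referencing other cells. The program's structure is the
# reference graph; dumping it to bytes for storage doesn't make it textual.
cells = {
    "A1": 2,
    "A2": 3,
    "B1": ("add", "A1", "A2"),  # =A1+A2
    "B2": ("add", "B1", "A1"),  # =B1+A1
}

def evaluate(name, cells):
    cell = cells[name]
    if not isinstance(cell, tuple):
        return cell  # a literal value
    _op, *refs = cell  # only "add" exists in this sketch
    return sum(evaluate(r, cells) for r in refs)  # recurse along graph edges
```

Evaluation walks edges of a dependency graph rather than a left-to-right stream of characters, which is the structural difference being claimed.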

The fact that you can edit the one-dimensional stream of characters that represents a textual programming language in a visual editor does not make it a visual programming language.

Microsoft Visual Studio doesn't magically transform C++ into a visual programming language.

PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language, it just implemented a visual graphical user interface to the textual PostScript programming language, much like Visual Studio implements a visual interface to C++, which remains a one-dimensional textual language. And the fact that PostScript is a graphical language that can draw on the screen or paper doesn't necessarily make it a visual programming language.

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

It's all about the representation and syntax of the language itself, not what you use it for, or how you edit it.

Do you have a better definition, that doesn't misclassify C++ or PostScript or Excel or Max/MSP?

lmm
> You're saying "Program that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it?

Running continuously every time you make any change is very much unlike what a program does. Programming is characteristically about controlling the sequencing of instructions/behaviour, and someone editing a spreadsheet in the conventional (non-macro) way is not doing that.

> Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia.

This is thoroughly dishonest of you. You edited those points and examples into your comment, there was no mention of macros or "programming by demonstration" at the point when I hit reply.

To respond to those added arguments now: I suspect those features are substantially less popular than Ruby. Your own source states that Microsoft themselves discourage the use of the things you're talking about. Excel is popular and it may be possible to write programs in it, but writing programs in it is not popular and the popular uses of Excel are not programs. Magic: The Gathering is extremely popular and famously Turing-complete, but it would be a mistake to see that as evidence for the viability of a card-based programming paradigm.

> Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language.

Anything "visual" is necessarily going to be about how the human interacts with the language, because vision is something that humans have and computers don't (unless you're talking about a language for implementing computer vision or something).

> I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax:

But you can't objectively define whether a given syntactic construct is higher-dimensional or not. Plenty of languages have constructs that describe two- or more-dimensional spaces - e.g. object inheritance graphs, effect systems. Whether we consider these languages to be visual or not always comes down to how programmers typically interact with them.

> PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language

There's nothing magical about new tools changing what kind of language a given language is. Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language.

> Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language.

Lisp was designed and developed as a real programming language. That it was a theoretical language first is wrong.

Related thread: Ask HN: What's the best book on the early history of the Internet and/or Web?

https://news.ycombinator.com/item?id=19556208

My previous reco: Not a book, but a great video via Steve Blank: https://www.youtube.com/watch?v=ZTC_RxWN_xo

Also Bret Victor's "The Future of Programming", whose framing is misleading: as noted above, it's a performance piece whose title slide reads 1973. https://www.youtube.com/watch?v=8pTEmbeENF4

Feb 03, 2020 · 3 points, 0 comments · submitted by szx
> I see Bret Victor more as a historian where he finds old ideas and re-introduces them to people who haven’t seen them before.

Just yesterday I revisited his Future of Programming talk. Splendid!

Nov 20, 2018 · 2 points, 0 comments · submitted by fmoronzirfas
Oct 29, 2018 · 2 points, 0 comments · submitted by gyre007
'The Future of Programming' by Bret Victor (https://www.youtube.com/watch?v=8pTEmbeENF4). Seems like a pun on the OP's title, but it really is related to his/her question. Take a look and you will be amazed at how good (or revolutionary by our standards) some old technologies were.
> How would that better environment look?

Come on, what kind of question is that? If I knew how to improve it I wouldn't be chatting here with you, I would be doing something about it.

Also, you should probably watch Bret Victor's videos, especially "The Future of Programming", if only to realize that we have been improving the programming environment since the days of punch cards, and are still in the process of doing so.

Also, pretty much anything Bret Victor has done.
Good read. I feel the title should be "Teaching Programming Paradigms and Beyond", since the text assumes familiarity with the complete CS landscape and comments on the success of several teaching methods.

I would recommend (also misleadingly titled) talk The Future of Programming by Bret Victor [1] which goes over some groundbreaking paradigms that have since become mostly forgotten.

It's in a Handbook of Computing Education, so everything in the book is about "teaching". It would therefore have been especially odd to put that in the title.
Jun 05, 2018 · 1 points, 0 comments · submitted by aziis98
May 23, 2018 · 5 points, 1 comments · submitted by nmat
Bret Victor is _so incredibly inspiring_.

So much amazing work:

Learnable Programming: http://worrydream.com/#!/LearnableProgramming

Inventing On Principle: http://worrydream.com/#!/InventingOnPrinciple

The Future Of Programming: http://worrydream.com/#!/TheFutureOfProgramming

Edit: formatting

Yes, healthy skepticism is good, but just because code (i.e. text files) is often the most _powerful_ or _flexible_ tool, doesn't mean it's always the best tool.

We (programmers) are notoriously bad at advancing the tools in our field.

For a brief history of this, watch "The Future of Programming" talk by Bret Victor:

That guy gets an A+ for presentation, but I couldn't find much to agree with him on.

He talks about code being linear lines of text as though that's a bad thing. We've pretty much been stuck with this as state of the art in our writing systems for thousands of years, what would be your reaction if I suggested everyone should watch videos instead of read books? It's a flexible and easy way to represent a program that no other tool has come close to.

> We (programmers) are notoriously bad at advancing the tools in our field.

We've been trying to automate ourselves out of jobs for the entire history of the industry, yet programmers are in more demand than ever. Everyone wants to work on interesting problems, and creating inner platforms is far more interesting than writing boring business logic. Yet for all our efforts we've barely progressed since the 70's. Why do you think that is?

> What would be your reaction if I suggested everyone should watch videos instead of read books?

Videos are just another useful tool for learning; they don't obviate the need for books, but they're better at conveying some ideas/information than books alone.

Just like videos and books aren't mutually-exclusive tools for learning, graphical tools and textfiles aren't mutually-exclusive tools for building programs.

>We (programmers) are notoriously bad at advancing the tools in our field.

I think the tools of our trade have advanced tremendously. Visual Studio for example is an amazing experience for C# programmers, one that most languages don't have. And this is in text tools.

Programmers know that text is the most powerful and flexible, which is why we advance those tools that help in working with text.

GUI tools are good for people who only want to do something every once in a while. Something they don't need to repeat. Where there's a simple recipe for it. And, yes, programmers don't do that much to advance these, because they have no use for them themselves.

Bret Victor - The Future of Programming (imagined from perspective of 1970's)

This talk is obscenely underrated. There is not nearly enough tech-focused performance art in our industry.
Underrated by who? (“Whom”?) It always shows up in these lists, and rightly so.
> Makes you wonder what pioneers back in the 60s and 70s could have accomplished with modern hardware.

One of the best Bret Victor talks is about that.

"The Future of Programming"

> And it kills me to think that Smalltalk and Visual Basic had a built-in GUI editor and layout manager, unlike the web.

Anyone of us doing desktop/mobile development on .NET, Java, Android, iOS, Qt development can still enjoy such goodies.

With the web, one day it might catch up to 90's RAD tooling.

I used the 1990s RAD tooling, and in many regards, the results were mediocre at best. Changing a window size could kill your form, to say nothing of font size.

The live aspect was great, though.

But you would expect 20 years to be enough time to improve RAD tooling in areas it wasn't so great at for modern devices. Instead, at least as far as the web is concerned, it's mostly been missing.
90's RAD tooling also supported layout managers, devs just have to actually use them.

Also RAD tooling is about the whole stack, not just dragging stuff into forms.

One of the reasons it has never taken off is that GUI editors and layout managers that have come out for the web (and there have been a few), have never quite gotten the code right. They would produce a page that looked like the designer... but with terribly written HTML/CSS. So web designers and developers prefer to make their own markup.
I seldom recognize pieces of HTML/CSS literature when looking at the developer tools panel, or source code from well known frameworks.
The deeper problem is that the tech industry is slow to get rid of HTML, even though HTML was created to exchange documents, and nowadays we mostly use it as a GUI for network software. See "The Problem With HTML":
It used to be conventional wisdom that using HTML as a layout system is wrong, you know. Because it wasn't meant to be one, and text content was supposed to be independent of the medium on which it is viewed. Well, at the time I had my doubts that it was going to work, but it was (and still is) a nice idea.
lmm
HTML isn't the problem - declarative markup is a great way of doing GUI layout, non-web GUI frameworks tend to come up with alternatives that look similar. The problem is CSS, which is a fractal of bad design, broken at every level, from selectors to the box model.
That I agree with.

Android, iOS, XAML and QML are quite nice to work with.

CSS works fine for text markup. The problem is that you get two models: one where the browser picks where stuff goes, depending on the browser and local settings, and another where the designer makes that choice. You can't have both exist at the same time, and on top of that most designers don't know what they are doing.
Mar 15, 2018 · 1 points, 0 comments · submitted by swyx
Programming was in its infancy 50 years ago, but in reality we don't really know what its development arc is. We could be in the toddler stage right now, or we could still be in infancy when compared to future developments in the field. I believe we are much closer to the latter.

There were things being done in the 60s that we still haven't really integrated into our trade. [0] We joke about having to program with hardware switches and punch cards, yet here we are still typing carefully crafted cryptic commands that tell the computer exactly what it is supposed to do, and storing them in linear text files which we have to mentally map to program states. I think there will always be a place for this kind of programming, just as people still use Assembly today, but it's a bit premature to say, "Well, this is it, or nearly so!"

I recall reading that one of the giants of early computing, Von Neumann perhaps, never understood the benefit of Assembly and thought that it was a waste of the computer's time to compile to machine code rather than have a human write the machine code directly. We are working inside a problem domain that we barely understand. I find it hard to believe that we will have the glorious sci-fi future that many of us imagine will come out of advancements in technology without also developing corresponding advancements in how we describe and create and interact with it. One potential example that I am looking forward to learning more about is Luna, which features a visual development environment that is isomorphic to its code. [1]

The implicit goal of programming language and tool development is "How do I make it easier to accurately map 'the thing I want done' into a functioning system?" And our tools are getting better all the time, opening up new avenues of interest and possibility. This is a great time to be a programmer, and I think it's only going to get better, and become more accessible.

> in reality we don't really know what its development arc is

That's true, but we know we're past the rapid-advancement portion of the arc. Look at the most widely used languages today: the top ten, regardless of methodology[1][2][3], are dominated by languages that are ~20 to 30+ years old.

> advancements in how we describe and create and interact with it

As a funny, but accurate, CommitStrip[4] pointed out, you'll need to create a specification, and we already have a term for a project specification that is comprehensive and precise enough to generate a program...it's called code.

> One potential example that I am looking forward to learning more about is Luna

That was discussed on HN recently[5]; some people pointed out that it appeared to have made little to no progress since the previous time it was submitted, and others mentioned various shortcomings of these types of visual programming languages in general. Time will tell, we'll see what happens /shrug

I'm in love with this. I have nothing constructive to add, but you should be really proud of this work. Reminded me of this video: https://www.youtube.com/watch?v=8pTEmbeENF4

Think you're on to something that this talk points out very well.

That video reminds me of a comic I sketched out years ago. It's about a ragtag armed group of hackers who specialize in intelligence espionage in the near future. There's a payload specialist, a security specialist, a data structure specialist, etc. Much of the drama unfolds in VR space, where the hackers can be seen frantically querying/manipulating data structures directly by hand whilst evading detection. It's supposed to be educational as well, explaining CS topics through stories. Think the movie Hackers plus GitS, but with attention to accurate portrayal of CS knowledge. It's basically my vision of what the future could be like.
Thanks! That's funny, before starting at a programming job a few years ago my soon-to-be-employer (at a tiny YC startup) said to me, "you can be like our own little Bret Victor!" I should re-watch that one though since I only have a vague recollection of it. I enjoyed his "Inventing on Principle" talk quite a bit.
Extremely relevant, particularly his remarks: https://youtu.be/8pTEmbeENF4?t=1174 (19:34 if t= doesn't work).
May 23, 2017 · 2 points, 0 comments · submitted by feargswalsh92
May 18, 2017 · comboy on Kotlin Is Better
Oh Delphi.. every time I fight with CSS and think about how easy making GUI apps used to be almost 20(sic!) years ago, I feel like something went wrong.
I had to do maintenance on an old WinForms app recently; it's insane how simple it is to develop with, how quickly it starts up, and how quickly it shows users the data they want. I signed myself up as the project maintainer.

And even that is an incredibly bloated technology compared to Delphi.

I've been writing web apps for 20ish years, and I still can't see productivity catching up to what we were doing with Delphi and other desktop UIs 25 years ago... not even with React and all these other webpack/babel-heavy things.

Web development...

Haven't looked at Delphi since last century, but it seems it's still alive, and can produce iOS and Android (and all the desktops) programs. No idea how well though...
It's rubbish. Worst IDE I've ever worked with; sometimes I consider just using Notepad. No day without crashes, random errors, IntelliSense not working, the debugger suddenly not showing variable values, code navigation not working...

Embarcadero is just milking companies that need Delphi for legacy code.

Borland management went wrong. 20 years...
For those who want to see how Delphi GUI design is: https://www.youtube.com/watch?v=BRMo5JSA9rw
bitL
There's always Lazarus...
yep. yep. yep.
So did Delphi handle dynamically resizing and positioning layouts? My impression is that all the old highly productive GUI environments (Delphi, Visual Basic, ...) used absolute positioning. Personally I'd rather have a more complex GUI framework (CSS, Swing, WPF) that handles positioning than be forever cursed to tweak pixel width, height, x, y values.
bitL
Yes, if you set anchors on each component you wanted to be resizable (more specifically, all four sides could each have an independent anchor).
Same with WinForms. I'm not sure if anchoring was added at some point or was always there; if it was always there, I wish I had known about it a lot sooner.
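For readers who never used Delphi or WinForms anchors: the same idea can be approximated in Java Swing with SpringLayout, which pins each edge of a component to an edge of its container. This is a minimal sketch, not Delphi code; the panel sizes and 10px padding are arbitrary values chosen for illustration:

```java
import java.awt.Rectangle;
import javax.swing.JButton;
import javax.swing.JPanel;
import javax.swing.SpringLayout;

public class AnchorDemo {
    // Lay out a button anchored 10px inside all four edges of its panel,
    // roughly what Delphi's akLeft+akTop+akRight+akBottom anchors do.
    static Rectangle anchoredBounds(int panelWidth, int panelHeight) {
        JPanel panel = new JPanel();
        SpringLayout layout = new SpringLayout();
        panel.setLayout(layout);

        JButton button = new JButton("OK");
        panel.add(button);

        // Each edge of the button is constrained independently, so the
        // button stretches and shrinks with its container.
        layout.putConstraint(SpringLayout.WEST,  button, 10,  SpringLayout.WEST,  panel);
        layout.putConstraint(SpringLayout.NORTH, button, 10,  SpringLayout.NORTH, panel);
        layout.putConstraint(SpringLayout.EAST,  button, -10, SpringLayout.EAST,  panel);
        layout.putConstraint(SpringLayout.SOUTH, button, -10, SpringLayout.SOUTH, panel);

        panel.setSize(panelWidth, panelHeight);
        panel.doLayout(); // resolve the constraints for the current size
        return button.getBounds();
    }

    public static void main(String[] args) {
        System.out.println(anchoredBounds(300, 200)); // 280x180 at (10,10)
        System.out.println(anchoredBounds(600, 400)); // 580x380 at (10,10)
    }
}
```

The point of the comparison: all four edges anchored means the component tracks its container on resize, with no pixel twiddling after the initial constraints are set.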
>... than to be forever cursed tweaking pixel width, height, x, y values.

Inevitably, this is what CSS work devolves to, though.[0]

[0] Pixel twiddling

It has anchor layout, flow layout, table layout, etc. I have typically found it easier to do the layout I wanted in Delphi than in CSS. The only annoying thing is that the visual designer has no undo, which you really miss when doing exploratory designs.

CSS 2 really is terrible. There are a few things in CSS 3 that make it passable, but overall I consider it a failed layout system, one that needs more workarounds than it provides solutions.

This reminds me a lot of Bret Victor's "The Future of Programming".
The improved process is not in how it can emulate the way you've done things for decades with file-based tools. It's in the conceptual nature of programming, as well-explained by Bret Victor: https://youtu.be/8pTEmbeENF4

- Software development using files and folders is absolutely antediluvian. Smalltalk does everything in a universe of objects; source code is organized spatially rather than in long reams residing in text files.

- Live coding and debugging done right is an enormous and unparalleled productivity booster.

- Persisting execution state is extremely convenient for maintaining continuity, and hence the high velocity of development.

Governments and enterprises have long used Smalltalk to write massive applications in large team environments using Smalltalk's own tooling for collaboration and version control. There's no question that historically Smalltalk has not played well with the outside (file-based) world, and this has been a major point of contention.

If you insist on using Git and similar tools with Smalltalk, then yes, this is problematic. The point is, if you view software development from only one perspective, you deny any possibility of improving the process in other ways that can lead to dramatically improved productivity, accelerated development, and lowered cognitive stress.

Sorry, I am at work right now and don't have time to watch videos. Can you tell me more about "Smalltalk's own tooling for collaboration and version control"? Are you referring to Monticello? I am not insisting on Git, but Monticello seems pretty limited in terms of collaboration. I see commit, diff, checkout, and remote pull/push.

Specifically, let's imagine this scenario: we have a team of tens of programmers working on a project. A new team member joins and accidentally breaks the code in a non-obvious way. He pushes the code to the main repository. Everyone else then checks out the latest version of the code and starts having weird problems. If you had 20 people on the team and they each wasted 2 hours because the code was broken, you just wasted a week of programmer time. How do you prevent that?

In the file-based world, the answer is tests and CI. What is the Smalltalk way? And please do not say "It's in the conceptual nature of programming" -- if the scenario makes no sense in the Smalltalk world (maybe you are not supposed to have 20 people working on the same project?), please say so.

A few important points:

1. Breaking code is not specific to any language. The usual defense in Smalltalk, too, is to monitor whether something breaks, for instance by using CI. Continuous integration only makes real sense when you have tests.

One should remember that "test first", "unit testing", and "Extreme Programming" (XP), like many other things, had their roots in Smalltalk. In dynamic languages, testing via code was and is part of the culture, ranging from casual verification with workspace expressions and inspectors up to fully written tests.

The first unit-testing framework, SUnit, was written by Kent Beck for Smalltalk; he later ported it to Java with Erich Gamma on a flight to the OOPSLA conference. Java helped push its popularity afterwards.

Meanwhile, even static-language enthusiasts have come to understand that it is better to rely on tests than on type-checking compilers alone, and they are now hurrying to follow.

One last thing you should try: query your system for how many test methods have been written. Once you have solved this easily with a Smalltalk expression, retry it in Java ;)
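To make the comparison concrete: in Smalltalk the whole image is queryable, so an expression enumerating `TestCase` subclasses and their test selectors suffices (exact phrasing varies by dialect). A rough Java sketch, below, has to settle for reflecting over one class at a time, because enumerating every loaded class is the genuinely hard part on the JVM. `AccountTests` is a made-up class for illustration:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class TestMethodCount {
    // Hypothetical test class standing in for a real test suite.
    static class AccountTests {
        public void testDeposit() {}
        public void testWithdraw() {}
        public void setUp() {} // fixture method, not a test
    }

    // Count methods following the classic "test..." naming convention.
    static long countTestMethods(Class<?> cls) {
        return Arrays.stream(cls.getDeclaredMethods())
                     .filter(m -> m.getName().startsWith("test"))
                     .count();
    }

    public static void main(String[] args) {
        System.out.println(countTestMethods(AccountTests.class)); // prints 2
    }
}
```

Even this sketch only inspects classes you already name; a system-wide count in Java requires classpath scanning, which is exactly the kind of query the Smalltalk image gives you for free.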

2. Commercial Smalltalks, which are often used in big projects, provide repository-based solutions like the famous ENVY (written by OTI, shipped in IBM's VisualAge for Smalltalk, now VAST) or Store (VisualWorks). For more details, try the commercial evaluation versions or read [1] or [2]. A screenshot of ENVY can be seen in [3]. I worked with ENVY and it is really good, but mostly only for internal work/teams. If I remember correctly, ENVY was once also available for VisualWorks (VW) ... but was later replaced.

Cincom developed Store for VW as a replacement; it is also nice, as it allows working in an occasionally connected mode: work offline and push packages/versions to a central team repo later.

In the open source world there are various solutions, including Monticello (available for nearly all Smalltalk derivatives) and newer ones like FileTree or Iceberg, which allow working with Git.

The workflow depends on the tool and your requirements.

3. Often it makes sense to automatically build and regularly distribute a fresh daily developer image to the members of your team. This helps with merging code later.
For instance, Kapital (a big financial project at JP Morgan) works that way, and I've seen that model very often. See [4]

Again, nothing specific to Smalltalk. In file-based languages it also makes sense to stay close to the main line and to merge and resynchronize with the team regularly.

In Pharo, for instance, we have the PharoLauncher, which allows you to download any (fresh or old) image build provided by the open source community.

4. Versioning can be done at many levels. The simplest level is the image itself. Smalltalk not only has a VM-and-image concept, but also the concept of a changes file: if you evaluate a code expression, or create or modify a class or method in the system, this gets logged there. It prevents you from losing code, and it makes it easy to quickly restore, for instance, an earlier version/edition of a method that one has implemented.

Most Smalltalks now also work with packages, and you can define package dependencies as well as declare versions that fit together to provide a project, goodie, or app (for instance with a Configuration class in Monticello).

While in file-based languages this is often done in an XML file (Maven, for instance) or a JSON file, in Smalltalk it is usually expressed with objects and classes again. This also makes it more flexible, as you can very easily run queries on it or use refactoring tools to restructure or reconfigure it.
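To illustrate that contrast (in Java rather than Smalltalk, and with invented package names): when a configuration is inert data in XML or JSON, querying it means parsing it first; when it is live objects, a query is just ordinary code:

```java
import java.util.List;

public class ConfigAsObjects {
    // Hypothetical stand-in for a Monticello-style Configuration class;
    // the package names and versions are invented for illustration.
    record Pkg(String name, String version, List<String> dependencies) {}

    static final List<Pkg> CONFIG = List.of(
        new Pkg("Core",  "1.2", List.of()),
        new Pkg("UI",    "0.9", List.of("Core")),
        new Pkg("Tests", "1.0", List.of("Core", "UI")));

    // Because the configuration is live data, a "query" is ordinary code:
    // how many packages depend on the given one?
    static long countDependingOn(String name) {
        return CONFIG.stream()
                     .filter(p -> p.dependencies().contains(name))
                     .count();
    }

    public static void main(String[] args) {
        System.out.println(countDependingOn("Core")); // prints 2
    }
}
```

The same data in a pom.xml would need an XML parser (or a Maven plugin) before you could ask it anything; here the refactoring and query tooling of the host language applies directly.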

5. Shared code repositories are also very common in Smalltalk. While you can now use GitHub, GitLab, Gogs, and others via Iceberg and friends, there are also repository systems implemented in Smalltalk itself, like:
- SqueakSource (http://source.squeak.org, http://squeaksource.com)
- SqueakSource3 (http://ss3.gemtalksystems.com)
- SmalltalkHub (http://smalltalkhub.com)

6. Besides repositories where code and goodies are hosted, one often finds registries.

Pharo, for instance, has http://catalog.pharo.org, which is also accessible directly from the image.

7. If you work in a team, you can also use a custom update stream. This is how, for instance, open source projects like Pharo and Squeak are managed: anyone can hit an "update" button to get the latest changes.

In Pharo, http://updates.pharo.org is used; you can look at the UpdateStreamer class to see how easily this works over the web, or how to customize it for your own needs.

8. If one needs collaboration not only within the development team (coding) but also with other project members on other artifacts (Excel files, project plans, documents, ...), then one should have a look at tools like this:

http://www.3dicc.com

which is implemented in - guess what: SMALLTALK.

This list could be endless ... the first few points should only give a glimpse of what is there and available.
Pharo Smalltalk in particular, when it comes to VCS, is actually very similar to Git: it uses source code files, distributes them via zip files, works locally instead of centralized, supports merges, etc.

Pharo also works well with the usual VCS tools because it can export code into source code files.

The image plays no role in VCS whatsoever, because VCS is about code, not data, and the image is mostly about live data and less about live code.

So any tool will, and does, work with Pharo outside the image. A problem arises with the majority of people, who prefer to stay in the image; in that case you gain more control, because you have more Pharo code to play with, but you lose a lot of power, because we are a small community unable to compete with behemoth projects like Git.

Another interesting thing Pharo emphasizes is remote debugging: though not a Pharo monopoly by a long shot, we do have several libraries that can achieve this, and because the image format retains live state and live code execution, it's easy to resolve issues.

Besides the image format, the Fuel format has the advantage of storing only a fraction of the image. You can email this or share it via git or Dropbox. Like an image, a Fuel file is a binary file and, like the image, it can store live state and live code execution. This way, you can isolate live bugs and share them with your team, each one in its own Fuel file.

STON is also another alternative format, which feels familiar to those who have worked with JSON.

So you see, you get the best of both worlds. You have the fantastic Smalltalk image, and you have ways to deal with the file-based world.

Bret Victor is amazing. I saw his video about the future of programming[0] and have been following him since then.
Always liked Bret Victor's talks - they are quite popular amongst the HN crowd, I think.
Bret Victor The Future of Programming: https://www.youtube.com/watch?v=8pTEmbeENF4
Another good example is the NeXT vs Sun duel, regarding the RAD tooling of NeXTStep vs the traditional UNIX development (1991).
Aug 10, 2016 · 1 point, 0 comments · submitted by tylermauthe
Apple has Swift Playgrounds as someone described.

Visual Studio has edit-and-continue, an interactive REPL, visualizers, some sort of backtracking, and now, from Xamarin, Interactive Workbooks.

Then you have the whole IPython notebook trend, which started out with Python and nowadays supports multiple languages.

However, all these tools are actually catching up with many of the features that Smalltalk-80, Interlisp-D, Mesa/Cedar, Lisp Machines, and Oberon already had.

This is what Bret Victor jokes about in another of his presentations, where he pretends we are in the '70s making predictions about how the world of computing will look in the 21st century.

Those experiences are not provided in Smalltalk or on Lisp machines; they are just a bit more interactive than others. In fact, Playgrounds were inspired by "Inventing on Principle" and come closest to realizing just one of its demos, and those designs were obviously not around in the early '80s to guide anything.

You can definitely do "something" in Smalltalk or on a Lisp machine, but the experience was always hazily defined and, in any case, quite niche. So, as your post does, people tend to list technical features rather than describe experiences, which, even after many of Bret's essays, still need further development before they are realized in production systems (we still don't really know what we want to do, especially what will scale beyond, say, playgrounds). Experience first, features that can realize it second.

I immediately thought of Bret Victor's wonderful talk, 'The Future of Programming'.
Jul 22, 2015 · 1 point, 0 comments · submitted by eddd
Jul 17, 2014 · 3 points, 2 comments · submitted by thoughtsimple
The conclusion is profound in my opinion. The rest is just a clever way of making the point.
From the description of the video:

For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'

Mar 31, 2014 · 2 points, 0 comments · submitted by cygnus
Mar 30, 2014 · 4 points, 0 comments · submitted by stesch
Feb 16, 2014 · thangalin on Stack Overflow is down
While writing ConTeXt code (similar to LaTeX), I will reference the StackExchange network:

    % @see http://tex.stackexchange.com/a/128858/2148

Bret Victor asks, "How do you get communication started between uncorrelated sentient beings?" to introduce the concept of automatic service discovery using a common language.[1]

Alan Kay had a similar idea: that objects should refer to other objects not by their memory space inside a single machine but by their URI.[2]

When programmers copy/paste StackOverflow snippets, in a way they are actually closer to realizing Alan Kay's vision of meta-programming than those who subscribe to the "tyranny of a single implementation" -- or "writing" code as some would mock, expressing a narrow view of what they think "programming" a computer must entail.

The StackExchange network provides a feature-rich interface to document source code snippets that perform a specific task. What's missing is a formal, structured description of these snippets and a mechanism to provide semantic interoperability that leads to a universal prototyping language for deep messaging interchange.[3]

How else are we going to go from Minecraft[4] to Holodeck[5]?

Reactive Programming[1] (not FRP): Look up ThingLab[2][3]. Done in 1978 on Smalltalk by Alan Borning. Alan Kay typically points to Sutherland's Sketchpad (1963)[4] as inventing objects, computer graphics and constraint programming.

I have to admit I don't understand the hype over FRP. I mean, it's great that you can now do reactive programming in FP as well, but it's not as if this hasn't been around for ages.

Anyhow, what Alan does is not co-opting; it is pointing out all the great work that has been forgotten and then reinvented, usually badly, in the hope that someone will finally do a better job than what went before. See also Bret Victor's talk "The Future of Programming"[5]. Bret works for Alan now.

Others have pointed out VPRI[6]. Open Source programming languages that came out of there include OMeta (OO pattern matching)[7], Nile (dataflow for graphics)[8], Maru (metacircular S-Expr. eval)[9], KScript (FRP)[10], etc.

In terms of publishing papers: he's 73, for Pete's sake. He doesn't have to publish papers, or do anything he doesn't absolutely want to. But in fact he doesn't just rest on his awards (Turing...) or patents, or on having had a hand in creating just about every aspect of what we now consider computing. He's still going strong.

So yes, there is a peanut gallery. You just may be confused as to who is sitting in it and who is on stage changing the world.

Dec 02, 2013 · 1 point, 0 comments · submitted by dhaneshnm
Aug 12, 2013 · 3 points, 0 comments · submitted by micampe
Aug 10, 2013 · 4 points, 0 comments · submitted by ColinWright
Jul 31, 2013 · slacka on The Future of Programming
Here you go:
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.
~ [email protected]