HN Theater

The best talks and videos of Hacker News.

Hacker News Comments on
Bret Victor - The Future of Programming

Bret Victor · Vimeo · 65 HN points · 38 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Bret Victor's video "Bret Victor - The Future of Programming".
Vimeo Summary
For references and more information, see http://worrydream.com/dbx

Presented at Dropbox's DBX conference on July 9, 2013.

Bret Victor -- http://worrydream.com

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Take a look at

Bret Victor - The Future of Programming

https://vimeo.com/71278954

Programming could have been something completely different.

I am a massive fan of Bret Victor and his talks. 'Up and down the ladder of abstraction' [0] was eye-opening for me, and has informed my pedagogical style ever since.

Even though I agree broadly with what he has to say on visual programming and gaining an intuitive understanding of what programs do, I feel that the areas he identifies as weaknesses are not the areas most in need of tooling.

Let me explain.

Victor puts a lot of emphasis on real-time visualisation of outputs, dedicating a lot of time in his talks [1] to drawing a tree and showing how a visual representation of what the code is drawing onto the canvas allows for greater exploration of ideas and possibilities, whilst creating a stronger mental link between code and screen.

I agree that having this link is useful, but I think what it disregards is the amount of effort that it takes to get to a situation where you have an interactive real-time IDE for the purposes of drawing a tree. True, there is a deeper point he is trying to make about making ideas more visual, but the examples he uses and the aspects of modern computing that he criticises don't generalise well without a lot of interpretation.

He says in the closing paragraph of the featured article that these "are not training wheels" and that this is how all programming should be, but I think it would actually only be helpful in a subset of contexts, and commercially viable to invest in the tooling required in only a very small subset of cases.

I feel like a more applicable use of this IS training wheels - interactive explainables that can bring a topic or a concept to life which can then be applied to a number of different contexts that will then not need specialist software or tooling to facilitate. Examples would be Steven Wittens [2], BetterExplained [3], Red Blob Games [4] or Nicky Case [5].

[0] http://worrydream.com/LadderOfAbstraction/

[1] https://vimeo.com/71278954

[2] https://acko.net/blog/how-to-fold-a-julia-fractal/

[3] https://betterexplained.com/articles/intuitive-convolution/

[4] https://www.redblobgames.com/pathfinding/a-star/introduction...

[5] http://ncase.me/trust

vnorilo
Bret Victor was definitely an influence when I sought to combine the visual graph-patching paradigm (for audio programming) with a REPL-like experience on a more-powerful-than-usual functional backend [1].

He does talk about real-time visualization of outputs. IDEs for building trees are definitely not where we want to go.

However, there's an aspect in there that I'm utterly convinced about: direct manipulation of program objects! The ability to grab hold of program objects, tweak and move them and see the response, almost like you had some cord or lever in hand that connects to the program, feels amazing. To me, it's the most amazing aspect of Sutherland's super early work [2]. Hancock's dissertation also deals with some of these things [3].

As you observed, we struggle with generalizing such direct manipulation, especially the composition and abstraction of such components. That's the difference between an IDE to draw trees and a magic medium between thought and computing. Will we some day have the latter? I do not know.
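The core of that interaction loop is surprisingly small, for what it's worth. A minimal browser sketch in TypeScript (hypothetical canvas element, not Kronos or Sketchpad code): dragging retunes a parameter, and the output re-renders immediately.

  // Minimal direct-manipulation loop: dragging the pointer retunes a
  // "program object" (here, a frequency) and the view redraws at once.
  // Assumes a page with <canvas id="view">; all names are made up.
  const canvas = document.getElementById("view") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;

  let freqHz = 440; // the value you are "holding" by its cord

  function render(): void {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    for (let x = 0; x < canvas.width; x++) {
      const y = canvas.height / 2 + 40 * Math.sin((x / canvas.width) * (freqHz / 10));
      if (x === 0) ctx.moveTo(x, y); else ctx.lineTo(x, y);
    }
    ctx.stroke();
    ctx.fillText(`freq: ${freqHz.toFixed(0)} Hz`, 8, 16);
  }

  canvas.addEventListener("pointermove", (e) => {
    if (e.buttons !== 1) return;                     // only while the button is held
    freqHz = Math.max(20, freqHz - 2 * e.movementY); // drag up = higher
    render();                                        // feedback is immediate
  });

  render();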

1: https://kronoslang.io/ (It's all written by yours truly, including the website, WASM jit compiler and frontend, so please be gentle :)

2: https://blogs.commons.georgetown.edu/cctp-797-fall2013/archi...

3: https://dspace.mit.edu/handle/1721.1/61549

andrekandre
that's a really cool web app, congrats on the quality!

unfortunately, i wasn't able to get any audio output on my ipad....

vnorilo
Thank you! Mobile Safari is the browser that breaks most often; I will have to take a look again!
continuational
I think Bret Victor is right. We need more of those tools, and the future of programming will include them. No, his demos don't seem to generalize. Yes, it looks hard to arrive at that goal. But we absolutely should work towards it!
andrekandre

  I feel like a more applicable use of this IS training wheels - interactive explainables that can bring a topic or a concept to life which can then be applied to a number of different contexts that will then not need specialist software or tooling to facilitate.
this is probably arguing analogies, but i think it's worth delving a bit more into the training wheels analogy

the reason i think so is that training wheels seem like a helpful shortcut, but they actually harm and mislead about how to ride (leaning vs turning), and you become dependent on them and can't really function without them. if instead you start out with the bicycle low enough that kids' (or even adults') feet can touch the ground easily, they will just naturally put their feet down when they lose balance and (quite quickly) learn to balance and ride much faster than if they had ever used training wheels [0]

my claim is that we should help people to program, but not with training wheels: in a way that naturally occurs from using the tool (it goes without saying that this is _very_ hard to implement)

[0] https://kidsrideshotgun.com/blogs/news/stabilisers-suck-why-...

amitp
I agree. I've put a lot of work into building these types of visualizations (Red Blob Games) and I've been able to come up with useful visualizations for only some of the problems I work on, and for a small subset of the data structures. For example, with my A* pages I purposefully chose graphs that are planar and have at most a few hundred nodes, because that works well for the visualization. But if you were to ask me to make my A* visualizations work for any graph, including non-planar graphs or millions of nodes, I wouldn't know how. I know some people want general-purpose tools "for practitioners" but I don't know how to get there, and years of making these visualizations has not gotten me any closer! :-( Maybe someone will come along and figure out a generalization, but I don't think it will be me. I think the "training wheels" style pages are a subset that clearly benefits from visualizations, so I'll focus on those.
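(To make the scale concrete: the algorithm itself fits on a page. Below is a generic sketch with made-up helper names, not the actual Red Blob Games code; the visualization work sits on top of this, and that is the part that doesn't generalize.)

  // A* over a small grid graph, at the few-hundred-node scale discussed above.
  type Cell = { x: number; y: number };
  const key = (c: Cell) => `${c.x},${c.y}`;

  function astar(start: Cell, goal: Cell, walkable: (c: Cell) => boolean): Cell[] {
    const h = (c: Cell) => Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y);
    const open: Cell[] = [start];
    const cameFrom = new Map<string, Cell>();
    const g = new Map<string, number>([[key(start), 0]]);

    while (open.length > 0) {
      // Linear scan for the lowest f = g + h; fine at hundreds of nodes,
      // hopeless at millions, which is part of the scaling problem.
      open.sort((a, b) => g.get(key(a))! + h(a) - (g.get(key(b))! + h(b)));
      const current = open.shift()!;
      if (key(current) === key(goal)) {
        const path = [current];
        let c = current;
        while (cameFrom.has(key(c))) { c = cameFrom.get(key(c))!; path.unshift(c); }
        return path;
      }
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const next = { x: current.x + dx, y: current.y + dy };
        const cost = g.get(key(current))! + 1;
        if (walkable(next) && cost < (g.get(key(next)) ?? Infinity)) {
          g.set(key(next), cost);
          cameFrom.set(key(next), current);
          open.push(next);
        }
      }
    }
    return []; // unreachable goal
  }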
lyxell
> I think what it disregards is the amount of effort that it takes to get to a situation where you have an interactive real-time IDE for the purposes of drawing a tree.

I agree with your view, but one of the reasons for this is that the GUI frameworks in most languages (especially languages like C++ and Rust) still leave a lot to be desired. However, I still believe that this can change.

Raph Levien is driving some pretty targeted work on pushing the state of GUI frameworks for Rust [0].

In C++, Dear ImGui is heavily used by the game development community [1]. My experience is that you can get a visualization going for most projects in very little time using this framework; a sketch of the paradigm follows the links below. See the gallery threads [2] for examples. This project also gets better every month.

[0]: https://raphlinus.github.io/rust/druid/2020/09/28/rust-2021....

[1]: https://github.com/ocornut/imgui

[2]: https://github.com/ocornut/imgui/issues/3793
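The appeal is easy to show even outside C++. Below is a sketch of the immediate-mode idea itself in TypeScript on a browser canvas; it is not Dear ImGui's actual API, just the paradigm: the whole UI is re-emitted every frame from current program state, so a debug view is a few calls inside your existing loop.

  // Immediate-mode sketch: no retained widget tree, the UI is code run per frame.
  // Assumes a page with <canvas id="debug">; the element id is made up.
  const canvas = document.getElementById("debug") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;
  const mouse = { x: 0, y: 0 };
  let clicked = false;
  canvas.addEventListener("pointermove", (e) => { mouse.x = e.offsetX; mouse.y = e.offsetY; });
  canvas.addEventListener("pointerdown", () => { clicked = true; });

  // An immediate-mode button: drawn and hit-tested in the same call.
  function button(label: string, x: number, y: number): boolean {
    const w = 90, h = 24;
    const hover = mouse.x >= x && mouse.x <= x + w && mouse.y >= y && mouse.y <= y + h;
    ctx.fillStyle = hover ? "#446" : "#334";
    ctx.fillRect(x, y, w, h);
    ctx.fillStyle = "#fff";
    ctx.fillText(label, x + 8, y + 16);
    return hover && clicked;
  }

  let paused = false;
  function frame(): void {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    if (button(paused ? "resume" : "pause", 10, 10)) paused = !paused;
    ctx.fillStyle = "#000";
    ctx.fillText(`paused: ${paused}`, 10, 60); // live view of program state
    clicked = false; // clicks are consumed once per frame
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);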

Zhyl
Ah, this is interesting. It seems like the Rust community is doing similar things with updating UI paradigms as it is with updating CLI conventions!

Thanks for this, I will bookmark it and read it in more detail.

pjmlp
Where C++ is concerned, there is still no other environment that matches the RAD tooling of C++ Builder.

However, thanks to Borland's mismanagement, only old-timers or devs working at enterprises are fully aware of its existence.

lyxell
I looked it up. The main reason that I'm excluded from the audience is not due to Borland but rather that I don't have Microsoft Windows as my operating system.
Bret Victor alluded to these pieces in his digs on APIs:

https://vimeo.com/71278954

jbb_hn
Wow, thanks for sharing that!

Hearing him name drop terms like markup language, stylesheets and internet is mind blowing. If only the promise of “no delay in response” had been achieved :-)

Apr 18, 2021 · 2 points, 0 comments · submitted by ksec
Dec 25, 2020 · macintux on Working at Apple (2011)
He gives some amazing talks, too. The Future of Programming is well worth a watch.

https://vimeo.com/71278954

There are possible alternative futures for technology beyond the web. We've charted a particular course throughout history to get here, but there are other fruitful avenues to explore that are still open to technical pioneers like many people here on HN. Bret Victor gives a great talk with a similar premise[0].

The Web started life as a means of publicly sharing documents and has turned into a network of walled gardens. Rather than trying to fix the web itself, I think the solution is to try to recreate something true to the original spirit where users are publishers and where openness is the default.

What I'm working on specifically is best described as another take on Mozilla's Open Web App project[1] where applications are actually distributed to users for them to run themselves. The difference is that by adding visual programming into the mix, programs are realistically user-editable, and software freedoms start to take on a real significance to a whole new audience of empowered end-users. If you're interested in finding out more, I'm planning on publishing a blog post outlining the details in a couple of weeks' time - watch this space.

[0] - https://vimeo.com/71278954

[1] - https://blog.mozilla.org/blog/2010/10/19/prototype-of-an-ope...

Dec 01, 2019 · 4 points, 0 comments · submitted by signa11
Jul 12, 2019 · 2 points, 0 comments · submitted by mavci
Fully agree. You forgot one!

Bret Victor - The Future Of Programming https://vimeo.com/71278954

Great stuff, like all of Bret Victor's talks. My favourite is "The Future of Programming" (2013): https://vimeo.com/71278954

(I also liked this satirical blog post about that talk: http://alarmingdevelopment.org/?p=797)

Reminds me of Bret Victor's Future of Programming talk.

https://vimeo.com/71278954

Bret Victor's The Future of Programming has intriguing ideas on APIs that are tangentially related to this topic.

https://vimeo.com/71278954#t=14m

I've had this on my mind in various forms for almost a decade. It's only recently been crystallized from something abstract into much more concrete thanks largely in part to Bret Victor's talks. If you haven't already seen them, I highly recommend "Stop Drawing Dead Fish"[0] and "The Future of Programming"[1] as starting videos. Then just watch all the rest of his talks. :)

[0] https://vimeo.com/64895205

[1] https://vimeo.com/71278954

The Future of Programming would be a good introduction: https://vimeo.com/71278954

It's both inspiring and depressing.

I'm a particular fan of Bret Victor's "The Future of Programming". [1] It's a great look at the amazing number of ideas that the CS world has come up with and how we might be able to improve the act of building programs, even from "old" ideas.

[1] https://vimeo.com/71278954

CSMastermind
That was an absolute gem of a talk. Thank you so much for posting this.
chrisnordqvist
Absolutely loved this! Thanks for posting.
kevinSuttle
One of my all-time favorites as well. Pretty much any of his talks are legendary.
Oct 29, 2016 · 6 points, 0 comments · submitted by ninjamayo
Bret Victor, The Future of Programming

https://vimeo.com/71278954

qwertyuiop924
I'm not sure what your point is... If you're saying I'm resistant to new ideas, perhaps, to some degree. My point was more that if you want to make people receptive to a new idea, slamming them as ignorant fools for not knowing about it already, or condescending to them for not immediately genuflecting before it isn't the best approach.
pjmlp
The point is that we have seen the future, as Bret puts it, and are to a certain extent disappointed that the industry keeps taking left turns.

Many C++ users are finally enjoying environments like CLion and incremental compilers; well, this was a C++ environment in 1993:

https://www.youtube.com/watch?v=pQQTScuApWk

Built on top of Lucid Common Lisp infrastructure, as Lucid pivoted their business to C++.

IBM had a similar one, VisualAge C++ Professional 4.0, which imported C++ into a database and offered a Smalltalk-like development experience for C++.

http://www.edm2.com/index.php/IBM_VAC_3.0

So yeah, we are grey-haired old dogs disappointed with how things have evolved.

qwertyuiop924
Fair enough. And it is irritating, I won't lie. We're too perf-obsessed for our own good. But complaining about it, or condescending the way many do, won't do any good: Either you're preaching to the choir, or you're irritating people, thus making them less likely to listen to you (sometimes both in one person, like me. But then, I took a while to come around on a lot of this stuff).

If you want a future, build it. If you're already doing it, keep doing so, and find ways to win people over, not alienate them.

By the way, almost none of this is aimed directly at you...

PeCaN
Looking at software today, I'm not sure we're perf-obsessed enough, actually. Fortunately, simpler, faster, and more powerful all go hand in hand if you pick the right abstractions.

And I am building something better—think AS/400 with an APL-inspired Forth dialect. It is certainly alienating, but (hopefully) the system will have enough fun demos to hook people, and the language runs on other OSes to tempt people in. ;)

analognoise
Do you have anything to show off of the other system you're working on? It sounds interesting.
PeCaN
Not a whole lot, no. There are some cryptic documents on the language and various experiments I've done with the seL4 microkernel. Currently I'm working on getting the language solidified and completing the implementation (and removing all the usual dependencies, so that it should be relatively straightforward to port to a fairly bare microkernel environment).
qwertyuiop924
That sounds cool :-). Let us know when you release.

As for your opinions on perf, I can see where you're coming from, but I'm a Lisp programmer. If I agreed with you, I likely wouldn't be one.

I see a couple of Bret Victor videos here, but the one I loved the most was "The Future of Programming": https://vimeo.com/71278954

Really set me on a path of re-examining older ideas (and research papers), for applications that are much more contemporary. Absolute stunner of a talk (and the whole 70's gag was really great).

"What would be really sad is if in 40 years we were still writing code in procedures in text files" :(

AKrumbach
Came here to give the same answer. Between "Future of Programming" and the Mother of All Demos (https://www.youtube.com/watch?v=yJDv-zdhzMY) I find it mindbogglingly depressing that it feels so much like the entire programming field has stagnated for the last 40+ years, re-implementing the same [flawed? limiting? incorrect?] ideas over and over.
CarlsJrMints
I really liked the message that each generation of programmers considers the next "not real programming". It makes me reconsider the pushback against node-esque micro-packages: http://www.haneycodes.net/npm-left-pad-have-we-forgotten-how.... Maybe this is just the next logical evolution of programming.
Jun 06, 2016 · 2 points, 0 comments · submitted by miguelrochefort
Jan 11, 2016 · 2 points, 0 comments · submitted by arash_milani
We have surprisingly identical ambitions. Bret Victor's "Future of Programming" is particularly relevant to several of your points and really puts in context how terrible our current system is.... https://vimeo.com/71278954

I feel like this sentiment (of "everything is wrong") is gaining traction recently (last year/two). Maybe it's just wishful thinking though...

sethjgore
Let's get together and work on this. There's nothing more important at this time and era of technology.

beep me at [email protected]

miguelrochefort
Although some people seem to share our understanding that things must change, it usually stops there.

I haven't seen anyone present a radically new paradigm for computing (especially consumer UI) in the last decade.

I intend to change that.

sethjgore
I intend to do that too. Why don't we all get on a chat room and work at it?
miguelrochefort
Let's do this.

Feel free to join: https://bettercomputer.slack.com

nikriek
A fellow student and I are working on such a system: https://github.com/Symatem/Symatem. We are interested in joining the Slack group: Alexander.Meissner at student.hpi.de and Niklas.Riekenbrauck at student.hpi.de
theideasmith
I started, and because of this post will now resume, working on Automata: https://github.com/theideasmith/Automata.

Add me to the Slack, aclscientist[AT]gmail[DOT]com

rhyzomatic
I would also love to be a part of this.

Please add me! william [at] williambernoudy [dot] com

joelg
Please add me as well

joelg [at] mit [edu]

sethjgore
yes. invite me if you've already set up one?
i336_
I've been pondering how to attack the problem of making better UIs for going on 10 years now. Half the time I think I'm completely out of touch and that my ideas are irrelevant (with good reason), the rest of the time I wonder what would happen if I actually tried to implement some of the stuff I've come up with. (I strongly suspect I'd get a rude shock and realize how much iteration would be needed to make it usable, heh.)

I'd love to bounce ideas back and forth, but... agh, a Slack URL. I tried Slack once, felt too stifled, and bolted. The poor team I joined still has no idea where I went or why I left.

I'm torn between asking for an invite and just passing because, ironically, the software being used grates too much (XD)... maybe you could enable the IRC gateway?

One thing at a time though; if you're using Slack, so be it. My public email is in my profile. The private email I'd want to sign up under is different.

An interesting read, but poor tools can't be attributed to shipping culture alone. A more important reason is that the tools are good enough (only just, but that's enough), and at that point priorities change. And once momentum becomes big it's hard to change direction (e.g. JavaScript), so it's hard to revisit the fundamentals.

Better tools are coming, such as Light Table and what Bret Victor's talk describes: https://vimeo.com/71278954 . But it's not clear when they'll arrive! Will we still be developing web apps in JavaScript after another decade? It feels like we can do better.

Most developers are hesitant to accept new programming paradigms, so it takes a very long time to change how people write code (something Bret Victor captured brilliantly in his talk, The Future of Programming [1]). Therefore, I can't really say if "ten years from now" is the right time frame or if it's more like thirty years.

That said, I think it's safe to say that static typing is "winning" if it ends up being used in almost every single one of the top 10 programming languages. By "used", I mean it's used in the vast majority of the core language libraries and by most major companies that depend on the language. Depending on which popularity chart you want to believe (e.g. langpop [2], TIOBE [3], or RedMonk [4]), the top 10 languages are roughly the various flavors of C (C, C++, C#, Objective C), Java, PHP, JavaScript, Python, Ruby, Perl, and Shell. With the exception of Perl (which will not be in the top 10 for much longer) and Shell (which will probably be in use for a long time), all of those are already statically typed or moving towards it.

I think a similar metric could apply to functional programming: if the top 10 languages use functional programming in the vast majority of the core libraries and by most major companies, then it has "won". This will likely take longer than static typing, but if companies keep adopting languages like Scala and Clojure, they may crack the top 10 some day.

Of course, it's worth mentioning that what's really "winning" right now is a hybrid model: a mix of static and dynamic typing and a mix of imperative and functional programming. Perhaps, in the long term, we'll find where each type of approach works best, use them side by side, and they'll both "win".

[1] http://vimeo.com/71278954

[2] http://langpop.com/

[3] http://www.tiobe.com/index.php/content/paperinfo/tpci/index....

[4] http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15...
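(TypeScript is a convenient illustration of the hybrid model mentioned above: gradual static types with dynamic escape hatches, and functional next to imperative style in the same file. A sketch, not tied to any particular codebase:)

  // Dynamic style: `any` opts out of checking, so a typo slides through.
  function totalDynamic(items: any): number {
    return items.reduce((sum: number, i: any) => sum + i.prise, 0); // "prise" compiles
  }

  // Static style: the same function with types; the typo becomes a compile error.
  interface Item { price: number }
  function totalStatic(items: Item[]): number {
    return items.reduce((sum, i) => sum + i.price, 0);
  }

  // Functional and imperative coexist just as casually:
  const doubled = [1, 2, 3].map((n) => n * 2); // functional
  let total = 0;
  for (const n of doubled) total += n;         // imperative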

Dec 06, 2014 · 3 points, 0 comments · submitted by nullspace
This reminds me of Bret Victor's "Future of Programming" at http://vimeo.com/71278954, particularly at 13:50 about APIs: envisioning how computers could themselves discover services available at other computers. Maybe this API index is just the first step towards enabling computer programs to independently interact with each other.
jarmitage
Same. Alan Kay talked about this once, I have a paraphrase to hand:

"This current effort of doing term-based ontologies is a disaster. It requires too much agreement [...] I can't find anyone working on this notion of general negotiation between anything that can negotiate [...] Negotiating meaning is the problem of our time."

meursault334
Do you know the source of this? I tried and failed to locate it. I would be really interested to watch this talk.
jarmitage
I must apologise by saying I can't! However, I am almost certain it came from a link on this site:

http://smalltalk.org.br/movies/

If you can't find it, you will at least find some other amazing material

apievangelist
Bret Victor has influenced a lot of my thinking that has gone into APIs.json and API Commons, and the work I've done with EFF on the Oracle v Google case, and how we need to get APIs to the next step. Thanks for the observation.
smizell
I think a key here is hypermedia. APIs can only be machine-discoverable if there are hypermedia links pointing to them (which is how this helps). If there are no links, then some human must hardcode it.

This also is a reason for including hypermedia in your API, because really, being machine-discoverable is not just something that the API itself benefits from... each resource and state can benefit from included hypermedia as well.
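(A sketch of what that buys a client, with hypothetical URL and link-relation names: only the entry point is hardcoded, everything else is followed by `rel`.)

  // Hypermedia-driven discovery: start at one root URL, then follow
  // link relations instead of hardcoding each endpoint.
  interface Resource {
    links?: { rel: string; href: string }[];
    [field: string]: unknown;
  }

  async function follow(url: string, ...rels: string[]): Promise<Resource> {
    let resource: Resource = await (await fetch(url)).json();
    for (const rel of rels) {
      const link = resource.links?.find((l) => l.rel === rel);
      if (!link) throw new Error(`no "${rel}" link: resource is not discoverable`);
      resource = await (await fetch(link.href)).json();
    }
    return resource;
  }

  // Only the root is known a priori; "orders" and "latest" are discovered.
  follow("https://api.example.com/", "orders", "latest").then(console.log);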

apievangelist
I agree that hypermedia is a more optimal solution for API discovery. APIs.io + APIs.json + machine readable formats like Swagger will provide us with a bridge between what we have, and what we should have (aka hypermedia). We'd all love to have a perfect reality, but unfortunately we get the one we have. ;-)
smizell
Definitely! :) Was just pointing out that this search engine was making discovery possible _because_ of hypermedia. Nicely done!
Aug 23, 2014 · 2 points, 0 comments · submitted by lelf
this reminds me of Bret Victor's talk: The Future of Programming. http://vimeo.com/71278954

Similar ideas on the problems with code editors / current programming, but Bret Victor's ideas are a little more fleshed out.

Also, if you haven't seen it, it's a pretty funny talk. Brilliant guy.

Aug 22, 2014 · audunw on Alice Pascal
Personally I think syntax-directed programming is the only thing that makes sense. It's 2014, why are we still programming using a batch workflow? Why are we not in a live feedback loop with the compiler?

There are natural answers to this of course - programming is insanely complex. I think history will look back at this time and see that we were struggling just getting programming to work at all. Making a decent syntax-directed programming environment would add so much more complexity to something that's already hard.

We don't even have the tools necessary to get started - what about an exchangeable tree format that doesn't suck?

I've played around a bit with creating an AST in XML by hand, and using CSS to make it look like code. It's really interesting; suddenly you see that you're freed from caring how the syntax is formatted. Want curly braces around your blocks? Sure. Want just indenting? Have it your way. Want your function names in a larger font? No problem.

But XML and CSS are not good enough. Not by far.
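The underlying idea survives the XML detour, though. In any language with tagged unions, "one tree, many skins" is a few lines; a toy sketch with made-up node shapes:

  // One AST, two renderings: formatting becomes a projection,
  // not a property of the stored program.
  type Ast =
    | { kind: "block"; body: Ast[] }
    | { kind: "call"; name: string; args: string[] };

  function renderBraces(n: Ast, depth = 0): string {
    const pad = "  ".repeat(depth);
    if (n.kind === "call") return `${pad}${n.name}(${n.args.join(", ")});`;
    return `${pad}{\n${n.body.map((c) => renderBraces(c, depth + 1)).join("\n")}\n${pad}}`;
  }

  function renderIndent(n: Ast, depth = 0): string {
    const pad = "  ".repeat(depth);
    if (n.kind === "call") return `${pad}${n.name} ${n.args.join(" ")}`;
    return n.body.map((c) => renderIndent(c, depth + 1)).join("\n");
  }

  const tree: Ast = { kind: "block", body: [{ kind: "call", name: "greet", args: ['"world"'] }] };
  console.log(renderBraces(tree)); // curly-brace skin
  console.log(renderIndent(tree)); // whitespace skin, same program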

The other issue is that we've gotten really good at making text editors, and we know how to use them well. But tree structure editors? I don't even know of any. I suspect we'll always be editing parts of the tree as text anyway, especially expressions; it's just natural when the primary interface is the keyboard.

We're getting there though. Eclipse, VS and Xcode go quite far towards understanding the syntax and semantics of the code, and I understand that the primary motivator for Apple to move to LLVM was being able to integrate the compiler tightly into the IDE.

I'm sure many here have seen it already, but I would recommend looking at Bret Victors work for an idea of how things could be:

http://vimeo.com/71278954

agumonkey
I've heard many times that syntax direction is too strict and breaks people's flow. The article says it's suited to students, so here that wouldn't be an issue. That said, I'm pretty sold on structure/tree editors vs char buffers, but I think there's a need and an opportunity for ideas that make it possible to build syntactically and type-correct programs without slowing down the user's stream of ideas too much, which can be random and temporarily incorrect.
audunw
Indeed, I suspect in the beginning you would want to convert the tree back and forth between text and structure, so that you can edit parts of it in text. Editing text is a bit like working with a scratch pad; you're jotting down ideas. But once you're done with a block or function, you want it to be correct.

The structure would then be for the editor to give you instant feedback on the syntax and semantics, for navigation, and for refactoring.

In a way, the most advanced text editors and IDEs already do something much like this, but it feels like they are still closer to a text editor than a pure structure editor.

The point is also that the editor is working on the exact same structure as the compiler, so the compiler can give you instant feedback on things you actually have to compile the code to find out. You could have dummy inputs to your function for instance, and instantly see what all the variables evaluate to, how long it takes for the function to execute, and how much space it takes.

agumonkey
Maybe it's CISC vs RISC all over again. That said, I don't want to serialize a subtree into linear strings to be able to insert/delete a few things. It's information loss to me. I envision something like subtree derivations to "fork" the current code block into a new one and substitute some tokens (not strings). Basically interactive tree rewriting.

Derive tests on the fly, add some visual cues (diffs, types, errors). Be creative, use a layout abstraction like DOM or even higher to experiment freely.

audunw
Another thing is that programming languages are not generally designed to be compatible with this: preprocessor macros, comments, etc.

Nimrod could be suitable for a syntax-directed programming editor. All macros are part of the AST, even the comments are embedded in the syntax tree.

seanmcdirmid
You can have a live feedback loop without syntax-directed editing. E.g. see http://research.microsoft.com/en-us/people/smcdirm/liveprogr... (tl;dr writing an incremental compiler/execution engine is nowhere near impossible).
TeMPOraL
> Why are we not in a live feedback loop with the compiler?

> what about an exchangeable tree format that doesn't suck?

> create an AST in XML

> But tree structure editors? I don't even know of any. I suspect we'll always be editing parts of the tree as text anyway, especially expressions, it's just natural when the primary interface is the keyboard.

I really think it's time for you to discover Lisp ;). Take a good look at the language, s-expressions (your "tree format that doesn't suck"), Emacs with SLIME and paredit (the tree structure text editor with live feedback loop with the compiler).

Since you said you were playing with ASTs in XML, I think this will be an excellent introduction: http://www.defmacro.org/ramblings/lisp.html.

seanmcdirmid
Lisp has been doing live feedback like it's the 70s since the 70s. It's time to move on from the Lisp machine.
pjmlp
And Smalltalk, Mesa/Cedar, Oberon, ...

Yet the majority of younger generations never experienced such environments.

To move on from Lisp's live feedback, I would say we need to reach that level again before going elsewhere.

seanmcdirmid
They are quite familiar with edit and continue (hot code replace). But they want the full Bret Victor experience, not some lesser variation that was possible to implement with 70s-level technology.

We can surely be inspired by the past, but let's not get stuck there.

TeMPOraL
Sure, but the thing is that problems audunw wrote about were all solved in Lisp and/or Smalltalk. People today keep reinventing things that we had in 70s, except those things from the past are much better, because when they were invented computers were slow and the industry wasn't run by fashion, so they had time to think about what they're doing and what problems they are trying to solve.

I'm not saying we should stay stuck with 70s tech; I say people need to actually look at it. It feels to me like breakthroughs in the industry come from recycling Protean technology, except the Ancients who made it didn't live millions of years ago; they were likely drinking beer with your parents.

So before greenspunning a half-assed, bug-ridden clone of Paredit and calling it a Revolutionary New Editor, please, please look at the geekier corners of the past. Realize that we already have good solutions for all the problems you're trying to solve (audunw, in this case). Sure, they're not perfect. But it's better to learn from the mistakes and improve on something solid than to run in circles.

Or maybe I'm just missing the obvious - http://xkcd.com/297/.

seanmcdirmid
They weren't better; there is just a lot of rose-tinted tunnel vision from aficionados. We had good reasons for not going in the directions tried out in the 70s and 80s (e.g. Smalltalk images), while some of those directions have been nicely subsumed into current IDEs (e.g. Smalltalk's fix and continue).

If you truly think the old systems were better, then prove it! Don't just tell me about some magical system that existed 25 years ago. Show me.

TeMPOraL
> We had good reasons for not going in the directions tried out in the 70s and 80s

Those reasons have nothing to do with the feasibility or usefulness of those ideas, and everything to do with the popularity contest our industry is. It's not the best solutions that win, it's the most fashionable.

To give you one example - look at how suddenly functional programming is the hottest thing around. Out of the blue, FRP is the thing to solve all problems. It's 1970s technology. Just rediscovered now. I remember the times when functional programming wasn't hip, and I was getting laughed at for being a "Lisp (and Erlang) aficionado". But time has passed and the same people who were laughing then are now going to every possible Scala meetup and conference, because FP is now fashionable.

But we're getting a bit tangential here. The core idea of my original post: manipulating ASTs and tree structures in editor is a problem with old and good solutions most people don't know about, so, people, please go learn about them. It might save you some work (and spare the rest of us the horror of coding in XML because it's the next hip thing).

> If you truly think the old systems were better, then prove it!

I think there's only one way to do that - you have to play around with them, experience them yourself. Find yourself a local Emacs-using Lisp aficionado and let him show you the tricks, both in the editor and the language.

seanmcdirmid
I realized FRP wasn't going to work (at least for many use cases) back in 2007, but I had been working with it for a while before.

I know emacs very well; it doesn't do much of what we are talking about. Here: http://research.microsoft.com/en-us/people/smcdirm/managedti... tree editing isn't needed as long as you know how to write a good incremental parser.

I'd tell them how horrible the "modern" laptop is and how we'd have to turn things around in order to prevent wasting away all that research for decades to come.

I'd show them this video: http://vimeo.com/71278954

I guess this is in a way a response to Bret Victor's "The Future of Programming"?

https://vimeo.com/71278954

exodust
Thanks for the link. Liked the 70s vibe and humour.

From about 14:40 he gets animated, basically conducting! Would love to know a programmer's explanation for the function or purpose of arm waving and hand signals in a presentation. Not knocking, just curious!

guard-of-terra
Well, he just tries to reinforce that we have two symmetrical interconnected systems, and yet they have to figure out how to talk to each other, which is what he has on the screen.
gary_bernhardt
It is in a sense. I had an early form of the idea that became this talk in the spring or early summer of 2013. Bret's talk (which I loved!) was released shortly after. That made me think "I have to do this future talk now in case the past/future conceit gets beaten into the ground."
Bret Victor - The Future of Programming http://vimeo.com/71278954

He actually speaks about a potential alternative present. ;)

Absolutely ruined the box I was thinking in.

Reminds me of the "Future of Programming" presentation given by Bret Victor [0]. Bret's talk is much more focused on the concepts that were created during the early period of CS but abandoned (more or less) over the years, rather than the major concepts that have persisted.

[0]: http://vimeo.com/71278954

Bret Victor's talk is great, for those who still don't know it:

http://vimeo.com/71278954

I like to dig into computer history, and when I look at everything that was accomplished at Xerox and other places versus what we got, the industry really moves at a snail's pace in some areas.

At Xerox the GUI systems were developed in Lisp, Smalltalk and Mesa/Cedar, the latter being a systems programming language with automatic memory management.

Those systems had networking, GUIs, multiuser capabilities, object embedding, and the precursors of IDE and REPL environments.

Today we are still trying to re-invent those systems. How much further along would we be today if those systems had been picked up as a starting point?

Jan 15, 2014 · 2 points, 0 comments · submitted by surganov
For more information, see: http://worrydream.com/dbx/
Dec 24, 2013 · 3 points, 2 comments · submitted by lisper
cookrn
A previous thread on this: https://news.ycombinator.com/item?id=6129148
lisper
Ah. Thanks!
Dec 21, 2013 · Nzen on IDEas: Tools for Coders
Four months ago, Bret Victor published his "Future of Programming" talk. The most inspiring part to me was his 'prediction' of discarding brittle APIs for systems that negotiate a communication protocol dynamically. (Toy example: modem filter negotiation) http://vimeo.com/71278954 Relevant explanation @ 13:30-16:30.
ericHosick
Ya. I saw that and it was awesome.

> APIs for systems that negotiate a communication protocol dynamically

I think the first step is to find a very easy way to describe communication between sub-routines/processes. Once people can hook things up and compose the interaction of software with foreign/unknown systems (all in real time while the software is running) then we can get systems to start doing it dynamically (run on sentence but ya).
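(The negotiation step itself can be sketched in a few lines; hypothetical names, with each side listing what it speaks in preference order:)

  // Toy capability handshake in the spirit of the modem example: the two
  // sides settle on the best mutually supported format before talking.
  interface Capabilities { formats: string[] } // ordered by preference

  function negotiate(ours: Capabilities, theirs: Capabilities): string | null {
    for (const fmt of ours.formats) {
      if (theirs.formats.includes(fmt)) return fmt; // first mutual match wins
    }
    return null; // no common ground; refuse or fall back
  }

  const client = { formats: ["msgpack", "json", "xml"] };
  const server = { formats: ["json", "xml"] };
  console.log(negotiate(client, server)); // "json", agreed at runtime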

My reading of the situation is this:

In the beginning, feminists were concerned with how women were being directly oppressed by men. They fought against this, and laws were changed and social norms were changed, and now things are better than they were.

As feminism has advanced, it has climbed a pyramid somewhat akin to Maslow's hierarchy[1]; as concerns about the most basic rights have been addressed, attention shifts to other matters, to questions that one might describe as "academic" or intellectual in nature - it's no longer just about freedom from violence and the right to vote, own property or control one's own body, it's about ideas and beliefs and questions which are both very fundamental to society but also somewhat abstract - philosophy, in other words.

At this point, it's not unreasonable for feminists to want to re-open questions that have been settled in Western thought for centuries, on the basis that the settlement was reached almost entirely by men, and many of these men would have held sexist views which could have influenced their conclusions. Some of these re-openings will prove to be pointless - it might turn out that sexism is orthogonal to the question of logic anyhow. But it's worth checking, just to be sure. And some of the re-openings may turn out to be very useful and important.

What this is not is a question about Dennis Ritchie's beard. There's no such thing as a sexist transistor or a patriarchal processor. But our notions of how these things work, of how they ought to work, and what questions are worth asking and which are irrelevant, those things are shaped by history, and we need to step back and consider the paths not chosen, because there may be possibilities deserving of consideration there. (This is basically what Bret Victor keeps telling us[2]).

This is kinda why I don't really understand the geek vs feminist antagonism that everyone seems to take for granted around these parts. The feminist perspective is an outsider perspective that seeks to understand the true workings of the world, asks questions that others find awkward, disrupts outmoded institutions where necessary, and seeks to empower those who are often misunderstood and mistreated by society. That is also exactly the geek perspective.

There are, of course, plenty of pretty terrible feminists who do want to make an issue out of Dennis Ritchie's beard. I think it helps to think of them as being to feminists as brogrammers are to geeks.

[1]: http://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs [2]: http://vimeo.com/71278954

cheez
I think you have the most accurate description of modern feminism. The "problem" with it is that they have to create their own calculus that works.

I don't think any real female scientist would take such things seriously while at the same time sympathising with the idea (which I do, as well).

aeorgnoieang
I found the first parts of your comment interesting.

Writing for myself, I'm definitely antagonistic about feminism because I feel antagonized by it, e.g. being subjected to shaming for 'being a tool of The Patriarchy'.

Bret Victor has been a fantastic inspiration for me. I believed no programming video could make my jaw drop any longer, but Bret pulled it off. Hugely recommended: http://vimeo.com/71278954
codeonfire
Aesthetics can be fun and entertaining, but a video has to have some new and novel ideas before it's called jaw-dropping.
Reminds me of Bret Victor's "The future of programming" (http://vimeo.com/71278954), "Learnable Programming" (http://worrydream.com/LearnableProgramming/) and "Inventing on principle" (http://vimeo.com/36579366)
kineticfocus
Reminds me of Wolfram's earlier book... http://en.wikipedia.org/wiki/A_New_Kind_of_Science
Nov 07, 2013 · 2 points, 0 comments · submitted by f055
Bret Victor also has many other amazing and thought provoking talks:

Inventing on Principle: http://vimeo.com/36579366

Stop Drawing Dead Fish: http://vimeo.com/64895205

Drawing Dynamic Visualizations: http://vimeo.com/66085662

The Future of Programming: http://vimeo.com/71278954

Yep, I guess it is a good occasion to recall Victor's presentation for those who still don't know it.

"The Future of Programming", http://vimeo.com/71278954

Oct 07, 2013 · 1 points, 0 comments · submitted by lukasLansky
A very good one is "The future of Programming", http://vimeo.com/71278954.

He pretends to be at a late-70s conference, discussing how the then-current ideas about Smalltalk, GUI environments, parallel programming and so forth could be used, and what the computing world might look like in 30 years' time (meaning nowadays).

joeblau
Yeah that was a very good one. I saw that a few days after it came out and he makes some very good points in that talk. I would love to be able to chat with him 1 on 1 and just ask him questions because he seems like he's thinking about problems in very new and exciting ways.
Visual programming used to leave me completely cold, but with the emergence of tablets and the promise not to make _everything_ visual, I hope to convince people otherwise. If you have not seen Bret Victor's brilliant "Future of programming" http://vimeo.com/71278954 and "Inventing on principle" http://vimeo.com/36579366, I highly recommend them.

I promise not to make the Haskell struct mistake.

mercurial
I'll try and find time for it. Thanks for the recommendation.
Aug 31, 2013 · zeckalpha on Grace Hopper
I think you need to watch Bret Victor's talk at DBX.

http://vimeo.com/71278954

jessedhillon
Yeah everyone's seen that talk. Any technology can look cool for 5 minutes. He really neglects to observe that all of the ideas he presented were thoroughly explored during their time -- and repeatedly in the decades since. The conclusion continues to be that such systems are inferior to our current ways. That may be because nobody has pushed hard enough, but it's not because those systems, as implemented, were good enough if only someone would just pay attention.
It is exactly because of those types of developers in the industry that Bret Victor's latest talk is so interesting:

http://vimeo.com/71278954

Basically he goes on to show how many people still code like it's the 70s, instead of adopting languages and paradigms that were already possible on late-60s systems.

zzzeek
i hate watching videos but yes, this is really interesting - von neumann: "I don't see why anyone would need anything other than machine code". I deal with the resistance thing a lot in my work with ORMs; I should integrate some of this into my writing.
Aug 17, 2013 · hosay123 on Callbacks are Pretty Okay
This recent 'callbacks as goto' discussion is so utterly mundane that I've all but failed to convince my wrists to so much as twitch sufficiently to click on any of the links; just the title is enough to drive me to drink.

Callbacks are in no way "this generation's goto", they do not in any way inhibit the ability to analyse or prove correct a program, and all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

A callback is a function call, or put another way, a specification of an exact set of arguments grouped in a formal specification that is sent to a deterministic little machine (the function) to execute immediately. On completion the machine will provably produce the result it promised to return in its owner's manual (i.e. the declared return code).

None of this is "goto" in any way. In goto land, without knowing the exact set of inputs a program receives and whole-system simulation, it is utterly impossible to make assumptions about even the most minimal pieces of state in the program at any given moment.

Contrast to a function call. A function call is a fixed guarantee about the state of the system at the point a machine begins executing. Furthermore a function call has a vastly restricted set of valid behaviours: at some point it must terminate, and prior to termination, update the system state (stack) to include its return value. And upon termination, it causes the calling machine to resume operation.

All this syntax sugar shit is whack. Great, you typed 24 characters instead of 3 lines to declare a lambda. Hyper productive, go you. How does this progress toward the point where I can say "computer, make me a coffee!"?

If you're genuinely interested in what this generation's Goto might look like, take 30 minutes to watch http://vimeo.com/71278954 .. our UIs are utterly trapped in pre-1970s thinking, our communication networks are so utterly idiotic that we STILL have to write custom code to link disparate chunks of data and logic together, we're STILL writing goddamned function calls in a freaking text editor (something that was solved LONG ago). All the things like this.

I can't ask my computer to make me a cup of coffee, and it responds. I can't describe in restricted English a simple query problem and have the millions of idle machines around the world coordinate meaningfully to answer my question (and our best chances of this.. freebase.com.. got bought up by Google and left to rot as some vestigial appendage of their antispam dept no doubt).

Computing is in a sad state today, but not because of fucking callbacks. It's in a sad state today because we're still thinking about problems on this level at all.

Node.js wasn't an innovation. It was shit that was solved 60 years ago. I wish more people would understand this. Nothing improves if we all start typing 'await' instead of defining callbacks.
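(For anyone who wants the "syntax sugar" claim made concrete, the two Node.js forms below do the same work; only the surface differs. The file name is made up.)

  import { readFile } from "node:fs";
  import { readFile as readFileP } from "node:fs/promises";

  // Callback style: the continuation is passed explicitly.
  readFile("notes.txt", "utf8", (err, text) => {
    if (err) throw err;
    console.log(text.length);
  });

  // Sugared style: the runtime threads the same continuation for you.
  async function main(): Promise<void> {
    const text = await readFileP("notes.txt", "utf8");
    console.log(text.length);
  }
  main();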

Innovation in our industry has been glacial since at least the early 90s.

Edit: and as a final note, I wholeheartedly welcome you to nitpick the hell out my rant, wax verbose on the definitions of provability, the concept of a stack, the use of English versus American language, why Vim is awesome, and in the process once more demonstrate the amoeba-like mentality 90% of our industry is trapped in.

Go and build a programming language your Mom can use. Preferably by talking to her computer. Please, just don't waste it on any more of this insanity.

macspoofing
>they do not in any way inhibit the ability to analyse or prove correct a program, and all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

1) EVERYTHING is 'syntax sugar'.

2) There's nothing intrinsically wrong with gotos; they just aren't very well suited for human brains. Computers can execute goto statements very efficiently.

Callback-hell smells very much like gotos to me. It's very easy to do the wrong thing and easy to create very hard to read, hard to understand, and hard to maintain code.

marcosdumay
> There's nothing intrinsically wrong with gotos... Computers can execute goto statements very efficiently.

That's because the problem is analyzing the goto statements, not executing them. And computers suck at it, less than humans, but still suck.

i_s
You snipped out the part where he said exactly that.
KirinDave
> 1) EVERYTHING is 'syntax sugar'.

The OP's counterargument is against treating some syntactic sugar as special and novel when it is in fact not. Even voice commands are syntactic sugar, but they're way better for the lay user.

Although I dunno what he's talking about with "I can't talk to my computer." My phone is learning a lot of tricks, really fast.

> Callback-hell smells very much like gotos to me. It's very easy to do the wrong thing and easy to create very hard to read, hard to understand, and hard to maintain code.

This is mostly because people try to treat higher order functional code as in fact just a fancy syntax. Writing higher-order functional code (code that creates, consumes, and modifies functions) requires a set of alternative disciplines that most people in our industry not only don't learn, but actively despise and in many cases mock.

Even otherwise brilliant, smarter-than-me people I look up to do this and then declare the entire functional concept bankrupt. It drives me nuts. Once upon a time, when people saw OO code, they turned their noses up and said, "You shouldn't need to define hierarchies and ontologies just to reason about your code! How stupid is that!" But then proven models came out and the industry half-assed adopting them (remember Taligent? No? Look it up).

So many developers these days are whining and grousing about how static type systems inhibit their freedom and higher-order functions are whacky propeller-head notions that only nerds take seriously. And yet they wonder why the bulk of the industry moves at a glacial pace. I'm happy that Erlang's nearly-30-year-old proposition and some of the implications of Needham's duality are finally reaching mainstream computing.

arh68
> This is mostly because people try to treat higher order functional code as in fact just a fancy syntax. Writing higher-order functional code (code that creates, consumes, and modifies functions) requires a set of alternative disciplines that most people in our industry not only don't learn, but actively despise and in many cases mock.

Well it's a hell of a lot harder to write a good verb than it is to write a good noun. Similarly, writing good adverbs is nearly impossible (how many books on Lisp macros do you know of?). If I had to teach my Mom to code, I wouldn't teach her how to zip, fold, map & reduce lists-of-lists on the fly, I'd teach her the FullName noun.

Callbacks are just gotos that return. They can also pass along non-global context, like error information. It's a subtle distinction, and to be up in arms about it is indeed strange.

The industry moves slowly because they can afford to. A few million dollars can feed a hundred developers. The codebases get so large, the teams so big, that lowest-common-denominator kind of code will always prevail. Remember what I was going to teach my mom? Not lisp macros, no. Simple nouns, simple mechanisms.

KirinDave
Prepare for a solid fisking.

> Well it's a hell of a lot harder to write a good verb than it is to write a good noun.

See, that's your indoctrination talking. Really both are about equally hard. The actual definition of zip is pretty simple, assuming you have trained yourself to think about it the right way. This is no different from OO. The idea that imperative programming is "natural" is sort of a myth.
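(The whole eager definition, for the record:)

  function zip<A, B>(xs: A[], ys: B[]): [A, B][] {
    const out: [A, B][] = [];
    for (let i = 0; i < Math.min(xs.length, ys.length); i++) out.push([xs[i], ys[i]]);
    return out;
  }

  zip([1, 2, 3], ["a", "b"]); // [[1, "a"], [2, "b"]]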

> (how many books on Lisp macros do you know of?)

Quite a few, actually! But I'm not sure why this matters; Lisp macros have zero to do with not only this conversation, but this entire family of abstractions. Macros bear no resemblance to what we're talking about.

> If I had to teach my Mom to code, I wouldn't teach her how to zip, fold, map & reduce lists-of-lists on the fly, I'd teach her the FullName noun.

Why? People think verb-first all the time, describe things in verb-first ways, and act in verb-first ways. It's not unnatural.

> Callbacks are just gotos that return.

Not really.

> They can also pass along non-global context, like error information.

If they are implemented with continuations, they do a lot more. But see also coroutines.

> The industry moves slowly because they can afford to.

I submit that the resurgence of the small software shop and the incredible successes that small software startups have been seeing is a counter-argument to this. As backwards as the average Node.js shop is, they're still light-years ahead of the befuddled, ossified monstrosities that they compete with.

> A few million dollars can feed a hundred developers.

You should be ashamed of this remark.

> The codebases get so large, the teams so big, that lowest-common-denominator kind of code will always prevail.

Bridges are not constructed this way.

> Remember what I was going to teach my mom? Not lisp macros, no. Simple nouns, simple mechanisms.

Stop patronizing people. You're pretty smug for someone who doesn't know lisp. I thought being smug was my job as a lisp hacker!

arh68
> The idea that imperative programming is "natural" is sort of a myth.

Yes yes yes. I definitely agree. It's "sort of a myth," but it's also sort of true. Zip is indeed quite simple, but it wouldn't make any sense at all unless you knew what a list was. Case in point, zipping two lazy (infinite) lists like they're eager won't work at all; each version of a list would have its own zip. The verb will be more or less derived from the noun. A verbless noun makes sense, but a nounless verb? I think there is some dependency.
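(A sketch of that distinction: with generators, zip consumes two infinite streams one pair at a time, where the eager version above would never return.)

  function* naturals(): Generator<number> {
    for (let n = 0; ; n++) yield n;
  }

  function* zipLazy<A, B>(xs: Iterable<A>, ys: Iterable<B>): Generator<[A, B]> {
    const ix = xs[Symbol.iterator]();
    const iy = ys[Symbol.iterator]();
    while (true) {
      const a = ix.next();
      const b = iy.next();
      if (a.done || b.done) return;
      yield [a.value, b.value];
    }
  }

  // First three pairs of two infinite lists; terminates immediately.
  const pairs = zipLazy(naturals(), naturals());
  console.log(pairs.next().value, pairs.next().value, pairs.next().value);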

Ask some programmers if they learned function calls before they learned variable assignment. I'm obviously betting they didn't, but I'd be curious if I were wrong.

I really don't know what's "natural". I'd like to know, but I don't. I do play guitar. (poorly.) Playing a good chord is a lot harder (for a beginner) than playing a good note; I've seen plenty struggle, including myself. Now though, both are about equally hard. I have no preference. The actual structure of a -7 chord is pretty simple once you start to think in the right way. For some reason, though, beginners seem to like playing major chords and single notes. Similar story: when I was a child, I learned how to write the letters before I learned how to write the words. When I was a slightly older child, I learned the chess pieces before I learned the chess openings. It all goes hand in hand, but something's got to come first. I figure it's probably got something to do with how the brain acquires new patterns, but I know even less about neurology than I do programming.

I intended to relate adverbs to lisp macros, but I don't have a rock solid thesis on the matter. Macros can arbitrarily change the nature of functions just like adverbs change the meaning of verbs. In either case, overuse leads to crappy writing. I'd argue they're more awkward to write, but not because of any indoctrination. Just a personal thought.

I'd even bet cash that even a room of grad students could brainstorm/generate new nouns much faster than they can generate adverbs. This might not prove anything.

> You should be ashamed of this remark.

> Bridges are not constructed this way.

> Stop patronizing people.

I got confused by all this. I didn't intend to patronize anyone. Sorry.

KirinDave
> The verb will be more or less derived from the noun. A verbless noun makes sense, but a nounless verb? I think there is some dependency.

Linguistically inaccurate. But it also seems irrelevant. "Zipping things" is a great example of a placeholder noun, anything that is zippable, right? People do this all the time. "Driving" implies that you have a thing to drive, but the act of driving is clear and distinct in people's heads despite the fact that it can represent a lot of different actions.

> Ask some programmers if they learned function calls before they learned variable assignment. I'm obviously betting they didn't, but I'd be curious if I were wrong.

The answer to this question is irrelevant, but also hard to understand. Variables and functions are deeply intertwined ideas because most function calls take variables.

SICP taught functions first, and it was widely acclaimed.

> Macros can arbitrarily change the nature of functions just like adverbs change the meaning of nouns.

I do not see an interpretation of macros that is concordant with this metaphor. Macros let you define new parts of speech entirely, hand-tooled to let you perfectly express what you want the way you find most natural.

> I'd even bet cash that even a room of grad students could brainstorm/generate new nouns much faster than they can generate adverbs. This might not prove anything.

I do not think this is relevant. But if you'd like to see an example of how complicated this is, look at any note from one partner to another. Mine go like this: "Dave, please pick up these items on your way home:" and then a list. Which is a function (in IO, so monadic since it has side effects). But that is a verb THEN a list of nouns.

seanmcdirmid
>Go and build a programming language your Mom can use. Preferably by talking to her computer.

People who think this is doable with current technology usually don't have much clue in how to build a programming language.

KirinDave
People who think interactive decision tree building is hard generally haven't done even basic research in the most elementary AI techniques.

It is not hard.

The real barrier to something like that is in finding the right way to infer context for speech recognition and natural language parsing. You could easily bolt a system like that onto IFTTT and Twilio and get good results right out of the gate.

Free idea for you, there. Go make a startup.

hosay123
https://ifttt.com/

"Computer, create a rule for my car."

"OK. Which car? Do you mean Red Toyota Purchased 1997?"

"Yes."

"OK."

"Computer, when car is at location House, then boil kettle."

"Do you mean Kettle in Kitchen of 48 My Road?"

"Yes."

"OK. Is that all?"

"No. Computer, this rule is only valid on weekdays."

"OK. So when car Red Toyota Purchased 1997 is at location 48 My Road on Monday, Tuesday, Wednesday, Thursday, Friday, boil Kettle in Kitchen of 48 My Road?"

"Yes, but only if my iPhone was not inside the car."

"Do you mean Jemima's Apple iPhone 4Gs?"

"Yes."

"OK."

...
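
For what it's worth, the finished rule in an exchange like this bottoms out in a fairly ordinary data structure; the conversational work is in resolving "my car" and "kettle" to concrete entities. A minimal sketch in JavaScript, with every entity id and field name hypothetical:

    // Hypothetical encoding of the rule built up in the dialogue above.
    // The dialogue's real job was resolving "my car", "kettle" and
    // "my iPhone" to concrete entity ids before storing this object.
    var rule = {
        trigger: {
            entity: "car/red-toyota-1997",
            event: "arrives-at",
            location: "house/48-my-road"
        },
        conditions: [
            { type: "day-of-week", oneOf: ["Mon", "Tue", "Wed", "Thu", "Fri"] },
            { type: "not-present",
              entity: "phone/jemima-iphone-4gs",
              container: "car/red-toyota-1997" }
        ],
        action: { device: "kettle/kitchen-48-my-road", command: "boil" }
    };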

seanmcdirmid
I'm a PL researcher who has also worked a bit on dialogue systems. Getting those rules encoded for even a small domain would be a huge task, primarily because machine learning hasn't helped us in this area yet.

We will get there, but maybe in 5 or 10 years.

georgemcbay
Indeed!

Intelligent machine learning is 5 to 10 years off, just as it has been since the early 1980s.

ulisesrmzroche
Well put. I wasn't even born then, but things have changed so little that the first thing that comes to mind is that I really need to see Tron someday. I doubt computers will ever learn to talk human; rather, as we see more and more every day, humans are going to have to learn to talk computer.
seanmcdirmid
We will get there. It is just taking a lot longer than initially anticipated. I for one am not looking forward to trans-humanity, but I see it as inevitable.
michaelmior
If most humans eventually learn to talk computer, what's the difference between "human" and "computer" language?
georgemcbay
I was born in 1973, and by the 1980s a lot of people were convinced that intelligent machines and nearly general-purpose AI were just around the corner. There was tons of university and military funding going into projects to achieve this.

Of course, now the idea of that sort of blue-sky research is almost universally dead and the most advanced machine learning will likely be developed at Google as a side effect of much effort put into placing more relevant ads in front of us, which is kind of hilarious.

KirinDave
The primary challenge in the stated example is speech and NLP. It's not building the decision trees. We can do that today with off-the-shelf technology. Hell, it's the kind of thing people love to hack together with their Arduinos.

Microsoft even has a pretty slick thing called on{X} that does some amazing tricks with rules reacting to network events and geolocation tools.

Although I confess that commercializing a system that lets you asynchronously start potentially unmonitored exothermic reactions in your home? Probably hard. :D

hyc_symas
There are these wonderful inventions called automatic electric kettles; you should look into them. They seem to be commonplace in Europe, not so much in the US. But seriously, all it takes is a bimetallic strip to turn the kettle off when it reaches the desired temperature.

Eliza ran in 16KB of RAM. The logic flow is easy; all you need is a sufficiently large dictionary.
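
For a sense of how little machinery that logic flow needs: an Eliza-style exchange is just a list of (pattern, template) pairs where the first match wins. A toy sketch, with illustrative patterns:

    // Toy Eliza-style responder: first matching pattern wins, and the
    // captured fragment is echoed back inside a canned template.
    var rules = [
        [/I want (.*)/i, "Why do you want $1?"],
        [/I am (.*)/i,   "How long have you been $1?"],
        [/.*/,           "Tell me more."]
    ];

    function reply(input) {
        for (var i = 0; i < rules.length; i++) {
            if (rules[i][0].test(input)) {
                return input.replace(rules[i][0], rules[i][1]);
            }
        }
    }

    console.log(reply("I want a cup of tea")); // "Why do you want a cup of tea?"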

oblique63
Exactly. I think OP's argument was that we should devote more resources to these sorts of problems instead of bickering about low-level stuff so much (if they aren't directly helping that goal).

I don't see how that view implies ignorance of current technological limitations.

seanmcdirmid
The OP wrote:

>Computing is in a sad state today, but not because of fucking callbacks. It's in a sad state today because we're still thinking about problems on this level at all.

He seems to be complaining that it's sad that today is today and not 20 years from now. So what?

People have dreamed about replacing "programming" with "dialogue systems" since at least the 60s; they are like flying cars, always 10 or 20 years away. We might be closer now, but it's not like we haven't been dreaming about this since before I was born.

In the meantime, we have code to write, maybe it's worth doing something about this callback thingy problem just in case the singularity is delayed a bit.

slacka
> We might be closer now, but it's not like we haven't been dreaming about this since before I was born.

It's not about dreaming. It's about action and attitude. Continuing down the current path of iterating on the existing SW/HW paradigm is necessary, but 20 more years of it is not going to lead to strong AI. Our narrow-minded focus on the Von Neumann architecture permeates academia. When I was in college I had a strong background in biology. Even though my CS professor literally wrote a book on AI, he seemed to have a disdain for any biologically inspired techniques.

Recently, I've seen a spark of hope with projects like Stanford's Brains in Silicon and IBM's TrueNorth. If I was back in school, this is where I'd want to be.

http://www.technologyreview.com/news/517876/ibm-scientists-s...

http://www.stanford.edu/group/brainsinsilicon/

seanmcdirmid
Sounds familiar!

http://en.wikipedia.org/wiki/Fifth_generation_computer

1982 was 30 years ago, BTW.

slacka
Thanks for the link. After 60 years of promising to make computers think, even the fastest machines on the planet, with access to Google's entire database, still have trouble telling a cat from a dog. So I have to agree with the article that yes, it was "Ahead of its time... In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low-end and massively parallel processing at the high end."

As the 5th generation project shows, small pockets of researchers haven't forgotten that evolution has given each of us a model to follow for making intelligent machines. I hope they continue down this road, because faster calculators aren't going to get us there in my lifetime. Do you feel differently?

As the article mentions, "CPU performance quickly pushed through the "obvious" barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped" -- whereas 30 years later, single-threaded CPU performance has gone from doubling every 2 years to 5-10% per year since ~2007. Combine that with the needs of big data, and the time is right to reconsider some of these "failures".

Parallel programming is the biggest challenge facing programmers this decade, which is why we get posts like this on callback hell. Isn't it possible that part of the problem lies in decisions made 50 years ago?

I'm not saying we need to start from scratch, but with these new problems we're facing, maybe it's time to reconsider some of our assumptions.

"It is the mark of an educated mind to be able to entertain an idea without accepting it." -Aristotle

thelukester
Funny, a cryptographer was just explaining why you should never question your field's fundamental assumptions.

http://www.extremetech.com/extreme/164050-discovery-may-make...

seanmcdirmid
The problem is that we've been down this route before and it failed spectacularly (we've gone through two AI winters already!). It does not mean that such research is doomed to fail, but we have to proceed a bit more cautiously, and in any case, we can't neglect improving what works today.

The schemes that have been successful since then, like MapReduce or GPU computing, have been very pragmatic. It wasn't until very recently that a connection was made between deep learning (Hinton's DNNs) and parallel computing.

slacka
Yes, I did say continuing down the current path was necessary. From the languages to the libraries, today's tools allow us to work miracles. As Go has shown with CSP, sometimes these initial failures are just ahead of their time. Neuromorphic computing and neuroscience have come a long way since the last AI winter.

The hierarchical model in Hinton's DNNs is a promising approach. My biggest concern is that all of the examples I've seen are built with perceptrons, whose simplicity makes them easy to model but which share almost nothing with their biological counterparts.
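
The gap is easy to see once you write one down: a perceptron is just a weighted sum pushed through a hard threshold. A minimal sketch:

    // A single perceptron unit: weighted sum of inputs, hard threshold.
    // No spike timing, no dendritic structure, no neuromodulation --
    // hence the complaint about its biological counterpart above.
    function perceptron(weights, bias) {
        return function (inputs) {
            var sum = bias;
            for (var i = 0; i < inputs.length; i++) {
                sum += weights[i] * inputs[i];
            }
            return sum > 0 ? 1 : 0;
        };
    }

    var and = perceptron([1, 1], -1.5); // fires only when both inputs are 1
    console.log(and([1, 1])); // 1
    console.log(and([1, 0])); // 0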

oblique63
That's more of an orthogonal point than a counterpoint, though. Just because we haven't accomplished something yet doesn't mean we shouldn't invest any more effort into it.

I was responding to your insinuation that OP must not be knowledgeable about language implementation to hold the views he/she does. In this context, technological limitations are irrelevant because that was not the point; the point was an opinion about the relative effort/attention these problems receive.

I'm admittedly not really invested in this area myself so I don't really care, but it's disingenuous to try and discredit OP's views like that. At least this response is more of a direct counter-opinion.

tluyben2
There are not that many researchers (or other people) working on 'less code'. And the response from the community is always: don't try it, it's been done, and it won't work (see the responses to NoFlo's Kickstarter here on HN, for instance).

Instead I see these new languages/frameworks which have quite steep learning curves and replace the original implementation with the same amount of code or even more.

As long as 99% of 'real coders' keep saying that (more) code is the way to go, we're not on the right track imho. I have no clue what exactly would happen if you throw Google's core AI team, IBM Watson's core AI team and a few forward thinking programming language researchers (Kay, Edwards, Katayama, ...) in a room for a few years, but I assume we would have something working.

Even if nothing working came out of it, we need the research to be done. In the current state we are stuck in this loop of rewriting things into other things, each maybe marginally better to the author of the framework/lib/lang and a handful of fans, resulting in millions of lines of not very reusable code. Only 'reusable' if you add more glue code than there was original code in the first place, which only adds to the problem.

seanmcdirmid
Nothing was new with NoFlo, it was just trying the same thing that failed in the past without any new invention or innovation. The outcome will be the same.

Trust me, the PHBs would love to get rid of coders; we are hard to hire and retain. This was at the very heart of the "5th gen computing movement" in the 80s, and its massive failure in large part led to the second "AI winter" that followed.

> I have no clue what exactly would happen if you throw Google's core AI team, IBM Watson's core AI team and a few forward thinking programming language researchers (Kay, Edwards, Katayama, ...) in a room for a few years, but I assume we would have something working.

What do you think these teams work on? Toys? My colleague in MSRA is on the bleeding edge of using DNNs in speech recognition, we discuss this all the time (if you want to put me in the forward thinking PL bucket) over lunch...almost daily in fact. There are many more steps between here and there, as with most research.

So you are unhappy with the current state of PL research, I am too, but going back and trying the same big leap that the Japanese tried to do in the 80s is not the answer. There are many cool PL fields we can develop before we get to fully intelligent dialogue systems and singularity. But if you think otherwise, go straight to the "deep learning" field and ignore the PL stuff, if your hunch is right, we won't be relevant anyways. But bring a jacket just in case it gets cold again.

tluyben2
I agree with you on NoFlo, and criticism like yours is good if it's well founded. I just see a bit too much unfounded shouting that without tons of code we cannot and can never write anything but trivial software. The NoFlo example was more about the rage I feel coming from coders when you touch their precious code. Just screaming "it's impossible" doesn't cut it, and thus I'm happy with NoFlo trying even if it's doomed to fail; it might bring some people to at least consider different options.

> What do you think these teams work on? Toys? My colleague in MSRA is on the bleeding edge of using DNNs in speech recognition, we discuss this all the time (if you want to put me in the forward thinking PL bucket) over lunch...almost daily in fact. There are many more steps between here and there, as with most research.

No, definitely not toys :) But I am/was not aware of them doing software development (language) research. Happy to know people are discussing it during lunch; I wish I had lunch companions like that. Also, I should've added MS; there's great PL research there (rise4fun.com).

I wasn't suggesting a big leap; I'm suggesting considerably more research should be put into it. Software, its code, its development, and its bugs are huge issues of our time, and I would think it important to put quite a bit more effort into them.

That said; any papers/authors of more cutting edge work in this field?

seanmcdirmid
My colleague and I talk about this at lunch with an eye on doing a research project given promising ideas, but I think I (the pl guy) am much more optimistic than he (the ML/SR guy) is. I should mention he used to work on dialogue systems full time, and has a better grasp on the area than I do. I've basically decided to take the tool approach first: let's just get a Siri-like domain done for our IDE, we aren't writing code but at least we can access secondary dev functions via a secondary interface (voice, conversation). The main problem getting started is that tools for encoding domains for dialogue systems are very primitive (note that even Apple's Siri isn't open to new domains).

The last person to take a serious shot at this problem was Hugo Liu at MIT. Alexander Repenning has been looking at conversational programming as a way to improve visual programming experiences; this doesn't include natural language conversation, but the mechanisms are similar.

tluyben2
I would think that this is why PL research is very relevant here. Until we have an 'intelligence' advanced enough to distill an intended piece of software from our chaotic talking, I see a PL augmented with AI techniques as the way to explain, in a formal structure, how a program should behave. The AI would allow for a much larger amount of fuzziness than we have now in coding: it could fix syntax/semantic errors based on what it can infer about your intent, after which, preferably at a higher level of the running software, you could indicate whether it guessed correctly.

With some instant feedback, a bit like [1], this at least feels feasible.

[1] http://blog.stephenwolfram.com/2010/11/programming-with-natu...

randallsquared
The distinction between "is at" and "arrives at" is going to result in a high power or household gas bill. :)
stormbrew
This just demonstrates that the problem, for laypeople, isn't so much the programming as the debugging. What does "your mom" do when she realizes the kettle is on for nearly 18 hours every day because that's how long her car is there?

Being able to communicate effectively, in a bidirectional manner, with a computer is a skill. It's a skill people will probably always have to learn, one way or another. It's more likely that more people will learn these skills than that we'll devise a way for computers to be the smart ones in the conversation any time soon.

derefr
> I can't ask my computer to make me a cup of coffee, and it responds.

Question: what exactly would you like your computer -- that beige box under your desk -- to do, if you ask (or rather, command) it to do that? This sounds more like an embodied intelligence/robotics problem than a problem with constraint-solving goal-directed AI, per se.

Unless you want it to use Mechanical Turk to fire off a task to have a human make you a coffee. I could see that working, but it's probably not what you'd expect to happen.

STRML
If you truly believe that we're all missing the point, how do you propose it be fixed? How far down the stack do we need to start disassembling parts and rebuilding? And who, if anyone, is doing that now?
zurn
> [Callbacks] do not in any way inhibit the ability to analyse or prove correct a program

It's obvious to me that they do: humans are not that good at keeping track of these fragmented chains of flow control. They're just more balls in the air for the juggler.

In building systems, we are using many layers of abstractions all the time (or "syntax sugar" as you say).

mgkimsal
The answer to that is obvious. At least a handful of people are expert jugglers - in fact, these people are so good at it, they've never actually been good at handling one thing at a time - they constantly juggle everything, and have been doing so for decades. They've just been dying for the world to make juggling mandatory, because now they can truly shine.

All the other 99% of us need to do is take juggling classes for the next several years and reorient all of our work around juggling. Just quit doing things the way you've done (successfully) for the past several years. Problem solved.

aufreak3

    Computing is in a sad state today, but not because of
    fucking callbacks. It's in a sad state today because 
    we're still thinking about problems on this level at all.
The unfortunate part about thinking about problems is that problems are relative - i.e. they are problems only relative to a context. Nobody even in the tech world has a clue how solving a simple problem just to satisfy some intellectual curiosity can affect life on the planet. What meaning does Wiles' proof of Fermat's last theorem have for the part of the world that's sleeping hungry every night?

The stuff we're talking about here has irked some pretty decent brains enough for them to go out and make ... another textual programming language.

Folks like Musk are an inspiration indeed for boldly thinking at the high level they do and seeing it through. However, that's exactly what's easy for the press to write up. Stuff like the moon speech is what's easy to communicate. It is also simple to attribute some great deed to one of these great "visions". But behind each such vision lies the incremental work of millions - each bit of it, I say, worth every penny - unsung heroes, all of them.

On a related point, not every time we've heard a call to greater action have we seen, in those same folks, the ability to imagine how that action might come about.

edit: typos.

KirinDave
> The unfortunate part about thinking about problems is that problems are relative - i.e. they are problems only relative to a context. Nobody even in the tech world has a clue how solving a simple problem just to satisfy some intellectual curiosity can affect life on the planet. What meaning does Wiles' proof of Fermat's last theorem have for the part of the world that's sleeping hungry every night?

Like the city you live in right now?

> Folks like Musk are an inspiration indeed for boldly thinking at the high level they do and seeing it through.

"Folks like Musk" are smart enough to know that the progress of humanity, the great and meaningful leaps of technology to strive for, are accomplished by people solving those hard problems under a common banner.

A consequence you've evidently yet to learn, or are afraid to realize.

aufreak3
> "Folks like Musk" are smart enough to know that the progress of humanity, the great and meaningful leaps of technology to strive for, are accomplished by people solving those hard problems under a common banner.

That's an interesting aspect of their work you point out. It made me think about describing my "banner" and add it to my HN profile.

It would be cool if we can all identify such "banners" under which we work in our HN profiles. Note that the banner doesn't have to be a technological breakthrough. In my case, for example, I wish for cultural impact.

KirinDave
> It would be cool if we can all identify such "banners"

These days these "banners" are LLCs or C-corps. :)

aufreak3
Though true, the "banners" aren't always obvious. Some folks may have a complementary mission. The golang authors are perhaps not directly working to "organize the world's information", for instance; their banner would read "making it easy to build simple, reliable, and efficient software".

It is still a nice exercise to do, though ... and I might turn mine into an LLC anyway :)

beambot
> Furthermore a function call has a vastly restricted set of valid behaviours: at some point it must terminate, and prior to termination, update the system state (stack) to include its return value.

Hooray! We've solved the halting problem. Without formal analysis, it's not possible to show (in general) that a function will terminate.

More seriously, I've seen function behavior that's just as bad as other types of flow control (including gotos).

KirinDave
Most structured programming ignores this problem, since unless you assume some best effort from the programmer, all programming is futile.

You don't exercise your Big-O thinking here, you exercise your Big-Theta.

tel
You're looking in the wrong places. Nobody thinks we're close to building a computer your mom can talk to, but we are getting closer to understanding what computer languages are and what their properties are. On that front, research is blistering and exciting.

But the industry has no need for that stuff.

kazagistar
My Mom prefers to do her development in C, because she needs to make sure her search algorithms are very high performance. So it seems she already has a pretty good language for her use cases?
abritishguy
>incredibly fancy syntax sugar

In JavaScript, yes, but the way Python handles async stuff is not 'fancy syntax sugar'; it is actually very clever. I'm not saying JavaScript should do things the way Python does - they are fundamentally different, and I actually quite like callbacks in JavaScript - but I think you should be clear that your comments are made in relation to JavaScript (assuming they were).
jes5199
I agree with you.

It occurs to me that if you're using a whole bunch of anonymous functions with nested lexical closures, then the shared state between them starts looking a lot like global variables.

And globals are evil, but relative to the other nightmares that Gotos could produce, they're pretty mild.
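
A small illustration of that effect, with made-up timer callbacks: once several closures capture the same binding, any of them can mutate it, and what you read back depends entirely on scheduling order -- exactly the property that makes globals painful:

    var state = "initial"; // captured by every closure below

    setTimeout(function stepA() {
        state = "A ran"; // mutates the shared binding
        setTimeout(function stepB() {
            state = "B ran"; // so does this one
        }, 10);
    }, 10);

    setTimeout(function stepC() {
        // What does `state` hold here? It depends on how the timers
        // interleave -- the classic global-variable headache.
        console.log(state);
    }, 15);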

kaoD
That's not the callback's fault, but the closure's. Callbacks need not have shared state.
2mia
So true: `Node.js wasn't an innovation. It was shit that was solved 60 years ago.`

We're just reinventing the same thing because some sh*tty stuff got momentum. Take websockets: how is a webserver now better than an IRCD hacked up with an implementation of FCGI?

PS: vim is awesome :)

wslh
I don't like callbacks but I don't have anything to add to the discussion except this irony:

Do you want callbacks? Why, instead of writing var a = 1 + 2;, don't you write:

    var a;
    sum(1, 2, function (result) { a = result; });

alexakarpov
That. Also, this: http://this-plt-life.tumblr.com/post/55420560033/when-somebo...
dccoolgai
Thank you for providing some sanity to this callbacks inquisition. I am getting so tired of all the "callbacks are evil, js is evil" crusade. Sorry your pet language isn't supported by every browser ever.
derleth
> a freaking text editor (something that was solved LONG ago)

It wasn't solved, it was replaced by a different, and worse, set of problems. Other people having minds organized differently from yours isn't evidence of a usability problem.

The problem with programming isn't the syntax. The problem is teaching a Martian to smoke a cigarette: Giving instructions to something which shares no cultural background with you and has no common sense.

(Of course, the classic 'teach a Martian to smoke a cigarette' problem is rigged, because the students are never told what instructions the Martian can follow; therefore, anything they say will be misinterpreted in various creative ways depending on how funny the teacher is. On the other hand, playing the game fairly by giving the students a list of things the Martian knows how to do would reduce the thought experiment to a tedious engineering problem.)

> Go and build a programming language your Mom can use.

This sexism is worse than anything else in your barely-thought-through rant of a post. The blatant, unexamined sexism in this statement keeps fully half of the potential programmers from picking up a damn keyboard and trying.

mgkimsal
> This sexism is worse than anything else in your barely-thought-through rant of a post. The blatant, unexamined sexism in this statement keeps fully half of the potential programmers from picking up a damn keyboard and trying.

I'd throw in 'ageism' in there as well, being someone who is now 'of a certain age'.

JPKab
This is so absurd. There is absolutely nothing sexist about this statement unless you WANT it to be sexist. If a woman said this, would it be sexist? No. But a man saying it would make it so. What if a female programmer said "something your Dad can use"? Would she be sexist and ageist also?

Probably not. I'm a left-of-center guy, but I'm so sick of this politically correct bullshit coming out of left field. The complete inability to understand that not everyone is thinking of the broad, social contexts of oppression every time they utter a statement.

You KNOW WHAT HE MEANT when he said that statement, but in the typical, annoying trait unique to yuppie white assholes, you seek to distance yourself from a heritage of being an oppressor by constantly pointing out racism/sexism/ageism. It's a game, and it doesn't matter whether what you are pointing at is REAL, AND AFFECTS SOMEONE, it just matters that you score your points to prove to the professional victim class that you're not one of the bad guys, even though you look like one.

Please, feel free to protest the JIF peanut butter slogan: "Choosy moms choose JIF". But no, there isn't an organized professional victims organization around single dads, so nobody will be protesting that, because why would you? There are no points to score.

Newsflash: Your parents and grandparents were probably racist bigots like every other cracker in this country. Your game does nothing to help. A woman who is being denied a promotion at work because of her gender will get zero help from your bullshit game. She will instead be hurt by it, because of the "crying wolf" that idiots like you do for your game that causes eye-rolling in 95% of the population.

derleth
Fuck you and anyone who uses the concept of social justice as a hammer to try and silence people they disagree with. That is another serious barrier to actually solving any of these problems.
JPKab
Social justice as a hammer is exactly the idiocy I was pointing at. Nitpicking a comment taken out of context as an affront to an oppressed group is exactly that.
msie
> but in the typical, annoying trait unique to yuppie white assholes, you seek to distance yourself from a heritage of being an oppressor by constantly pointing out racism/sexism/ageism

It seems to be a common tactic to accuse someone of white-guilt to quiet them down.

seanmcdirmid
I think you both are overreacting. "Something your mom could use" could easily be translated into "something a typical person could use." I don't think any girls or retired folk were dissuaded from programming by this comment.
zanny
"Something your dad / grandfather / uncle" can use is just as apt. The point isn't "durr hurr you need a Y chromosome to do turing complete mathematics" its "99.99% of the population can't program. Write a language they can use"
seanmcdirmid
That would offend even more as it strongly implies that "mom could never program of course, but maybe we could build a language where dad could?" If we want to be all PC about it, we should use gender neutral terms; like "something your parent figure could write programs with."

Crazy.

acjohnson55
Dissuaded by that single comment, probably not, but a constant flood of low-level sexual bias does send a message to girls that they don't belong. If you follow some of the gender-flipping reactions to pop culture, it begins to strike you just how pervasive these messages are.
seanmcdirmid
The alternative "build a programming language that Dad can use" would have been much worse. I hope you see that. Unless gender bias is removed completely from our language (build a PL that your parent can use), the OP actually selected the lesser of two evils.
acjohnson55
Wasn't aware that it was a strict dichotomy. There are plenty of ways to get the point across without gendering it. And I'm not trying to bash the OP at all. It's just that I do think it's important that we all hold each other accountable. But thanks for engaging, a lot of people don't even recognize there being a problem in the first place.
coldtea
>Callbacks are in no way "this generation's goto"

They are very much analogous to GOTO -- or even COMEFROM.

They have a similar effect on the control flow, and a similar adverse effect on the understanding of the program.

And Dijkstra's core argument against GOTO applies 100%:

"""My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible."""

>all alternatives to callbacks amount to incredibly fancy syntax sugar for hiding what is really happening behind the scenes anyway.

If, for, while, foreach -- heck, even map, reduce and co. -- are also syntax sugar for GOTO. Your point? Or does adding the weasel word "fancy" somehow make this particular syntax sugar bad?

Not to mention that there's nothing "incredibly fancy" about await, async, promises et co.

And if I wanted to know "what's really happening behind the scenes", I wouldn't program in JavaScript in the first place.

gruseom
I thought of that Dijkstra quote too. It sheds light on what's wrong with the OP's proposal: in return for less nesting, it worsens the conceptual gap between the program (in text) and the process (in time). A poor trade.

I've found that nesting to indicate asynchronous control flow can be quite a good device for keeping the program text closely related to its dynamic execution. (It's true that such code is hard to read in JavaScript, but I don't think that's because of nesting per se.) It allows you to lay out synchronous logic along one dimension (vertically) and asynchronous along another (horizontally). I hypothesize that there's a quite good notation waiting to be brought out there; unfortunately, such experimentation tends to be done only by language designers in the domain of language design, when it really ought to be done in the context of working systems—and that's too hard to be worth the trouble unless you're programming in something like a Lisp.
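
A rough sketch of that two-dimensional layout, with hypothetical step functions stubbed so it runs: synchronous logic stacks vertically in one column, and each asynchronous hop indents one level to the right:

    // Hypothetical async primitive, stubbed so the sketch runs:
    function step(name, cb) { setTimeout(function () { cb(name + " done"); }, 0); }

    step("read config", function (cfg) {        // async hop: indent right
        var parsed = cfg.toUpperCase();         // sync step: same column
        step("connect", function (db) {         // next async hop
            var query = parsed + "?";           // sync again
            step("run " + query, function (rows) {
                console.log(rows);
            });
        });
    });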

coldtea
>I thought of that Dijkstra quote too. It sheds light on what's wrong with the OP's proposal: in return for less nesting, it worsens the conceptual gap between the program (in text) and the process (in time). A poor trade.

That's like saying that "by using FOR instead of GOTO we worsen the conceptual gap between the program (in text) and the process (in time)".

Only we don't worsen it -- we just abstract it to a level in which we can reason about it better.

Higher level is not worse -- except if you consider "more distance from what is actually happening at the CPU" to be worse. Which it is not: historically, the greater the distance, the more programmers got done.

We don't need to keep track of the actual low level process in time -- which with callbacks we're forced to. We just need to know how the steps of the process relate. Which await and co offer in a far better form than callbacks.

gruseom
We're not talking about distance from the CPU. At least I'm not, and I'd be shocked if Dijkstra were. No, the issue is that a program has at least two different logical structures: its lexical organization in text, and its dynamic organization in time. You want these two to be as closely related as possible, because humans rely mostly on the program text to understand it.

"Time" here doesn't mean CPU time, it means logical time--the temporal (as distinct from textual) order of operations in the program. To put it another way, the "time" that matters for program clarity is not when stuff happens at lower levels (e.g. when does the OS evict my thread) but what happens when in the source code itself. This is not so far from your phrase, "how the steps in the process relate", so I don't see the disagreement.

I certainly don't agree, though, that the OP's design recommendations lead to higher-level or easier-to-reason-about code. Do you really think they do?

aufreak3
> ... the issue is that a program has at least two different logical structures: its lexical organization in text, and its dynamic organization in time.

That's a concise way to put it and I'll certainly remember and reuse it! Did you come up with it or did you come across it somewhere?

gruseom
It's just a paraphrase of the Dijkstra quote. The same idea is implicit in the distinction between lexical and dynamic scope, which confused me for years before I got it... probably because "dynamic scope" is something of an oxymoron.
kaoD
You're not the first person I've read mentioning "if" as syntax sugar for "goto". It is not. In fact, "goto" needs "if" to replicate looping behavior.

Unless you imply "goto" can branch conditionally (e.g. je/jne/jg etc. from ASM) but that's not "goto". "Goto" means unconditional jump.

danabramov
This is a nonsensical comment, and I voted it down. The same point can be made every time languages move a level higher. This kind of rejection of the powerful in favor of the complex-but-familiar is precisely what Bret Victor warns against in the Future of Programming talk[1].

If anything, `await` makes debugging easier because you don't have to untangle callbacks and jump back and forth. You're not supposed to “debug the compiler” because, well, you know, there are test suites and everything.

Yes, this is something that takes getting used to. Just like `for` loops, functions, classes, futures, first class functions, actors and many other useful concepts and their implementations.

As for your edit, I still can't agree with you.

You're saying:

>I prefer the former because it's much easier to reason about code written in a simple language than to memorize the semantics of a complex language.

The point of `async` is making the semantics more obvious. Is it much easier to reason about Assembler than C? I say it's not. Would it be for somebody with years of experience in ASM and none in C? Yes it would.

I think it just comes down to that. Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them. Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in a `finally` block is a great example (in the blog post). You just can't do that with nested callbacks—they are not that expressive.

Async allows you to think in structure (`for`, `if`, `while`, etc.) about time; that's why it's powerful.

[1]: http://vimeo.com/71278954
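
The blog post under discussion is about C#, but the `Busy`/`finally` point translates directly into modern JavaScript's `async`/`await`. A minimal sketch, with `busy`, `fetchData`, and `render` stubbed as hypothetical placeholders:

    // Hypothetical pieces, stubbed so the sketch runs:
    var busy = false;
    function fetchData() { return Promise.resolve([1, 2, 3]); }
    function render(data) { console.log(data); }

    // `finally` runs whether fetchData resolves or rejects, so the busy
    // flag cannot leak -- a guarantee nested callbacks can't express
    // without duplicating the reset on every exit path.
    async function refresh() {
        busy = true;
        try {
            var data = await fetchData();
            render(data);
        } finally {
            busy = false;
        }
    }

    refresh();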

dap
The sarcasm in my post was unnecessary, so I've replaced it with a better explanation.
danabramov
Thanks for taking time!

I still don't agree, though; I edited my post as well to explain why I think this is exactly the moment you need to tweak the language, and not the libraries. (And this is the point Miguel was trying to make when he differentiated `async` from “futures” libraries, even from the one `async` uses, because they are irrelevant to the discussion.)

pjmlp
Thanks for mentioning Bret Victor's talk. Quite interesting to watch.
dap
"Callbacks seem simpler to you not because they are simpler (try explaining them to someone just learning the language, and you'll see what I mean), but because you got used to them."

No, they're simpler in the literal sense: they introduce no new concepts into the language or runtime semantics. (The dynamic behavior is still complex, of course.)

"Even so, error handling and explicit thread synchronization make maintaining callback-ridden code painful. I think setting `Busy` to `false` in `finally` block is a great example (in the blog post). You just can't do that with nested callbacks—they are not that expressive."

Right -- nested callbacks aren't the answer, either. In JavaScript (where most of my non-C experience comes from), a good solution is a control flow function:

    // `series` here is a control-flow helper in the style of async.series:
    // run the task functions in order, stopping at the first error.
    var busy = true;
    series([
        function (callback) {
            // step 1; invoke callback() when done
            callback();
        },
        function (callback) {
            // step 2
            callback();
        },
        function (callback) {
            // step 3
            callback();
        }
    ],
    function (err) {
        // "finally" goes here
        busy = false;
        if (err) {
            // handle the error
        }
    });
This construct is clear and requires no extension to the language or runtime.

This is fundamentally a matter of opinion based on differing values. I just want to point out that there's a tradeoff to expanding the language and to dispel the myth that callbacks necessarily trade off readability when control flow gets complex.

Well, a few weeks ago I would have agreed with your opinion.

But after watching Bret Victor's great talk about The Future of Programming (https://vimeo.com/71278954), I don't think you're 100% right.

Right at the moment I also prefer hand-written code over the code generated by some Adobe tool. But as Bret Victor puts it in the talk, I think it would be a shame if, in 20 to 40 years, we still create applications the same way we do today.

Or to put it another way: who really knows what programming is? Is programming something you do when you put characters into a text editor, or is it something you do when you click something together in a tool like Macaw?

I don't think we should wear blinkers; instead we should be open to new ways of thinking and approaching things. In the end, as long as computers do the stuff we want them to do, I don't really care anymore about how it's done.

Just my 2 cents.

vickytnz
I'd like to think that Macaw is like training wheels (or a fixie bike) for getting into web development. Is it production-ready code? Probably not. Does it do a good job of showing how things should work? Hell yes.
kazaroth
Agree 100%.

Devin's post and attitude are part of the problem, and part of why development is still utterly stuck in a writing-lines-of-text-in-an-editor mode. As if that's some canonical 'best way' to produce software.

I suggest Devin go to Bret's site and read some of his essays about tools. Software engineering is currently being massively held back (primarily in terms of efficiency) by the reluctance to build out and support tools that abstract creation further away from text editors. IMO.

Jul 30, 2013 · 9 points, 1 comments · submitted by GuiA
scribu
See https://news.ycombinator.com/item?id=6129148
Jul 30, 2013 · 25 points, 1 comments · submitted by jf
jf
Don't miss the references for this talk: http://worrydream.com/dbx/
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.