HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Is it really "Complex"? Or did we just make it "Complicated"?

da009999 · YouTube · 420 HN points · 26 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention da009999's video "Is it really "Complex"? Or did we just make it "Complicated"?".
YouTube Summary
Alan Kay, education, process science, and economics of mediocrity.
Original file: https://vimeo.com/82301919

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Adding on: Alan Kay gave several fascinating talks about the project, such as https://youtube.com/watch?v=ubaX1Smg6pY
Jan 16, 2022 · DonHopkins on Lisp Badge (2021)
The original Lisp badge (or rather, SCHEME badge):

Design of LISP-Based Processors or, SCHEME: A Dielectric LISP or, Finite Memories Considered Harmful or, LAMBDA: The Ultimate Opcode, by Guy Lewis Steele Jr. and Gerald Jay Sussman, (about their hardware project for Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course) (1979) [pdf] (dspace.mit.edu)

http://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.p...

HN discussion (excerpted with updated archive.org links):

https://news.ycombinator.com/item?id=8859918

gavinpc quoted Alan Kay from this video:

https://news.ycombinator.com/item?id=8860680

gavinpc on Jan 9, 2015 | next [–]

Here is Alan Kay from a talk that was just on HN today.

https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s

    The amount of complication can be hundreds of times more than the
    complexity, maybe thousands of times more.  This is why appealing to
    personal computing is, I think, a good ploy in a talk like this because
    surely we don't think there's 120 million lines of code—of *content* in
    Microsoft's Windows — surely not — or in Microsoft Office.  It's just
    incomprehensible.
    
    And just speaking from the perspective of Xerox Parc where we had to do this
    the first time with a much smaller group — and, it's true there's more stuff
    today — but back then, we were able to do the operating system, the
    programming language, the application, and the user interface in about ten
    thousand lines of code.
    
    Now, it's true that we were able to build our own computers.  That makes a
    huge difference, because we didn't have to do the kind of optimization that
    people do today because we've got things back-asswards today.  We let Intel
    make processors that may or may not be good for anything, and then the
    programmer's job is to make Intel look good by making code that will
    actually somehow run on it.  And if you think about that, it couldn't be
    stupider.  It's completely backwards.  What you really want to do is to
    define your software system *first* — define it in the way that makes it the
    most runnable, most comprehensible — and then you want to be able to build
    whatever hardware is needed, and build it in a timely fashion to run that
    software.

    And of course that's possible today with FPGA's; it was possible in the 70's
    at Xerox Parc with microcode.  The problem in between is, when we were doing
    this stuff at Parc, we went to Intel and Motorola, pleading with them to
    put forms of microcode into the chips to allow customization and function
    for the different kinds of languages that were going to have to run on the
    chips, and they said, What do you mean?  What are you talking about?
    Because it never occurred to them.  It still hasn't.
https://news.ycombinator.com/item?id=8860722

DonHopkins on Jan 9, 2015 | prev | next [–]

I believe this is about the Lisp Microprocessor that Guy Steele created in Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

My friend David Levitt is crouching down in this class photo so his big 1978 hair doesn't block Guy Steele's face:

The class photo is in two parts, left and right:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Here are high-res images of the two halves of the chip the class made:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.

Here is a photo of a chalkboard with status of the various projects:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Design of a LISP-based microprocessor

http://dl.acm.org/citation.cfm?id=359031

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf

Page 22 has a map of the processor layout:

http://i.imgur.com/zwaJMQC.jpg

We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.
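
To make the abstract's central idea a bit more concrete, here is a minimal Python sketch of a program stored as linked records (nested tuples) and interpreted recursively rather than as a flat instruction vector; the tiny expression language and operator names are made up for illustration and are not the paper's instruction set:

    # A minimal sketch of "programs as linked record structures, interpreted
    # recursively". Expressions are nested tuples (records) rather than a flat
    # instruction vector; the evaluator walks the tree much as the paper's
    # processor walks list structure. Illustrative only, not the AIM-514 design.

    def evaluate(expr, env):
        if isinstance(expr, (int, float)):   # literal
            return expr
        if isinstance(expr, str):            # variable reference
            return env[expr]
        op, *args = expr                     # a linked record: (operator, operands...)
        if op == "quote":                    # programs are data: return the subtree itself
            return args[0]
        if op == "if":
            cond, then_branch, else_branch = args
            return evaluate(then_branch if evaluate(cond, env) else else_branch, env)
        if op == "+":
            return sum(evaluate(a, env) for a in args)
        raise ValueError(f"unknown operator: {op}")

    # ("+", 1, ("if", ("quote", True), 2, 3), "x")  evaluates to  1 + 2 + 10 = 13
    print(evaluate(("+", 1, ("if", ("quote", True), 2, 3), "x"), {"x": 10}))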

Here's a map of the projects on that chip, and a list of the people who made them and what they did:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

1. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: Charge flow transistors (moisture sensors) integrated into digital subsystem for testing.

2. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data manipulator subsystem for searching and sorting data base operations.

3. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.

4. Mike Coln: Switched capacitor, serial quantizing D/A converter.

5. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.

6. Jim Frankel: Data path portion of a bit-slice microprocessor.

7. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.

8. Tak Hiratsuka: Subsystem for data base operations.

9. Siu Ho Lam: Autocorrelator subsystem.

10. Dave Levitt: Synchronously timed FIFO.

11. Craig Olson: Bus interface for 7-segment display data.

12. Dave Otten: Bus interfaceable real time clock/calendar.

13. Ernesto Perea: 4-Bit slice microprogram sequencer.

14. Gerald Roylance: LRU virtual memory paging subsystem.

15. Dave Shaver: Multi-function smart memory.

16. Alan Snyder: Associative memory.

17. Guy Steele: LISP microprocessor (LISP expression evaluator and associated memory manager; operates directly on LISP expressions stored in memory).

18. Richard Stern: Finite impulse response digital filter.

19. Runchan Yang: Armstrong type bubble sorting memory.

The following projects were completed but not quite in time for inclusion in the project set:

20. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1 above, this team completed a CRT controller project.

21. Martin Fraeman: Programmable interval clock.

22. Bob Baldwin: LCS net nametable project.

23. Moshe Bain: Programmable word generator.

24. Rae McLellan: Chaos net address matcher.

25. Robert Reynolds: Digital Subsystem to be used with project 4.

Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she taught him how to make his first prototype "Geometry Engine"!

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]

Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

[...]

The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].

[...]

For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)

9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.

[...]

The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

These photos look very beautiful to me, and it's interesting to scroll around the high-res image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it, so even though it's the biggest one, it really isn't all that complicated, so I'd say that "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)

This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

A full-color high-res image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.

Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?

If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!

There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

https://web.archive.org/web/20210131033223/http://ai.eecs.um...

pjmlp
Being able to upvote you only once feels like so little.

Many thanks for sharing this information with us.

This implies there's something else that's not "procedures/calls" out there, i.e. subroutines. What else is there, can we name one of them?

Subdividing a big problem into a set of smaller problems is not just some specific way of programming, it's not even a specific way of factoring a system. It's universal, as in the entire Universe essentially is factored this way. We have areas of "widening" cohesion and coupling (implementation) and areas of "narrowing" cohesion and coupling (interfaces, incl. argument/return types). These occur both horizontally and vertically as we move towards smaller and smaller problems to solve. Your actual CPU works this way, itself.

So I'm very curious what we're talking about and why the dismissive attitude towards procedures, i.e. "cobble it together out of procedure calls". In these conversations usually people just bring up something else that's also a procedure, but we call it something else, like "message passing to object methods". That's a useful metaphor at a higher level for sure. But let's not forget a method is just a procedure that runs and does things like any other procedure.

-----------

Thanks for the great questions! I am particularly delighted by the slight incredulity expressed as to whether it's actually possible for there to be anything else.

This has been my thesis for a while: the call/return style has become so incredibly dominant that it is now largely synonymous with programming itself, and thus something that is programming but "not call/return" seems self-contradictory and nonsensical, because if programming = call/return, then that would become "programming but not programming". Clearly illogical.

However, there are actually quite a few other connectors, and call/return is really just a small but important corner of the "connector space". See for example Towards a Taxonomy of Software Connectors:

    https://dl.acm.org/doi/10.1145/337180.337201
    https://www.ics.uci.edu/~andre/ics223w2006/mehtamedvidovicphadke.pdf
But any book on software architecture will do, for example Software Architecture: Perspectives on an Emerging Discipline by Shaw and Garlan:

    https://www.amazon.com/Software-Architecture-Perspectives-Emerging-Discipline/dp/0131829572


Objective-S concentrates on 3 architectural styles and their corresponding connectors, in addition to broadening call/return to full-bore messaging:

1. Dataflow: (Unix) Pipes and Filters[1], dataflow constraints[2]

2. Data access: (In-Process) REST[3], Polymorphic Identifiers[4], Storage Combinators[5]

3. Implicit invocation: Notification Protocols[6]

Most of these papers can be accessed despite the paywall from the Objective-S publications page: http://objective.st/Publications/ (that's an exception allowed by the ACM).
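
For readers wondering what a connector other than a procedure call even looks like, here is a rough Python sketch of the dataflow style listed above, pipes and filters built from generators; the filter names are invented for illustration and have nothing to do with Objective-S itself:

    # A rough sketch of a pipes-and-filters connector built from Python
    # generators. Each filter consumes an upstream iterator and yields
    # downstream; the "connector" is the lazy wiring between stages, not a
    # call that returns a finished result.

    def source(values):
        for v in values:
            yield v

    def scale(factor, upstream):
        for x in upstream:
            yield x * factor

    def clamp(lo, hi, upstream):
        for x in upstream:
            yield min(hi, max(lo, x))

    def sink(upstream):
        for x in upstream:
            print(x)

    # Wire the pipeline: each element flows through every stage, one at a
    # time, and no stage ever holds the whole data set.
    sink(clamp(0, 100, scale(10, source([3, 42, 7]))))   # prints 30, 100, 70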

> let's not forget a method is just a procedure

Exactly! I had a good laugh when I saw Rust described as a "multi-paradigm" programming language in an official Mozilla presentation, because it supports "imperative, functional and object-oriented" programming. These are all (very) slight variations of call/return.

> Subdividing a big problem into a set of smaller problems is ... universal

Absolutely! Recursive decomposition is fairly universal (and powerful). However procedural recursive decomposition is not. Except in programming, where we are only allowed to abstract using procedures. Which is both extremely weird, once you think about it, but at the same time so universal that it is extremely hard to notice it as a limitation, and even harder to think of alternatives.

The first time I really noticed this was when looking at the OCaml Lwt user-level threading library (for I/O), which had an innocuous looking passage describing CPU-bound threads as a special case. Really? I mean, yes, but no.

Anyway, I tried writing this up here, but it's just way too long, so I'll just write another blog post. Once you become aware of this issue of using call/return where it isn't really appropriate, you find it everywhere, and you notice that a LOT of the "accidental" complexity we currently suffer in development appears to be attributable to this issue.

"Code seems "large" and "complicated" for what it does" -- Alan Kay (https://youtu.be/ubaX1Smg6pY?t=113)

"One of the complaints I have is that the teams doing products that you see are far larger than they should be. And that's a failure of architecture" -- Eric Schmidt https://youtu.be/hcRxFRgNpns?t=3591

[1] https://dl.acm.org/doi/10.1145/3359619.3359748

[2] https://dl.acm.org/doi/10.1145/2889443.2889456?cid=813164912...

[3] https://link.springer.com/chapter/10.1007%2F978-1-4614-9299-...

[4] https://dl.acm.org/doi/10.1145/2508168.2508169?cid=813164912...

[5] https://dl.acm.org/doi/10.1145/3359591.3359729?cid=813164912...

[6] https://blog.metaobject.com/2018/04/notification-protocols.h...

It is hard to encapsulate what is unique about a language in a pithy one-liner. This one is the best I could come up with so far.

And yes, I am also annoyed by hyperbolic claims and titles, for example I remember a paper claiming a "Language for Cloud Services" (not sure whether they specifically mentioned AWS). Turned out to be a minimalistic FP language with rudimentary concurrency support. Applicability to cloud services was an obvious extension left to the reader. Grmph.

> Is it really true?

Yes, I believe it is true. It has taken me many years to figure this out, and I realise that it is highly non-obvious to the point of being startling and yes, seemingly over-the-top. The "About" explains it a little bit further:

> By allowing general architectures, Objective-S is the first general purpose programming language.

> What we currently call general purpose languages are actually domain specific languages for the domain of algorithms.

Maybe this should also go on the home page?

Anyway, a super short and also still pithy and incomplete justification: All current mainstream programming languages are descendants of ALGOL. The ALGOrithmic Language.

And that's not just a naming thing. I talk about this in much more detail here: "Can Programmers Escape the Gentle Tyranny of Call/Return?" (https://2020.programming-conference.org/details/salon-2020-p...)

In short, we started using procedures to both compute results, which was what we used computers for in general when we started this CS thing, and then incidentally also to organise those programs. This worked out OK, because this organisational mechanism matched what we wanted our programs to do. It largely no longer does, but the idea that we must use procedural/functional abstraction to organise our programs is so deeply and unquestioningly entrenched as to justify the term "paradigmatic".

In fact, I would argue that the fact that we unironically refer to our "algorithm DSLs" as "general purpose programming languages" shows just how deeply entrenched the call/return architectural style is.

Anyway, this architectural mismatch ( http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074... ) is now pervasive; Stéphane Chatty described it in detail for the domain of interactive programs: "Programs = Data + Algorithms + Architecture: consequences for interactive software engineering". (http://dl.ifip.org/db/conf/ehci/ehci2007/Chatty07.pdf)

So yes, to answer some of the sibling comments, you obviously can use these "algorithm DSLs" to solve more general, non-algorithmic, problems, but only because of Turing completeness. And even that is based on the notion of Algorithm:

"The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine,..." ( https://en.wikipedia.org/wiki/Turing_completeness )

In fact, on the highly theoretical front, with which I am not particularly familiar, arguments have been made that there are computation models beyond Turing machines (https://www.researchgate.net/publication/220459029_New_Model...):

Dynamic interaction of clients and servers on the Internet, an infinite adaptation from evolutionary computation, and robots sensing and acting are some examples of areas that cannot be properly described using Turing Machines and algorithms.

Anyway, coming back to day-to-day programming, the examples abound once you are aware of the potential for mismatch. (It's almost like confirmation bias in that way: once you know about it, you can see it everywhere...) For example:

I've run into plenty of situations where a streaming approach would be faster. The complexity of it always necessitates making a slower conventional version (wait for all the data to load into memory and then operate on it). The conventional approach is easier to debug and get working. 90% of the time, the gains from streaming aren't worth the added effort.

https://news.ycombinator.com/item?id=22225991

Why is the "conventional approach" easier to debug and get working? Because it is supported in the language. That's why it is conventional in the first place. And so we keep doing things that are not quite right, and the code piles on.
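
A small Python illustration of the trade-off the quoted commenter describes, with a generated sequence standing in for real data (illustrative only):

    # The "conventional" version materialises everything in memory before
    # operating on it; the streaming version processes one item at a time
    # and never holds the full data set.

    def readings():
        # stand-in for a large data source (a file, a socket, a query cursor)
        for i in range(1_000_000):
            yield i % 97

    # Conventional: load it all, then operate on it.
    data = list(readings())                       # O(n) memory
    total_batch = sum(x * x for x in data)

    # Streaming: same result in O(1) memory, but harder to poke at in a
    # debugger because there is never a complete intermediate collection.
    total_stream = sum(x * x for x in readings())

    assert total_batch == total_stream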

More generally, as Alan Kay put it: code seems "large" and "complicated" for what it does. (https://www.youtube.com/watch?v=ubaX1Smg6pY&t=126s).

Or Eric Schmidt, who wanted to invite the Gmail team for a chat in a meeting room, only to find out that it was 150+ people:

One of the complaints I have is that the teams doing products that you see are far larger than they should be. And that's a failure of architecture

https://www.youtube.com/watch?v=hcRxFRgNpns&t=3591s

I am sure everyone who has programmed for a while has had this gut feeling: this is far too much drudgery and detail work for what we're doing. I've had this since University, but it really hit home when I was asked to help on Wunderlist architecture. It was an ungodly amount of code for the iOS/macOS clients, and we were doing a fracking To Do List! And as Eric said, it is a problem of architecture. And it is hard to get the architecture right when there is a mismatch between the architecture that the program needs and the architecture that your programming language prescribes!

Now that's obviously not the only problem. But it is a biggy.

dahauns
> And that's not just a naming thing. I talk about this in much more detail here: "Can Programmers Escape the Gentle Tyranny of Call/Return?" (https://2020.programming-conference.org/details/salon-2020-p...)

I've read the paper and to be frank: I'm taken aback at how you brush away FRP in Chapter 9. And with an example in Java streams, of all languages/frameworks.

Having to implement Builder methods? No named components/difficult to encapsulate because lambdas??

No, what you show with that example is not the "gentle tyranny" of call/return, it's the "gentle tyranny" of Java (which OTOH I find a surprisingly fitting phrase :) ).

mpweiher
> I've read the paper

Thanks! And sorry, I guess, I am still figuring this out and so it's not as coherent as I would like.

>I'm taken aback at how you brush away FRP in Chapter 9

Sorry, page limit, I was simply out of space. I've done more thorough takedowns^H^H^H^H^H^H^H^H^H^H^Hanalyses of Rx elsewhere. (FRP is really something else, as Conal Elliott tirelessly keeps pointing out).

https://vimeo.com/168948673

Mind you, bringing dataflow to mainstream programming is a Good Thing™, and in some ways Rx accomplishes that. But the hoops you have to jump through to get there are...oh boy.

> Java streams

Hmm...I found the issues in other frameworks to be very similar.

Enjoy!

Since Alan Kay mentioned Nile, I'm gonna dump here the list of relevant resources on Nile and its derivative works that I've found over time:

Ohm, the successor to the parser (Ometa) used by Nile: https://github.com/harc/ohm

Maru, Nile's compiler/interpreter/whatever: https://www.reddit.com/r/programming/comments/157xdz/maru_is...

Nile itself: https://github.com/damelang/nile (pdf presentation in README) https://programmingmadecomplicated.wordpress.com/2017/10/11/...

Gezira, svg drawing lib on top of Nile: https://github.com/damelang/gezira/blob/master/nl/stroke.nl http://web.archive.org/web/20160310162228/http://www.vpri.or... https://www.youtube.com/watch?v=P97O8osukZ0 https://news.ycombinator.com/item?id=8085728 (check the demo video) https://www.youtube.com/watch?v=ubaX1Smg6pY (live demo on stage later on) https://github.com/damelang/gezira/tree/master/js/demos (downloadable and runnable in browser)

KScript, UI building on top of Nile: https://www.freudenbergs.de/bert/publications/Ohshima-2013-K... http://tinlizzie.org/dbjr/KSWorld.ks/

Final Viewpoint Research Institute report, where the entire system (antialiasing, font rendering, SVG, etc.) was rendered using Nile in Frank, the Word/Excel/PowerPoint equivalent written completely in Nile + Gezira + KScript: http://www.vpri.org/pdf/tr2012001_steps.pdf

VPRI wiki restored from archive.org: https://web.archive.org/web/20171121105004/http://www.vpri.o...

Shadama, which the Nile author (Dan Amelang) worked on too: https://2017.splashcon.org/event/live-2017-shadama-a-particl...

Partial DOM reimplementation on top of Gezira: https://github.com/damelang/mico/blob/master/gezira/svg.js

Partial reimplementation of Gezira: https://github.com/Twinside/Rasterific

And the famous Nile Viewer itself (everything interactive): http://tinlizzie.org/dbjr/high_contrast.html https://github.com/damelang/nile/tree/master/viz/NileViewer

Commentary: http://lambda-the-ultimate.org/node/4325#comment-66502 https://fonc.vpri.narkive.com/rtIMYQdk/nile-gezira-was-re-1-... http://forum.world.st/Vector-Graphics-was-Pi-performance-fun...

Since Alan Kay himself is on Hacker News, maybe he can comment with more links I couldn't find. It's hard to fire up the live Frank environment, though understandably reproducing the working env might not have been a top goal. Maybe someone more cunning can merge these into a single executable repo or whatever so that I can stop playing archaeologist =)

The demonstration of this technique by the VPRI team was very impressive. http://www.vpri.org/pdf/tr2012001_steps.pdf Shame that team seems to have gone quiet (not dead) since then.

Here’s a video that covers the concepts in a more entertaining format. https://youtube.com/watch?v=ubaX1Smg6pY

curuinor
They turned into Dynamicland

https://dynamicland.org/

"Is it really Complex? Or did we just make it Complicated?" (Alan Kay): https://www.youtube.com/watch?v=ubaX1Smg6pY
abhishekjha
This reminds me of a comment on a book titled ‘Discrete Mathematics and Its Applications’ by Kenneth Rosen, which was along the lines of how smart the author was while at the same time not being able to explain the concepts to someone else in an easy manner, and to be honest I had the same feeling. I understood those concepts from other texts, which definitely means that it isn't me who was inadequate or not up to the task. I was willing to learn, but the smartest of the texts in the field left me feeling stupid. Can somebody else chip in on this argument?
agumonkey
Most of science is bridging gaps: between a person and a phenomenon, then between other persons.

It should be said every semester or so that if you wish to understand something, don't stop until you find something that matches your own idiosyncrasies.

mabbo
I TA'd a first-year University course using, I believe, that textbook during my final two years of undergrad.

The material itself is difficult to learn and harder to teach well. At my University, it seemed that the Prof had to teach the subject for 3 or 4 years before they could reasonably get half the class to pass the final exam.

At the same time, I've often felt like I would have no harder of a time teaching the subject to 11 year olds. It's just a weird topic.

bordercases
Yeah, there's unity to it and no system other than "break the problem apart, look for these properties, think hard, and match them to the identity that also has these properties".

A lot of interpretation and intellectual confidence involved, difficult things to communicate.

Alan Kay - Is it really "Complex"? Or did we just make it "Complicated"? - https://www.youtube.com/watch?v=ubaX1Smg6pY
0x445442
Anything by Kay really.
but looking at just one of their projects is antithetical to harc

from Götz' writeup:

> Simply building prototypes with prototypes would not be a smart recipe for radical engineering: once in use, prototypes tend to break; thus, a toolset of prototypes would not be a very useful toolset for developing further prototypes. Bootstrapping as a process can thus only work if we assume that it is a larger process in which “tools and techniques” are developing with social structures and local knowledge over longer periods of time.

a lot of the ideas, or prototypes, in harc are half finished and or completely abandoned

sometimes an implementation's best contribution is the ancillary knowledge gained by attempting or developing

also, i think kay's presence is less 'hero worship' and more a reminder of shared goals(o) as well as a default standin

Götz's first contribution to a project in the space was to create an animation using cutouts from multiple copies of a picture of kay that were lying around

would you call lenna 'hero worship'?(i)

(o) https://www.youtube.com/watch?v=ubaX1Smg6pY

(i) https://en.wikipedia.org/wiki/Lenna

skadamat
Yeah this is spot on. The point of this kind of research is to discover how to frame the problem (we don't really know what it is yet) and create new contexts for thinking about computing. Under the lens of the current open problems in computing, their approach may seem odd, different, not grounded in evidence, etc. But like good art, good, foundational research provides new ways to think about the field entirely.
sitkack
Prototypes increase understanding and discovery without an undue burden of making something operational. The parent comment strikes me as a "do you even".

Is it hero worship or having a wise, hardened bad-ass on the team?

Apr 24, 2017 · 185 points, 79 comments · submitted by saturnian
goatlover
I wonder what a Smalltalk-like environment would look like if it had been developed in the 2000s instead of the 1970s.

If you could marry up the advantages of text/files, live code, visual layouts, data visualization and perhaps machine learning in the future, maybe you could come up with a huge jump in productivity and the ability to handle complexity.

rjeli
Mathematica gets live code, data visualization, and ML, all with a lispy+tacit+functional syntax. I find it much more integrated and easy to use compared to Jupyter notebook, although it's near useless for anything imperative -- I find myself using Jupyter a lot these days to develop one-off scripts interactively.

(please, no one mention stephen)

TeMPOraL
> (please, no one mention stephen)

Why? Does he come to every place that mentions his name, like that AI guy?

rjeli
Because any HN discussion that mentions the W-word will inevitably devolve into discussion of the controversial man
505
(I am pleased you brought up Stephen W, and also that you asked us not to. That's all you'll get from me.)
hardlianotion
I would have called him the W-ster and respect parent's wishes
stcredzero
If you could marry up the advantages of text/files

Is that really such an advantage? What kind of advantage does having source code scattered in files have over Smalltalk's Change Log? The Change Log greatly simplifies having live code, and having a runtime environment where you could crash the system with a runtime change. Source code in text files complicates this. It's also a powerful development tool all by itself. What's more, it's just a clever use of a text file!

Qwertious
The way I see it, plaintext's advantage is much like the iPhone's stylus-less touchscreen - it's much more direct, and people deal with it much more intuitively as a result. Although I'm starting to think that it's more about not coupling the program and data file, and providing documentation (a comment-less plaintext XML file is often not much more useful than a binary file).
goatlover
I mean in being able to generate source code as text files at any point, or update from text files, since that's what programmers are used to, and so much of the tooling is built around that. You're not going to be putting an image on GitHub.
stcredzero
I mean in being able to generate source code as text files at any point, or update from text files

It has been done many times. There was a Camp Smalltalk initiative to standardize such a mechanism back in the early 2000's. Anyone could code something up that does this for a particular dialect in a matter of minutes.

derefr
How about an Erlang unikernel with its relup functionality, running under a VM with the ability to hibernate to disk? That gives you nearly the same set of benefits as Smalltalk, without being nearly as "fossilized."
jacquesm
But with a huge barrier to entry. Smalltalk is at least reasonably easy to grasp for people new to programming, Erlang not so much (though it is incredibly powerful).

The graphical nature of the Smalltalk environment also really helped to make it accessible. Erlang lives mostly in text terminals.

I'm still not convinced of the 'image' mechanism, it's really nice to have implicit and automatic persistence but it glues the code so strongly to the data that it starts to hamper collaboration. Being able to easily pull a bunch of stuff from one machine to another and to integrate it with stuff that was already there is something that other programming languages have solved very well (together with DVCSs), Smalltalk seems a step backwards in that regard.

Though there are times I wished for an easy way to hibernate an entire session for later re-use.

defined
I dispute that Erlang has a huge barrier to entry. Although there is a larger barrier than for, say, Python or Ruby, I feel that Erlang gets an undeservedly bad rap, especially given the ROI.

I am no genius, but I was able circa 2008 to develop a significant production system in Erlang, while learning Erlang on the job over a period of 3-4 months. I had never programmed in a functional language before (C++ was my forte). In 2008, the tooling and environment surrounding Erlang was far less supportive than it is today, so the barrier now should be lower.

If you want to talk about a huge barrier to entry (for me, anyway), it's the Great Wall of Haskell. I found it a great deal harder to learn Haskell. I have only written one small utility in it and still don't claim to know the language. And that's after using Erlang for many years.

Also, these days, Elixir is reputed to lower whatever barrier to entry there is for Erlang.

gkya
> circa 2008 to develop a significant production system in Erlang

Is that whatsapp by any chance? ;)

defined
Not nearly that significant! I wish!
hvidgaard
Haskell is less about the "language" and more about the mindset. You simply have to program in a very different way, and it is hard to rewire your brain to do that. Once you learn Haskell, you will program differently in other languages as well, simply because you think differently.
derefr
> Erlang lives mostly in text terminals.

People (outside of Ericsson) just haven't bothered to take much advantage of Erlang's strengths. Erlang speaks network wire protocols very efficiently, so if you want graphical Erlang sessions, you just need to write Erlang applications that act as e.g. X11 clients. Which is what things like Erlang's own Observer application do, complete with the ability to use the graphics toolkit Tk. (Or, if you like, you could expose an HTTP server serving a web-app with live updates over a websocket, like https://github.com/shinyscorpion/wobserver. Or a VNC/RDP server. It's really all equivalent in the end.)

Unlike Smalltalk where the graphical capabilities are part of the local console framebuffer, Erlang's approach allows you to interact with a running Erlang node graphically irrespective of your locality to it—which is important, because, in the modern day, your software is very likely a daemon on a headless server/cloud somewhere.

defined
> your software is very likely a daemon on a headless server/cloud somewhere

This is true of every Erlang system I have developed, so, I concur.

goatlover
Then this could work for Elixir as well, which seems to be a little more approachable for the majority of programmers.
jacquesm
After long searching I've settled on python for hardware control, number crunching and ML stuff (it's really just wrappers around C libraries and GPU kernels) and Erlang for everything else. So far no regrets. I wasn't aware of that software, thank you for the pointer!
jfoutz
I've never used it seriously, but the bit pattern constructs look really sweet. I don't write a lot of protocol code anymore. But I remember doing parsing badly on 10-100 bytes to figure out what a message meant. WebSocket stuff would get a lot of help from that syntax.
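
For anyone who hasn't done this kind of parsing by hand, here is a rough Python sketch of the byte-twiddling the commenter means, using a simplified two-byte header loosely modeled on a WebSocket frame; Erlang's bit syntax expresses the same destructuring as a single declarative pattern:

    import struct

    # A rough sketch of hand-parsing a small binary header in Python, the kind
    # of fiddly work Erlang's bit syntax replaces with one declarative pattern.
    # The layout is a simplified, illustrative two-byte header (FIN flag, 4-bit
    # opcode, MASK flag, 7-bit length), not a full WebSocket implementation.

    def parse_header(buf: bytes):
        b0, b1 = struct.unpack("!BB", buf[:2])
        fin    = (b0 >> 7) & 0x1
        opcode = b0 & 0x0F
        masked = (b1 >> 7) & 0x1
        length = b1 & 0x7F
        return fin, opcode, masked, length

    print(parse_header(bytes([0b10000001, 0b00000101])))   # (1, 1, 0, 5)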
erikj
As far as I know, Bret Victor is looking into exactly that; check out his talk "The Future of Programming": https://www.youtube.com/watch?v=IGMiCo2Ntsc
noir_lord
Check out https://en.wikipedia.org/wiki/Oberon_(operating_system)

It never caught on but it was an interesting path not taken in terms of what an operating system could be.

billsix
Open croquet http://www.opencobalt.net/
buzzybee
I believe Red[0] is the closest to practically realizing this concept, by focusing on compositions of small languages, a premise Alan Kay also worked on with the STEPS project at VPRI[1].

The main thing that stops people from beelining down this path is the sheer quantity of yak-shaving involved. We're all impatient and have near-term goals, and glue-and-iterate gets us there without having to engage in a non-linear deconstruction and analysis of what's going on.

[0] http://www.red-lang.org/ [1] http://www.vpri.org/html/writings.php

jnordwick
Trying to find a short REAL Red example (not Hello World or here's how to show an alert), and I can't seem to find one. Can you help me out? Something that would help me understand what the language is like.
jk4930
Look at Rebol 2 docs. Or try this (enough for me to get started): http://redprogramming.com/Getting%20Started.html
oofoe
Check out the REBOL examples on Rosetta code. I'm very fond of the "percent of image difference" one -- it's not large, but shows off some of the nice features like image handling and the fantastic REBOL GUI dialect. (Yes, I wrote it...)
throwaway7645
The concept of Red is heavily based on Carl Sassenrath's Rebol, only Red is both very high level and fully capable of low-level programming as well. Rebol can show you the high-level things possible with Red. It truly is amazing. Even though it's kind of old now, I installed Rebol recently and was blown away by how much power I got with no installation. Red will be much the same and allow you to make minuscule native binaries.
perfmode
Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

(I understand there are physical constraints that prohibit super low-latency memory lookups (of unconstrained size) in 0+epsilon time (where epsilon is small))

jacquesm
> Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

I'm pretty sure Chuck Moore (yes, he's still around) would be able to fit the interpreter and an entire OS into the L1 cache with room to spare. Forth technically is an interpreter.

protomyth
The amount of Forth code you could put in a small space is amazing. That's one of the reasons the 128k Mac had Forth show up on it relatively quickly.

I remember back in the day someone was working on a Forth version of OpenStep. Weird, but there were some funky Forths then too. I so wish they had succeeded.

mikekchar
You can get a FORTH kernel in 2K words. It's also incredibly efficient for your own code since you basically have a dictionary and memory addresses. Thinking about it, in the old days dictionaries stored 8 character identifiers, which will handily fit in a 64 bit word. That means that the dictionary only needs 2 words per entry.

As you imply, the interpreter will be dwarfed by the code needed to talk to the rest of the OS. As an aside, this is why I was initially very excited about the JVM when Java first came around. Compiling down to a FORTH style language should give you pretty impressive benefits.

Virtual machines were very popular for a long time, but I'm not entirely convinced that we've really pushed the concept as far as it can go.
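
A back-of-the-envelope Python sketch of the two-words-per-entry arithmetic above; purely illustrative, not any particular Forth:

    # An 8-character name packs into one 64-bit word, and the code address is
    # a second word, so a dictionary entry is two words.

    def pack_name(name: str) -> int:
        """Pack up to 8 ASCII characters into a single 64-bit integer."""
        raw = name.encode("ascii")[:8].ljust(8, b"\0")
        return int.from_bytes(raw, "little")

    # dictionary = list of (packed_name_word, code_address_word)
    dictionary = [
        (pack_name("DUP"),  0x1000),
        (pack_name("SWAP"), 0x1040),
        (pack_name("OVER"), 0x1080),
    ]

    def lookup(name: str) -> int:
        key = pack_name(name)
        for packed, addr in reversed(dictionary):   # most recent definition wins
            if packed == key:
                return addr
        raise KeyError(name)

    print(hex(lookup("SWAP")))   # 0x1040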

chriswarbo
> Is it theoretically impossible to fit an interpreter for a dynamic programming language in the L1 cache of a modern chip?

The APL, J and K languages are known for being very fast; I vaguely remember one reason being given that their programs are so terse and high-level, they're compact enough to fit in cache. Not sure if the interpreter would though.

agumonkey
Also, APL, J and K have low interpretation cost, so you're pushing the metal close to its limits.
kmicklas
Even fitting the interpreter in cache, there's obviously still some overhead to interpreting instructions rather than executing them directly.

Also, I suspect most of the memory related slowdown with interpreters is due to the indirections in memory representation of data/code, not the interpreter itself falling out of cache.

bajsejohannes
There was a period where Opera had the fastest JavaScript engine of all the browsers. It was a stack-based runtime like all the others at the time and, as a side effect of being developed for mobile devices, was small enough to fit in cache. That was the key to the performance advantage. Then came V8 and everyone changed over to JITing compilers.
mlvljr
FORTH has claimed exactly this for decades, I think :)

(or was that L2?)

It may be doing just that right now in your x86 BIOS :)

https://news.ycombinator.com/item?id=8869150 for a HN discussion

erikj
Symbolics managed to fit their Lisp VM into the cache of DEC Alpha processors in the early nineties: http://pt.withy.org/publications/VLM.html
perfmode
Thanks for the information.

> We built a prototype of the emulator in C, but it quickly became obvious that we could not achieve the level of performance desired in C. Examination of code emitted by the C compiler showed it took very poor advantage of the Alpha's dual-issue capabilities. A second implementation was done in Alpha assembly language and is the basis for the current product.

First pass in C. Final, in Assembly.

Chip at the time was a first-generation DEC Alpha AXP 500 which had a 512 KB B-cache and two 8 KB caches.

https://en.wikipedia.org/wiki/DEC_Alpha

Let's say it's present day and you want to fit into a 256K L2. What language toolchains are available? How far can one go with JIT?

ploxiln
I think Lua, statically compiled against musl libc, can fit in 200KiB. Not in a (32KiB) L1 cache as the grand-parent comment asked, but in L2. There's also LuaJIT, which I think is only a bit bigger, I'm not sure ...
mirimir
Maybe better: https://vimeo.com/82301919

A link to a transcript would be cool.

Edit: There's a transcript of the iPad question here: https://news.ycombinator.com/item?id=8857113

lsd5you
This is a distinction I first learned about working in France years ago. Without any real basis, I wondered whether it is their generally more precise use of language which made it a more obvious distinction for a French person to make. At the time the two words were more or less synonyms for me, but since then they have become very distinct, especially when talking about software!
dkarapetyan
In what sense are other languages less precise?
kmicklas
> their general more precise use of language

This isn't really true, it's just a snobby idea the French have somehow successfully convinced us of. (It goes along with the idea that they have the most "refined" culture or something).

agumonkey
I beg to differ. As a young adult I became enamored with English for its simplicity in structure and vocabulary; I was annoyed by French redundancy and diversity and almost started to think in English.

A few years later, English feels restrictive and too simple. French aggregated many influences from centuries at a crossroads, and it seems it kept a lot of them in order to be able to add subtle layers of information, using particular sets of words that fit together well to propel metaphors and other succinct yet precise descriptions of the world.

Now, I say that on average, about the mainstream incarnations of both English and French. Surely you can find poetic and tailor-made English wording; in France it seemed part of the culture, but recently it's been on the way out, and only elders speak a bit that way.

kmicklas
> French aggregated many influences from centuries at a crossroad

This describes English perhaps more so.

> and it seems it kept a lot in order to be able to add subtle layers of information by using particular sets of words fitting together well to propel metaphores and other succint yet precise description of the world.

It's your native language, so of course you might think that (especially when combined with this snobby French cultural ideal).

agumonkey
I don't appreciate your comment. I left my own native tongue for a reason, and came back to it for one too. I'm not even boasting superiority. I don't give the slightest damn if you talk with metaphors or whatever figure of speech there is; I don't submit to cultural ideals or snobbery. It's a point of view on how one likes to communicate. And I very, very rarely encountered it, even when talking, reading and watching almost entirely English sources for many months.

Also, how can England be more influenced by other countries, as an island? The naval conquests, the Commonwealth? I don't know history much, but it seemed to me they kept a very cohesive identity except for the not so French/English feedback loop.

nol13
dat baguette tho, mmm
gutnor
A lot of the specific terms in English come from the common French vocabulary and are still very (very) close to the common words in the French spoken today. The common vocabulary in English is of Germanic origin. Actually, I think you can basically speak about anything using only words of Germanic origin.

In order to learn French and its vocabulary, English speakers will find a lot of similarity, but from the more formal side of their vocabulary. That would lead English speakers to think French is more precise; I don't think the French have much to do with this.

That's, BTW, a common mistake English speakers make when evaluating a French speaker's proficiency. The fact that I use rarely used words does not mean that I have a large vocabulary; it is just the opposite.

eli_gottlieb
>In order to learn French and its vocabulary, English speakers will find a lot of similarity, but from the more formal side of their vocabulary. That would lead English speakers to think French is more precise; I don't think the French have much to do with this.

Worse, the French apparently teach young students to write in a way that they consider profound, and the Anglosphere considers imprecise drivel.

arkades
Do elaborate?
eli_gottlieb
I can't really go into detail much, but my wife took intensive/immersion French in her school days. As she became fluent in basic spoken and casual-written French, they taught her the French style of literary writing. She's the one who told me it's meant to be profound or deep, but comes across to her Anglo brain as vague and, well, bad at saying anything at all.
Terr_
There's also "Anglish", where non-Germanic influences are replaced. A funny sample of this is Uncleftish Beholding [0], a fictional textbook entry by sci-fi writer Poul Anderson.

[0] https://groups.google.com/forum/message/raw?msg=alt.language...

faragon
The fool complicates the simple, while the wise simplify the complex.
goatlover
And evolution laughs at us.
Buge
He mentions a Microsoft Office bug that's been around since the 80s. Is there any more information about this?
matt4077
Pah, "complex" is just Latin for "put together". Take it apart, divide and rule.
defined
More like, divide and be strangled by the huge web of interrelationships... :)
dkarapetyan
This is a fun one. But then again most Alan Kay talks are fun.
eternalban
Alan Kay has had a few decades to empirically demonstrate that "we" have willfully made it complicated. I don't believe he has done so.
elviejo
I think JavaScript, Node.js, Electron, HTML and CSS demonstrated that "we" have made things complicated.
macinjosh
This is the first thing that came to my mind while watching the video. Web development has become so overly complicated. I believe it is mostly due to working around limitations of the platform and the lack of a de facto standard of how web development should be done. In contrast for smartphone development, largely speaking, the way to do it is the way Android and Apple provide. The web has no single vendor to dictate that and what we have now is a result of that.
afarrell
I think a lot of it is a lack of documentation. This has two effects.

1) People just get started and get used to not really understanding the interface they are working with. Who hasn't felt like they were playing whack-a-mole with CSS layout? This means a lack of conceptual clarity isn't notable.

2) Nobody ends up noticing just how complex the rules are.

It was 9 years from when I started looking for a tutorial like http://book.mixu.net/css/ to when I actually found it. That is...really quite bad.

chadcmulligan
I had the same thought; if he had a solution, then he should have it by now.
eternalban
His critical error is evident in his comparative analysis that places Physics and Programming on the same level. The systems that underlie the natural sciences are givens. The entire kettle of soup of software complexity boils down to the fact that software engineering must first create the 'terra firma' of computing. That is the root cause of the complexity in software: it lacks a physics.
chriswarbo
I think it's the other way around:

In physics, we don't know what the fundamental rules are, we can only see complicated outcomes and have to infer (guess) what the rules might be.

In computing, we know what the fundamental rules are (universal computation; whether that's Turing machines, lambda calculus, SK combinator logic, etc., they're all equivalent in power), but we have to deduce what the complicated outcomes are.

aethertron
>In computing, we know what the fundamental rules are

In a limited way. Because we're making systems that involve people. Important and relevant aspects of human nature must go far deeper than our present understanding.

nickpsecurity
There have been numerous logics and unified methods for specifying, synthesizing, or verifying software. The problem wasn't that we didn't have one. The problem is the intrinsic complexity of the domain. It leaks through in all the formalisms: the formalism gets ugly if you want to automate it, and it gets clean only with much manual labor.
chongli
He tackles your question during the Q&A. He and his PhD student, Dan, talk about how so much of the effort to create 'terra firma', as you call it, is caused by the terrible hardware sold to us by Intel. He argues that much of the hardware we have is just software that's been crystallized too early. If he had a machine more like an FPGA he could build all of these abstractions in powerful DSLs right down to the metal.
blihp
He actually did lead a project which took this on: STEPS. (I think this is the last annual report on the project: http://www.vpri.org/pdf/tr2012001_steps.pdf) They did build a functional proof of concept which was significantly smaller/less complex than Smalltalk/Squeak which were predecessor projects he and his team worked on. Unfortunately, it's not based on the trinity of files, curly brackets and semicolons so it's not likely to take the mainstream computing world by storm.
dkarapetyan
Have you used OMeta? That came out of VPRI, I believe: https://en.wikipedia.org/wiki/OMeta. It is a really nice approach to constructing parsers and interpreters. Then there is "Open, extensible object models": https://www.recurse.com/blog/65-paper-of-the-week-open-exten.... Again by the same folks.

Those are the one that come to mind when I think of simple and very powerful tools. There are many others. So I think Alan Kay and friends have demonstrated that we have made things complicated for dubious reasons.

chubot
I've read the OMeta paper, the object models paper, and most of the VPRI papers. I'm generally sympathetic to their point of view -- I hate bloated software.

But what has been built with these tools? If OMeta was really better, wouldn't we be using it to parse languages by now? They already have a successor Ohm, which is interesting, but I also think it falls short.

I'm searching desperately for a meta-language to describe my language, so I would use it if it were better.

I think they generally do the first 90%... but skip out on the remaining 90% (sic) you need to make something more generally useful and usable.

dkarapetyan
I think this is the case with all software. None of it is 100% or even 95%. This is why I've given up on learning anything language specific. If you understand the concepts then you'll be able to re-create the pieces you need in your language of choice because most of the time the other 5% or 10% is context dependent.
chubot
Yes I think you're expressing the same sentiment as Knuth, who I quoted in my blog here:

http://www.oilshell.org/blog/2016/12/27.html

I tend to agree, and most of my shell is either written from scratch or built by copying and adapting small pieces of code that are known to work. I take ownership of them rather than using them as libraries.

That is even the case for the Python interpreter, which is the current subject of the blog.

I'm not quite sure, but I think OMeta falls even shorter than the typical piece of software. I'm not saying it's not worthwhile -- it seems like a good piece of research software and probably appropriate for their goals. But to say it generalizes is a different claim.

ejz
This is a good line!
TheAceOfHearts
I haven't watched the video yet, so please forgive the possibly premature comment... But this is something that I've found myself thinking about a lot lately. Are the things that we're currently building or maintaining truly that complicated or are we over-engineering things? I've been humbled on more than one occasion where I initially thought an enterprise-y solution was over-engineered, until all the details of the problem were explained to me.

What I wish we had was a "man" equivalent to provide every-day examples of how to use the tool "correctly" (although I'm aware there's stuff like "bro" pages), as well as another tool to explain why some tool / option even exists and how they're "expected" (by the creator / maintainers) to be used. As I've gotten into the habit of reading man pages I've become increasingly aware of how many options certain tools provide, but in many cases I really cannot fathom why those options are available or in what kind of situation they might be used.

Qwertious
Regarding your comment on man pages: this is something I've been thinking a lot about lately. The fact is, we completely neglect documentation, so much so that it has been pushed up a layer, into the browser - namely, Google (and Stack Overflow).

Fundamentally, we don't really do discoverability well because volunteers are fickle as hell, and commercial interests are profit-focused.

traviscj
It is too bad "bro pages" weren't just "example pages" :-/
braveo
Most things can be done in a less complicated manner, but it costs more.

Consider SAP. It's complicated to say the least, but a lot of that complexity comes from its generality, its flexibility, and the quality of the underlying solutions it provides.

Any solution you write in SAP could be written in a much simpler manner using simpler tools, but getting your solution to the quality of what you get in SAP would be hugely expensive and take a lot of time.

In that way we subsidize each other, but in doing so, we often make things much more complicated than they strictly need to be to solve any particular problem.

Now this is a different class from things that are just shitty design. Those exist in abundance, and it's unfortunate, but that's life.

You'll enjoy these two videos from Alan Kay and Uncle Bob Martin, respectively:

"Is it really 'Complex'? Or did we just make it 'Complicated'?" - https://www.youtube.com/watch?v=ubaX1Smg6pY

"The Future of Programming" - https://www.youtube.com/watch?v=ecIWPzGEbFc&list=PLcr1-V2ySv...

That's serendipitous. I've been reading Mark's dissertation [1] for the past week, as well as Rees's paper [2] on W7 (his full O-Cap Scheme language).

A lot of great ideas. The ideas behind Mark's E represent a critical part of half the solution. I think Google et al. are going in the wrong direction: churning out hundreds of millions of lines of code and yet not reducing the complication. (Talk to anyone who has tried to deploy a production-ready Kubernetes cluster.)

There's a wide gap in the market for a product that makes building distributed systems straightforward. Using Alan Kay's "Is it really 'Complex'? Or did we just make it 'Complicated'?" [3] as a yardstick, I feel it can be done in <100k LOC, which would mean a single developer could grok the whole system in under a month.

I'm still working through the details and building prototypes. Allow me to mind-dump the basic ideas and guiding principles.

It's a fully distributed, orthogonal, concurrent, object-capability operating system built upon Scheme, guided by the principle of isomorphism that code is data and data is code, and that therefore both can be sharded and replicated as demand dictates.

So, let me break it down piece by piece. First, it's an operating system, which means this is what you boot. You boot directly into a Scheme runtime. Second, it's fully distributed at the operating system level, which means you can spin up new machines, in a cluster, on demand, and they'll join the rest of the group autonomously. Third, it's an orthogonal operating system, which eliminates the need for a file system; the "files" are your objects, i.e., structured data, and the operating system is responsible for serializing the machine's runtime. Fourth, it's an object-capability system with real object-oriented programming based upon messaging. Finally, the concurrency is achieved similarly to E, Erlang, Go, etc.
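
To make the object-capability part above a little more concrete, here is a minimal sketch in Python of the classic purse pattern from the E literature. The names and the Python host are assumptions for illustration; this is not E's API and not the Scheme system being described here.

    # Object-capability in miniature: authority is conveyed only by holding a
    # reference, and all interaction goes through messages (method calls here).

    class Purse:
        def __init__(self, balance):
            self._balance = balance

        def balance(self):
            return self._balance

        def sprout(self):
            """Make an empty purse that can participate in transfers."""
            return Purse(0)

        def deposit(self, amount, source):
            """Pull `amount` out of a purse you hold a reference to."""
            if amount < 0 or source._balance < amount:
                raise ValueError("invalid transfer")
            source._balance -= amount
            self._balance += amount

    alice = Purse(100)
    payment = alice.sprout()
    payment.deposit(30, alice)   # Alice authorizes by using her own reference.
    # A third party handed only `payment` can accept the 30, but can never
    # reach `alice`, because it was never given that reference.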

Now, one of the more interesting parts comes in the sharding and replication of data and code, which is built into the runtime. Some parts will require declarative user specification (think Kubernetes, replicating a process that represents a web server to all machines in the cluster), and other parts will be handled autonomously by the runtime (think sharding a list that's become too big to fit on disk) using split/concatenation plans described either by the runtime or the user. For example, a user may have a custom data structure that requires special splitting.

This is all glued together by an optimizer/planner/scheduler that's meant to maximize the performance of processes running on the entire distributed system. Of course, this must be solved by an online decision algorithm similar to demand paging/the ski rental problem.

A lot of properties that you find in many different distributed systems fall out of this design for "free". And by eliminating a lot of the complications that have been built up and papered over in Linux and containers and orchestration and data management, etc., you arrive at the essential complexity of what's needed to build safe, secure, distributed systems.

Heck, I'm building a web crawler right now, and I salivate at the idea of having a system like this that would allow me to just build the damn web crawler and stop messing around in the complications.

P.S. I've glossed over A LOT of the details and it probably all sounds confusing. But that's cause I'm shoulder-deep in the details and I have yet to come up and think about the communication/messaging side of the product. I'd love to talk in more detail with anyone that's interested. Really excited about this space.

[1]: http://erights.org/talks/thesis/markm-thesis.pdf

[2]: ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1564.pdf

[3]: https://www.youtube.com/watch?v=ubaX1Smg6pY

I find this argument by Alan Kay to be the most eloquent one yet on this subject: https://youtu.be/ubaX1Smg6pY?t=5099

The whole talk is good.

One of my fav tech talks ever (and I watch a lot of tech talks) is Alan Kay's "Is it really 'complex'? Or, did we just make it 'complicated'?" It addresses your question directly, but at a very, very high level.

https://m.youtube.com/watch?v=ubaX1Smg6pY

Note that the laptop he is presenting on is not running Linux/Windows/OSX and that the presentation software he is using is not OpenOffice/PowerPoint/Keynote. Instead, it is a custom productivity suite called "Frank" developed entirely by his team, running on a custom OS, all compiled using custom languages and compilers. And the total lines of code for everything, including the OS and compilers, is under 100k LOC.

petra
I wonder why he chose to build demo applications instead of a powerful and useful development tool that has strong value somewhere?
corysama
Seems like the resources he had to work with in the VPRI project were pretty limited. It will be interesting to see what his team comes up with now that they are working with SAP and YC.

So far, I know about this: https://harc.ycr.org/project/

Hopefully, they're shooting for something like this: https://www.youtube.com/watch?v=gTAghAJcO1o

jon49
Something missing from this entire discussion is that developers have a hard time understanding what the best practices truly are for the product they are creating/maintaining. It is a bold assertion to say that everyone understands all the different nuances of creating software.
devaroop
Alan just complicated his own laptop by not using what is proven to work.
fsloth
No, he made a point, which no one would have believed without his example.

The whole vertical software stack sitting on top of the hardware of a PC is generally considered a massive towering beast with layers of abstractions, and armies of programmers needed to implement and maintain each layer. To say that this does not need to be so would be taken as theoretical, impractical nonsense without any proof. Which actually would be a valid position, because doing software is so hard that generally you can't guarantee something will work without actually doing it.

So, yes, to make that point, and to have it taken seriously, he really needed such an example.

GnarfGnarf
That is absurd. You can't write an OS in 100KLOC.
dreish
In case you're not joking, MINIX is much smaller than 100KLOCs.
srpeck
Check out stats on the kOS/kparc project: https://news.ycombinator.com/item?id=9316091

There is also a more recent example of Arthur Whitney writing a C compiler in <250 lines of C. Remarkable how productive a programmer can be when he chooses not to overcomplicate.

nickpsecurity
Everything I saw in the link looks like K, not C. Do you have a link to the C compiler done in C language?
abecedarius
http://www.projectoberon.com/ is released and small enough for a book in the dead-trees format.
fsloth
I can't understand why people don't refer more often to Mr. Kay's message. To be bluntly uncharitable and only half kidding, I do understand why consultants don't buy into it. Simpler systems that are less fragile mean less work.
cookiecaper
Employees and management don't buy into it for the same reason that consultants don't buy into it. It means less work. Less opportunity to sound smart, seize control, and/or ego stroke. Less variety to break up the work-week's monotony.
freehunter
Engineers don't buy into it because it's not cool. Complex systems are cool. It goes back to the phrase "well-oiled machine". Swiss clocks. People standing around a classic car with its hood open. A complex system of things working just perfectly is super cool, and fixing them when they break is a popular pastime.
cookiecaper
Yeah, this is what I'm getting at with the variety thing. I think that good talent tempers this tinkering impulse when a potential breakage could imperil production. They learn that as fun as complexity can be in the right context, having to lose a weekend staying awake until 5am on Saturday night/Sunday morning trying to fix something stupid cancels it out.

Having a lab and doing experimental stuff is great, but choosing to stake your company's products on it should be a much weightier consideration. In practice, we see that this weight is apparently not felt by many.

brwr
Ego is one of the larger problems I've seen over the years. This usually shows, as you mention, with someone trying to sound smart. The irony here is that I've consistently found -- both inside and outside the software industry -- that the smartest people in the room are the ones who can speak about complex topics in a simple way.
cookiecaper
It's a dilemma, because "it takes one to know one". While a few smart people in the workplace may be able to appreciate a brilliant dilution of an extremely complex topic into something approachable, most will not understand the starting complexity and just assume it's an approachable topic.

This is fine and everything, but it's bad self-promotion. If you want your bosses to give you a raise, you need them to think that you have a unique, difficult-to-acquire skillset and that it's worth going to lengths to keep you happy.

Unfortunately, modest behavior rarely results in recognition. Bombast is a very effective tool, and at some level, you always have to compete against someone.

brwr
Well, it depends. I personally don't feel the need to self-promote to get a raise. I'll probably lose out on a few raises or promotions because of that, but I make a good amount of money and I'm good at what I do. That's enough for me.
cookiecaper
This is a fine position to take, but it demonstrates one of our pervasive social problems. People with the humility, modesty, and judgment to make good decisions are frequently passed over because they don't feel the need to lead people along or "prove" their value, whereas clowns frequently realize they have nothing except the show and actively work to manipulate human biases in their favor so that they'll continue to climb the ladder. This works very well. The end result is that good people end up hamstrung by incompetent-at-best managers, and they can take down the ship.

The dilemma re-emerges as one asks himself whether it is right to sit by and allow the dangerously incompetent to ascend based on mind games.

My answer used to be "Yeah, I'll just go to a place where that doesn't happen". I no longer believe such places exist.

Dec 08, 2016 · 2 points, 0 comments · submitted by tosh
Is it really "Complex"? Or did we just make it "Complicated"? : https://www.youtube.com/watch?v=ubaX1Smg6pY
corysama
Btw: The next iteration of that project lives here: https://github.com/harc/ohm
Alan Kay is my favorite tech curmudgeon.

1) Alan Kay: Is it really "Complex"? Or did we just make it "Complicated" https://www.youtube.com/watch?v=ubaX1Smg6pY

Take note that he is not giving the talk using Windows & PowerPoint, or even Linux & OpenOffice. 100% of the software on his laptop is an original product of his group, including the productivity suite, the OS, the compilers and the languages being compiled.

2) Bret Victor: The Future of Programming https://www.youtube.com/watch?v=IGMiCo2Ntsc

voltagex_
Note that Alan Kay's talk is freely (?) downloadable at https://vimeo.com/82301919 (linked in the YouTube video)
In other words: "Is it Really Complex? Or, Did We Just Make it Complicated?" https://www.youtube.com/watch?v=ubaX1Smg6pY
Big systems growing over decades are usually easy targets for drastic simplifications by building upon better abstractions.

There's a great talk by Alan Kay titled 'Is it really "Complex"? Or did we just make it "Complicated"?' [1].

He shows, for example, the Nile system, where they replaced several tens of thousands of lines of vector-graphics rendering code with 450 lines. An example from a different context is the miniKanren / microKanren approach to logic programming. It's basically Prolog fitting on a single page [2].

If a similar simplification and unification could be accomplished for representations of (solid) 3D objects, that would be a great great result.

[1] https://www.youtube.com/watch?v=ubaX1Smg6pY [2] http://minikanren.org/
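
To give a flavor of the miniKanren / microKanren approach mentioned above, here is a minimal relational core in Python. It is an illustrative sketch only, not the canonical Scheme implementation from minikanren.org, and it omits the fair interleaving and reification that the real thing provides.

    # Logic variables are plain objects compared by identity.
    class Var:
        pass

    def walk(term, subst):
        """Follow a variable through the substitution to its current value."""
        while isinstance(term, Var) and term in subst:
            term = subst[term]
        return term

    def unify(a, b, subst):
        """Return an extended substitution if a and b unify, else None."""
        a, b = walk(a, subst), walk(b, subst)
        if a is b:
            return subst
        if isinstance(a, Var):
            return {**subst, a: b}
        if isinstance(b, Var):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return subst if a == b else None

    def eq(a, b):
        """Goal: succeeds when a unifies with b."""
        def goal(subst):
            s = unify(a, b, subst)
            return iter([s] if s is not None else [])
        return goal

    def disj(g1, g2):
        """Goal: either subgoal may succeed (no fair interleaving here)."""
        def goal(subst):
            yield from g1(subst)
            yield from g2(subst)
        return goal

    def conj(g1, g2):
        """Goal: both subgoals must succeed."""
        def goal(subst):
            for s in g1(subst):
                yield from g2(s)
        return goal

    q = Var()
    goal = conj(disj(eq(q, 1), eq(q, 2)), eq(q, 2))
    print([walk(q, s) for s in goal({})])  # -> [2]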

ddingus
Over very long time periods, we may see progress here, but inertia is huge. Take Parasolid. It's a very good kernel. Between it, Granite and the Dassault kernel, we have most of the world's geometry being handled on those kernels. Probably billions of man hours at this point.

Interoperability is basically terrible.

If it were me, I would see efforts to interoperate and to infer intent, features, etc. as the most productive task. Those kernels aren't going anywhere, nor are the geometry use cases getting easier.

A refactoring of all that is a seriously expensive, time-consuming task, and that body of data, intent, automation, etc. is still there to deal with.

It's hard to even think about an economically viable case.

There are some trying with primitive, open kernels, and various takes on forks of the established ones.

One vendor is dividing the kernel into well-defined pieces so that other software can operate in those gaps and do so efficiently.

I learned about it from Alan Kay's talk 'Is it really "Complex"? Or did we just make it "Complicated"?' [0]

Though the GH repo has not been updated for more than 3 years, and maybe it's just my inexperience or I am spoiled, but from the lack of proper documentation I have no idea how to set up and use this language. On what platforms does it work?

Also, looking at the examples, it looks somewhat APL'ish due to its use of special characters. How do I type them (and do it quickly) on a regular laptop?

[0] https://www.youtube.com/watch?v=ubaX1Smg6pY

e12e
> Though the GH repo has not been updated for more than 3 years, and maybe it's just my inexperience or I am spoiled, but from the lack of proper documentation I have no idea how to set up and use this language. On what platforms does it work?

Looks very interesting, it's a bit sad to see things like:

https://github.com/damelang/nile/issues/4

Mostly left hanging since this summer. I would love it if more of these code dumps from research projects at least included a section on how to get the examples running (and something on requirements, such as which OS it has been tested on, even if that turns out to be "Only tested on DOS 2.1").

TuringTest
Thanks for the link. I had problems understanding what Nile was all about from reading the slides, which provide very little in terms of context and explanations.

Watching that talk and hearing that Nile is a declarative streaming language from the research group where Bret Victor works, I now understand the practical applications of such a language.

sparkie
Here's a more interesting demo of Gezira: http://tinlizzie.org/~bert/Gezira.ogv
sparkie
Alan Kay talks about Gezira, a library built with Nile (and the motivation for making Nile, although Nile is much more general purpose). There's a JS version of Nile/Gezira which was used for Bret Victor's demonstrations, which can be found here [1].

The main Nile compiler here is written in Maru, a programming language of Ian Piumarta's which is another awesome project on its own. It's essentially a Lisp where you can define how the evaluator behaves for a given type. [2] The version of Maru used is included in the Nile repo, though. It can be built with GCC or MinGW. The compiler translates Nile to either Maru or C code.

There's a squeak version which can be found at [3]

And also a Haskell vector library which exposes a similar interface to Cairo, but is modeled as Gezira under the hood (used for purely functional image generation). Although this implementation is function based and doesn't really provide a proper streaming abstraction as provided by Nile. [4]

[1]: https://github.com/damelang/gezira/tree/master/js

[2]: http://piumarta.com/software/maru/

[3]: http://tinlizzie.org/updates/exploratory/packages/

[4]: https://hackage.haskell.org/package/Rasterific

> It feels closer to the reality of how a computer operates.

In an alternate reality, high-level languages would be wired directly into our "hardware", via microcode or FPGA's or what have you. Software systems would be designed first, then the circuitry. In this alternate reality, Intel did not spend decades doubling down on clock speed so that we wouldn't have time to notice the von Neumann bottleneck. Apologies to Alan Kay. [0]

We should look at the "bloat" needed to implement higher-level languages as a downside of the architecture, not of the languages. The model of computing that we've inherited is just one model, and while it may be conceptually close to an abstract Turing machine, it's very far from most things that we actually do. We should not romanticize instruction sets; they are an implementation detail.

I'm with you in the spirit of minimalism. But that's the point: if hardware vendors were not so monomaniacally focused on their way of doing things, we might not need so many adapter layers, and the pain that goes with them.

[0] https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s

david_ar
Exactly. If the world had standardised on something like the Reduceron [1] instead, what we currently consider "low-level" languages would probably look rather alien.

[1] https://www.cs.york.ac.uk/fp/reduceron/

dalke
Don't we have cases of this alternate reality in our own reality? Quoting from the Wikipedia article on the Alpha processor:

> Another study was started to see if a new RISC architecture could be defined that could directly support the VMS operating system. The new design used most of the basic PRISM concepts, but was re-tuned to allow VMS and VMS programs to run at reasonable speed with no conversion at all.

That sounds like designing the software system first, then the circuitry.

Further, I remember reading an article about how the Alpha was also tuned to make C (or was it C++?) code faster, using a large, existing code base.

It's not on-the-fly optimization, via microcode or FPGA, but it is a 'or what have you', no?

There are also a large number of Java processors, listed at https://en.wikipedia.org/wiki/Java_processor . https://en.wikipedia.org/wiki/Java_Optimized_Processor is one which works on an FPGA.

In general, and I know little about hardware design, isn't your proposed method worse than software/hardware codesign, which has been around for decades? That is, a feature of a high-level language might be very expensive to implement in hardware, while a slightly different language, with equal expressive power, might be much easier. Using your method, there's no way for that feedback to influence the high-level design.

gavinpc
I just wanted to thank you (belatedly) for a thoughtful reply. The truth is, I don't know anything about hardware and have just been on an Alan Kay binge. But Alan Kay is a researcher and doesn't seem to care as much about commodity hardware, which I do. So I don't mean to propose that an entire high-level language (even Lisp) be baked into the hardware. But I do think that we could use some higher-level primitives -- the kind that tend to get implemented by nearly all languages. Or even something like "worlds" [0], which as David Nolen notes [1, 2] is closely related to persistent data structures.

Basically (again, knowing nothing about this), I assume that there's a better balance to be struck between the things that hardware vendors have already mastered (viz, pipelines and caches) and the things that compilers and runtimes work strenuously to simulate on those platforms (garbage collection, abstractions of any kind, etc).

My naive take is that this whole "pivot" from clock speed to more cores is just a way of buying time. This quad-core laptop rarely uses more than one core. It's very noticeable when a program is actually parallelized (because I track the CPU usage obsessively). So there's obviously a huge gap between the concurrency primitives afforded by the hardware and those used by the software. Still, I think that they will meet in the middle, and it'll be something less "incremental" than multicore, which is just more-of-the-same.

[0] http://www.vpri.org/pdf/rn2008001_worlds.pdf

[1] https://www.recurse.com/blog/55-paper-of-the-week-worlds-con...

[2] https://twitter.com/swannodette/status/421347385915498496

davorb
Don't forget LISP machines! I think they might be the perfect example of what he was referring to.

https://en.wikipedia.org/wiki/Lisp_machine

dalke
Indeed!
Admittedly, this could be said to be a talk where he points out problems but doesn't show solutions.

Anyway, he has a talk here which at least shows some more recent examples of what he considers a better way: Is it really "Complex"? Or did we just make it "Complicated"?: https://www.youtube.com/watch?v=ubaX1Smg6pY

In it he talks about two programming languages, Nile and OMeta, which allow them to solve some problems in very few LOCs (among other things).

Mar 15, 2015 · delish on The Church of TED
Alan Kay has an apropos quote[0]:

> This [The fact that science is a human-driven, human-invented process] is hard to explain to K-8 science teachers, who think that science is a new religion with new truths to be learned. They think it's their job to dispense these catechisms. [emphasis added]

I think viewers of TED talks are looking to be told what to believe. I'm not at all moralizing--I sometimes watch TED talks--but I do think science-as-religion (a bunch of facts or truths to be memorized or internalized) is a distortion that "opiates the people," so to speak.

[0] Is it really complex? Or did we just make it complicated? https://www.youtube.com/watch?v=ubaX1Smg6pY

None
None
dbenhur
But I Fuckin' Love Science https://www.facebook.com/IFeakingLoveScience
bloaf
I think that in science there is an inherent tension between what scientists know and what the public knows.

Scientific knowledge is covered in asterisks. To read and understand science typically requires an extensive amount of background knowledge; without that it is very easy to misinterpret the strength, significance, or applicability of a scientific finding. Many of the asterisks are themselves scientific conclusions with their own set of asterisks.

The public, at least in areas which are not immediately relevant to daily life, cannot be assumed to have that background knowledge. Therefore, the tension is essentially this: How can scientists make people aware of (or interested in) scientific knowledge if they have to strip out all the asterisks when talking to them?

Making science into a catechism is simply one solution to the above problem. Ignore the asterisks and turn the fundamental findings of physics and chemistry into dogma and dispense it as gospel truth. I actually am somewhat o.k. with this. Sure, Joe Public might come away with an over-simplified and over-confident knowledge about what science says, but I think that a clumsy knowledge of scientific facts is better than the impression that scientific knowledge is arcane and out of reach. After all, it is highly unlikely that Joey P. will find himself in a situation where it is very important for him to really understand all the subtleties behind his k-8 scientific knowledge, but that knowledge is likely to come in handy.

The only problems with the science-as-religion solution are a potential "loss of faith" and the perception of arbitrariness with regard to scientific findings. The first can easily happen when someone learns about a new finding which contradicts the dogmatic version of science but is entirely consistent with the heavily asterisked actual scientific consensus. The second problem is basically "rejectability": if people think that science is a set of arbitrary rules, then they are free to reject them the same way they would reject other religions.

So how many bugs remain?

Mostly rhetorical question, but can any extrapolation be done? If you go back five years, can any of those numbers correlate to the findings since? Do any metrics such as cyclomatic complexity, #defects/kLoC[1][2], unit tests or code coverage help?

In most cases "defect" is not well-defined, nor in many cases easily comparable (e.g., a typo in a debug message compared to handling SSL flags wrong). Or is it a requirements or documentation bug, where the specification given to the implementer was not sufficiently clear or was ambiguous? Also, when do we start counting defects? If I misspelled a keyword and the compiler flagged it, does that count? Only after the code is committed? Caught by QA? Or after it is deployed or released in a product?

Is it related to the programming language? Programmer skill level and fluency with language/libraries/tools? Did they not get enough sleep the night before when they coded that section? Or were they deep in thought about 4 edge cases for this method when someone popped their head in to ask about lunch plans and knocked one of them out? Does faster coding == more "productive" programmer == more defects long term?

I'm not sure if we're still programming cavemen or have created paleolithic programming tools yet[3][4].

p.s.: satisfied user of cURL since at least 1998!

    [1] http://www.infoq.com/news/2012/03/Defects-Open-Source-Commercial
    [2] http://programmers.stackexchange.com/questions/185660/is-the-average-number-of-bugs-per-loc-the-same-for-different-programming-languag
    [3] https://vimeo.com/9270320 - Greg Wilson - What We Actually Know About Software Development, and Why We Believe It's True
    (probably shorter, more recent talks exist (links appreciated))
    [4] https://www.youtube.com/watch?v=ubaX1Smg6pY - Alan Kay - Is it really "Complex"? Or did we just make it "Complicated"?
    (tangentially about software engineering, but eye-opening for how much more they were doing, and with fewer lines of code) (also, any of his talks)
Alan Kay gave a talk that was recently discussed here on Hacker News titled: Is it really "Complex"? Or did we just make it "Complicated"?

https://www.youtube.com/watch?v=ubaX1Smg6pY&=

Someone asked Alan Kay an excellent question about the iPad, and his answer is so interesting, and reveals something very surprising about Steve Jobs losing control of Apple near the end of his life, that I'll transcribe here.

To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)

Questioner: So you came up with the DynaPad --

Alan Kay: DynaBook.

Questioner: DynaBook!

Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now. But does it tick you off that we can't even run Squeak on it now?

Alan Kay: Well, you can...

Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.

Alan Kay: Well, there's a variety of things.

See, I'll tell you what does tick me off, though.

Basically two things.

The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.

But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.

That couldn't be any more anti-personal-computing if you tried.

That's what ticks me off.

Then the lesser thing is that the user interface on the iPad is so bad.

Because they went for the lowest common denominator.

I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.

Because that's what Apple has catered to: they've catered to the absolute extreme.

But in between, people, you know, when you're two or three, you start using crayons, you start using tools. And yeah, you can buy a capacitive pen for the iPad, but where do you put it?

So there's no place on the iPad for putting that capacitive pen.

So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.

And again, who cares?

There's nothing wrong with having something that is brain dead, and only shows ordinary media.

The problem is that people don't know it's brain dead.

And so it's actually replacing computers that can actually do more for children.

And to me, that's anti-ethical.

My favorite story in the Bible is the one of Esau.

Esau came back from hunting, and his brother Joseph was cooking up a pot of soup.

And Esau said "I'm hungry, I'd like a cup of soup."

And Joseph said "Well, I'll give it to you for your birth right."

And Esau was hungry, so he said "OK".

That's humanity.

Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.

So you could blame the schools.

I really blame Apple, because they know what they're doing.

And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.

But I blame Apple more for that.

I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.

I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.

So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.

So people do crazy things like attaching things into mail.

But that's not the same as finding something via search in a web browser.

So I think it's just completely messed up.

You know, it's the world that we're in.

It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.

Not good enough to learn anything worthwhile.

Alan Kay talks about a text justification bug in Microsoft Word that's been around for 30 years. [1] Photoshop is many many millions of lines of code, making an API that covers all of it unlikely, and at best it is practical to make some subset of its procedures available from a command line.

In contrast, an AutoCad drawing is serializable to a text file of AutoCad commands.

At the core of the software architecture, it's analogous to REST versus SOAP soaked in kerosene with three decades of software evolution piled on top. Nobody loves AutoCad's DXF with deep abiding passion, but nobody would take "DXF is not my favorite file format" [2] rants in the source code seriously.

The point from AutoCad's <Point:> command is the same point that everything else is built on. It's bottom up. So in the end, a full blown command language for Photoshop would need a <Pixel:> command. And everything else, all the way up, would need to rely on calls to <Pixel:>. That's the level of granularity required to implement a command API like AutoCad's.

That's the joy of AutoCad's command line. It's programming with a REPL. Photoshop doesn't have one at its core.

[1]: https://www.youtube.com/watch?v=ubaX1Smg6pY&=

[2]: https://code.google.com/p/xee/source/browse/XeePhotoshopLoad...
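
To make the bottom-up point above concrete, here is a toy command language in Python in which every higher-level operation reduces to a single pixel primitive, and a drawing serializes to a replayable text script. This is a hypothetical illustration only; it is not AutoCad's or Photoshop's actual command set.

    class Canvas:
        def __init__(self):
            self.pixels = {}   # (x, y) -> color
            self.log = []      # textual command log, the "drawing file"

        def pixel(self, x, y, color):
            """The single primitive everything else is built on."""
            self.pixels[(x, y)] = color
            self.log.append(f"pixel {x} {y} {color}")

        def hline(self, x0, x1, y, color):
            """Higher-level command, defined purely in terms of the primitive."""
            for x in range(x0, x1 + 1):
                self.pixel(x, y, color)

        def replay(self, script):
            """The REPL side: re-run a serialized drawing."""
            for line in script.splitlines():
                cmd, *args = line.split()
                if cmd == "pixel":
                    self.pixel(int(args[0]), int(args[1]), args[2])

    c = Canvas()
    c.hline(0, 3, 0, "red")
    script = "\n".join(c.log)

    c2 = Canvas()
    c2.replay(script)
    assert c2.pixels == c.pixels   # the text script reproduces the drawing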

I just watched Alan Kay's talk about the same research:

https://www.youtube.com/watch?v=ubaX1Smg6pY

I'm happy to see this blog post today because it saves me from writing the same thing. Prototypes can be beautifully simple in ways that systems with a bunch of customers and a long legacy often can't...

gavinpc
I transcribed a part of that talk for another thread, where he mentions the 10KLOC system from Parc:

https://news.ycombinator.com/item?id=8860680

Here is Alan Kay from a talk that was just on HN today.

https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s

    The amount of complication can be hundreds of times more than the
    complexity, maybe thousands of times more.  This is why appealing to
    personal computing is, I think, a good ploy in a talk like this because
    surely we don't think there's 120 million lines of code—of *content* in
    Microsoft's Windows — surely not — or in Microsoft Office.  It's just
    incomprehensible.
    
    And just speaking from the perspective of Xerox Parc where we had to do this
    the first time with a much smaller group — and, it's true there's more stuff
    today — but back then, we were able to do the operating system, the
    programming language, the application, and the user interface in about ten
    thousand lines of code.
    
    Now, it's true that we were able to build our own computers.  That makes a
    huge difference, because we didn't have to do the kind of optimization that
    people do today because we've got things back-asswards today.  We let Intel
    make processors that may or may not be good for anything, and then the
    programmer's job is to make Intel look good by making code that will
    actually somehow run on it.  And if you think about that, it couldn't be
    stupider.  It's completely backwards.  What you really want to do is to
    define your software system *first* — define it in the way that makes it the
    most runnable, most comprehensible — and then you want be able to build
    whatever hardware is needed, and build it in a timely fashion to run that
    software.

    And of course that's possible today with FPGA's; it was possible in the 70's
    at Xerox Parc with microcode.  The problem in between is, when we were doing
    this stuff at Parc, we went to Intel and Motorola and pleading with them to
    put forms of microcode into the chips to allow customization and function
    for the different kinds of languages that were going to have to run on the
    chips, and they said, What do you mean?  What are you talking about?
    Because it never occurred to them.  It still hasn't.
EDIT added last para.
Jan 08, 2015 · 229 points, 158 comments · submitted by mpweiher
sivers
Very interesting related talk about complecting things - "Simple Made Easy" - by Rich Hickey, inventor of Clojure:

http://www.infoq.com/presentations/Simple-Made-Easy

If you're a Ruby person, maybe watch this version instead, since it's almost the same talk but for a Rails Conference, with a few references:

https://www.youtube.com/watch?v=rI8tNMsozo0

nilkn
I think there's an unfortunate trend throughout a lot of human activity to make things complicated. The prototypical example in my mind has always been the Monty Hall problem, where you often see convoluted explanations or detailed probabilistic calculations, but the solution is as plain as it can be when you consider that if you switch you win iff you chose a goat and if you stay you win iff you chose the car.

I've also always been a fan of Feynman's assertion that we haven't really understood something until it can be explained in a freshman lecture. At the end of his life he gave a now famous explanation of QED that required almost no mathematics and yet was in fact quite an accurate representation of the theory.
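
For anyone who wants to check the standard setup described above (the host always knowingly opens a goat door), a quick Monte Carlo sketch in Python; the function name and trial count are illustrative choices, not from any library:

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print("stay:  ", play(switch=False))   # converges to ~1/3
    print("switch:", play(switch=True))    # converges to ~2/3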

DanBC
I thought the point of the modern Monty Hall problem is to introduce students to probability and thus it's important for them to list the various probabilities. They're learning how to do it. Other people need to check the working to see if the student understands what to do.

Here's a nice example of this teaching: http://ocw.mit.edu/courses/electrical-engineering-and-comput...

jodrellblank
Car: switch and lose. Goat: switch and win. That's OK as far as it goes, but it's not enough explanation to cover why you should switch.

You need probability in the explanation because of the two goats/one car choice in part one .. and here I am trying to explain it again. You already know how it works, but you don't see that your explanation of it doesn't explain it.

After they removed one answer, you have more information...

Because the choice isn't balanced...

If it was as simple as you suggest, it wouldn't make a good part of a game show. Everyone would see straight through it.

Lost_BiomedE
It always seemed a bit easy to me without running through probabilities, due to the fact that the game show host was forced to choose a goat (which one differs depending on where the car is). The idea of a forced non-random choice is clearer to me than a probability explanation, but you need probability to demonstrate a proof.
sbov
Your explanation makes it sound like it's a 50/50 chance.
basch
There are two goats; you win if you pick either one first and switch.

The problem is deceptive because it doesn't explicitly say they ALWAYS remove a goat.

The answer becomes obvious when you ask "There are three doors, one has a prize. Would you like to select one door or two doors?"

marvy
Ok, that is the best Monty Hall explanation I've ever heard.
cbd1984
> The prototypical example in my mind has always been the Monty Hall problem, where you often see convoluted explanations or detailed probabilistic calculations, but the solution is just plain as it can be when you consider that if you switch you win iff you chose a goat and if you stay you win iff you chose the car.

The problem here is, when two people disagree, they both think the problem is simple and they both think their answer is trivial. The thing is, they haven't solved the same problem.

In the standard formulation, where switching is the best strategy, Monty's actions are not random: He always knows which door has the good prize ("A new car!") and which doors have the bad prize ("A goat.") and he'll never pick the door with the good prize.

If you hear a statement of the problem, or perhaps a slight misstatement of the problem, and conclude that Monty is just as in the dark as you are on which door hides which prize and just happened to have picked one of the goat-concealing doors, then switching confers no advantage.

A large part of simplicity is knowing enough to state the problem accurately, which circles back to your paragraph about Feynman: Understanding his initial understanding of QED required understanding everything he knew which led him to his original understanding of the problem QED solved; for his final formulation of QED, he understood the problems in terms of more fundamental concepts, and could therefore solve them using those same concepts.

nilkn
Actually, if Monty chooses randomly and just happens to pick a door with a goat you should still switch, because it's still true that you will win if you switch iff you chose a goat but you'll win if you stay iff you chose the car already. The host's method of selection is not relevant given a priori the observation that the host selected a goat. The host's method is relevant if we forgo that observation and iterate the game repeatedly.
None
None
david927
A doctor told a programmer friend once, "I have to fix problems that God gave us. You have to fix problems you give yourselves."
ploxiln
My mother, a family practitioner, sometimes gets a patient who is on 15 or so different medications, all prescribed by different specialists at different times. She whittles it down to 3 or 4, and they get a lot better.
adamnemecek
That doctor sounds a bit like an arrogant douche.
binarytrees
a tip of that hat
thirsteh
"It's not exactly brain surgery, is it?" https://www.youtube.com/watch?v=THNPmhBl-8I
_lce0
I wonder if the teams at NASA working on rocket science say "it's not rocket science, you know".
singlow
I have a relative who works for Raytheon as a physicist on rocket systems. Whenever he tells me about a dumb mistake he had to fix, he always uses that expression... followed by, "Oh wait, I guess it is."
lucozade
With my 2015 budget and 2015 requirements, I'm going to need a few miracles.
andrewchambers
That doctor is wrong. All human problems are caused by the constraints of our physical world.
dredmorbius
Especially gross oversimplifications of reality and absolute statements.
None
None
0xdeadbeefbabe
That's why God endows them with greater analytical prowess.
navan
In a way both are fixing bugs created by someone else.
a_dutch_guy
God on HN? Come on, the world isn't flat anymore.
marrs
Intolerance on HN? Apparently so.
justathrow2k
Oh, look, someone who has proven that God doesn't exist. Please, post the proof for all of us to see.
a_dutch_guy
I would rather turn it around. You prove God exists.
jodrellblank
You'll just have to take it on Faith.
justathrow2k
I'd agree, it takes faith to know that there was no creator of the world, the same faith it takes to know there was. Just to clarify, I'm not arguing there's proof that 'God' does exist, I'm arguing there's no proof that 'God' doesn't exist. The original comparison falls short, the world can be proven to be round, not flat.

edit: a thanks in advance to all you enlightened fervent atheists for the down votes.

None
None
cbd1984
Atheism isn't having faith there is no creator, it's having the intellectual curiosity to doubt the existence of a creator.

Edited to be more honest.

bbcbasic
I think you mean agnostic not atheism then.
cbd1984
No. I'm atheist, and that is my worldview. Saying I'm wrong about my own philosophy is nonsensical.

I'm also not alone: My definition of the word seems to be the most common one among the current atheist movement.

cbd1984
Agnosticism is a belief that we can't possibly know whether there's a supernatural. I don't hold that view. My view is a skepticism of the supernatural, which is atheism.
justathrow2k
It only takes courage for those who live in areas that are indoctrinated with dense religious ideologies, where they will suffer consequences for their actions. For most in the western world, it is not such a hard thing to identify oneself with.

edit: the original response claimed it took courage to doubt the existence of a creator. It has since been edited to claim it simply takes intellectual curiosity. I would argue that the same intellectual curiosity could have the end result of someone having a belief in a singular creator of reality.

cbd1984
> I would argue that the same intellectual curiosity could have the end result of someone having a belief in a singular creator of reality.

Possible, but not likely, given how many intellectually curious people have independently reinvented atheism but no religion has been independently reinvented.

marrs
It's far more likely to be the other way round. Atheism was only really formed about 300 years ago, and by that time the world had been explored and the printing press had been invented. In contrast, all of the ancient religions, formed in completely disconnected continents, had the notion of a creator at their centre.
cbd1984
> Atheism was only really formed about 300 years ago

This denies the existence of Ancient Greek atheists, and people who were skeptical of religion in general before the modern era.

justathrow2k
Religion and belief in a singular creator are not the same thing. Furthermore, I don't really believe you have any credible way of producing a valid number of 'intellectually curious' individuals who have made the choice to believe in a singular creator versus not.

edit: misunderstanding of mutually exclusive and added some more words.

cbd1984
Just to clarify, atheism isn't a faith claim, it's a skepticism of faith claims.
None
None
justathrow2k
Just to clarify, I never said atheism required faith. Please, read the comment you responded to.
cbd1984
That isn't how I, or any atheists I know, define atheism.
cbd1984
I don't know why people would downvote this.
jodrellblank
Possibly because you missed the sarcasm/joke nature of my comment and gave a serious reply. I wasn't defining atheism as faith in no-God.

I was digging at the idea that someone would accept a claim on Faith, then demand evidence before giving up the idea, and how that is inconsistent. If someone can faith into an idea without evidence it should be equally easy to faith out of it with no evidence.

That it's not easy, is interesting.

Nothing -> belief in God, costs little and gains much.

Belief in God -> no belief, costs much and gains little.

Where by costs and gains, I include social acceptance, eternal life, favourable relations with a powerful being, forgiveness of sins, approved life habits, connections to history and social groups, etc.

It's not a Boolean. Or at least, from inside a mind it doesn't feel like a Boolean.

justathrow2k
I feel like its important to point out that having faith in a creator is not exclusive to having a belief in things like eternal life, favourable relations with a powerful being, etc. etc. People seem to easily conflate faith in a creator with practicing a mainstream religion.

edit: I would also like to add, although I already said it, that I believe both of these opinions regarding the creator are acts of faith, to absolutely say there is not one and to absolutely say there is, and if someone were to have made a statement as if a creator's existence was a sure thing I would have called them out just the same.

justathrow2k
Just to clarify, the original response was to the original poster's claim that God does not exist. My request for him to present his proof to me was not meant as an open attack on atheism or even a critique of it; I was simply speaking to and about one person's beliefs. In any event, I begin to suspect I should have said nothing, with this being such an emotional issue for people.
jacquesm
You don't have to take that literally in order to understand it, but just in case you really missed the point: the doctor is essentially saying that he is dealing with problems that have no direct cause or architect that he can consult whereas the IT people have essentially only themselves to blame and have made the mess they're in (usually) rather than that it has been handed down to them through the mists of time or from above.

In a general sense that is true but in a more specific sense it is definitely possible to be handed a project without docs and horrible problems and it might as well be an organic problem. Even so, the simplest biology dwarfs the largest IT systems in complexity.

iamcreasy
> Even so, the simplest biology dwarfs the largest IT systems in complexity.

Well said.

a_dutch_guy
Of course I am not taking it literally. But he does mention God.

Being a technical guy Stephen Hawking makes much more sense to me than God.

dasil003
> Even so, the simplest biology dwarfs the largest IT systems in complexity.

And also mercifully in continuous real-world testing over time with no Big Rewrite to be seen. While an individual organism might have unsolvable problems, nature doesn't have the same tolerance for utterly intractable systems that we sometimes see in the software community.

0xdeadbeefbabe
My doctor, for one, would benefit more from fixing his own basic problems than from dwelling on God or what problems God has given us, but like most doctors he pretends to solve big problems while saying pretentious things.

I'm just saying you'd expect these doctor types to have the greater powers of analysis, but in my experience they just pretend harder.

[Edit to remove that off putting stuff]

None
None
randlet
To be fair, doctors are often fixing problems we've created ourselves too.
jaredsohn
But that's different than problems created by the doctor.

Edit: granted, this can still happen when self-medicating. The general case of doctors fixing the mistakes of doctors is different from what the parent suggests.

jfb
Not always.
None
None
ad_hominem
See also "No Silver Bullet — Essence and Accidents of Software Engineering"[1] by Fred Brooks wherein "essential complexity" vs "accidental complexity" is explored.

[1]: http://en.wikipedia.org/wiki/No_Silver_Bullet

TheBiv
I have made it to 31 min in, and I genuinely think he made this talk too complicated.
bbcbasic
Ha ha I skipped through until he started talking about code again. Go back and do that!
PuercoPop
This appears to be a presentation of part of the work of VPRI[0]; you should check it out, they made a whole OS and applications (word processor, PDF viewer, web browser, etc.) in under 20K lines of code. There is more info at: http://www.vpri.org/vp_wiki/index.php/Main_Page

Besides Nile, the parser OMeta is pretty cool too imho.

renox
> you should check it out, they made a whole OS and applications (word processor, PDF viewer, web browser, etc.) in under 20K lines of code.

Check it out? Where is the source code for the OS? I haven't found it in the link you gave..

3minus1
Alan Kaye seems to enjoy making asinine statements. "Teachers are taught science as a religion" "Romans had the best cement ever made" "We live in the century where the great art forms are the merging of science and engineering"

There's also a kind of arrogance when he shits on Intel and Dijkstra that I find off-putting.

david927
Kay, not Kaye. Roman concrete is universally considered superior to modern concrete.
3minus1
Somehow I'm not surprised that this is what turns up when I google Roman concrete:

> “Roman concrete is . . . considerably weaker than modern concretes. It’s approximately ten times weaker,” says Renato Perucchio, a mechanical engineer at the University of Rochester in New York. “What this material is assumed to have is phenomenal resistance over time.” http://www.smithsonianmag.com/history/the-secrets-of-ancient...

anfedorov
Only familiar with one of them, but "teachers are taught science as a religion" seems spot-on.
3minus1
I disagree. Teaching science as a series of facts is in no way akin to religious indoctrination.

Edit: And Kaye is also ignoring the fact that every high school science curriculum covers the scientific method and includes hands on experimentation.

e12e
You've never seen how students too often get better grades on "perfect" science experiments than on honest reports of "failures" (experiment differed from expected outcome)?

You must've been in an elite high school.

3minus1
That actually did happen to me sometimes. I don't think you can convince me that there's anything religious about it.
m_mueller
This talk blows everything out of the water. It's like the current culmination of the work from Douglas Engelbart to Bret Victor. Maybe, just maybe, there could be a rebirth of personal computing coming out of this research; it looks at least promising and ready for use, as demonstrated by Kay.
mpweiher
>It's like the current culmination of the work from Douglas Engelbart to Bret Victor.

This is no coincidence.

"Silicon Valley will be in trouble when it runs out of Doug Engelbarts ideas" -- Alan Kay

Bret works for Alan now, as far as I know.

m_mueller
I've noticed that in his talk. Easily looks like the most prestigious CS R&D lab in the world right now.
serve_yay
Well, we made it complicated because one must complicate pure things to make them real. I guess things would be better if our code ran on some hypothetically better CPU than what Intel makes. But "actually existing in the world" is a valuable property too.

Plato was pretty cool, though, I agree.

ozten
So the "frank" system, which he uses throughout the presentation is real enough to browse the internet and author active essays in. It is written in 1000x less code than Powerpoint running on Windows 7.
WoodenChair
People in this thread who seem to be asserting that building for the Web as opposed to native apps is an example of how things should be "less complex" perhaps have never written for both platforms... Also the entire argument completely misses the point of the lecture.
ankurdhama
I think those people are just suggesting that we should be writing apps, NOT an Android app, an iOS app, a Windows app, a Linux app, etc.
rimantas
We will do that once there is just one OS, not Android, iOS, Windows, Linux, etc. And no, the web does not fit as a substitute for a common OS.
samspot
15 minutes in, I am finding this talk to be slow and unfocused. I assume this is getting a lot of promoters because we all agree with the theme?
bbcbasic
Seeing the 1:42:51 length scared me at first. That's going to take me days to get through given how much free time I have per day! Anyway, the best bet is to skim through bits of it. It is all interesting and good, but I do prefer it when he talks about code.
yazaddaruvala
P.S. In Chrome I can play almost all YouTube videos at 2x speed. Most presentations are still very understandable. I've been doing this for a while; maybe start at 1.5x.

Anyways, just a tip. It's saved me hours and keeps slow speakers from getting boring.

dredmorbius
This is where I find 1) downloading the video and 2) playing it back using a local media player to be vastly superior to viewing online. I can squeeze the window down to a small size (400px presently), speed it up (150%), start and stop it at will, zoom to full screen, and _not_ have to figure out where in a stack of browser windows and tabs it is.

Pretty much the point I was making some time back where I got in a tiff with a documentary creator for D/L'ing his video from Vimeo (its browser player has pants UI/UX in terms of scanning back and forth -- the site switches videos rather than advances/retreats through the actual video you're watching).

And then as now I maintain: users control presentation, once you've released your work you're letting your baby fly. Insisting that people use it only how you see fit is infantile itself, and ultimately futile.

(Fitting with my Vanity theme above).

htor
This video is very interesting.

Very good point about the manufacturing of processors being backwards. Why _not_ put an interpreter inside the L1 cache that accommodates higher level programming of the computer? Why are we still stuck with assembly as the interface to the CPU?

frik
We already had Lisp running directly on a CPU: http://en.wikipedia.org/wiki/Lisp_machine

Java runs directly on hardware: http://en.wikipedia.org/wiki/Java_processor , http://en.wikipedia.org/wiki/Java_Card

Pascal MicroEngine: http://en.wikipedia.org/wiki/Pascal_MicroEngine

And some more: http://en.wikipedia.org/wiki/Category:High-level_language_co...

And the x86 instruction set is CISC, but (modern) x86 architecture is RISC (inside) - so your modern Intel/AMD CPU already does it. http://stackoverflow.com/questions/13071221/is-x86-risc-or-c...

Backward compatibility aside, we could have CPUs that support ANSI C or the upcoming Rust directly in hardware, instead of ASM.

vezzy-fnord
Don't forget the Burroughs B5000, from 1961: https://en.wikipedia.org/wiki/Burroughs_large_systems#B5000

The fact that we had an architecture running entirely on ALGOL with no ASM as far back as 54 years ago is quite astonishing, followed by depressing.

Animats
I know. I once took a computer architecture course from Bill McKeeman at UCSC, and we had an obsolete B5500 to play with. We got to step through instruction execution from the front panel, watching operands push and pop off the stack. The top two locations on the stack were hardware registers to speed things up.

An address on the Burroughs machines is a path - something like "Process 22 / function 15 / array 3 / offset 14". The actual memory address is no more visible to the program than an actual disk address is visible in a modern file system. Each of those elements is pageable, and arrays can be grown. The OS controls the memory mapping tree.

One of the things that killed those machines was the rise of C and UNIX. C and UNIX assume a flat address space. The Burroughs machines don't support C-style pointers.

e12e
Don't forget Transmeta. Unfortunately, they didn't succeed either. They did employ Linus Torvalds for a while, though.
valarauca1
This should answer your question, albeit satirically; the context is very solid [1]. Basically, like most forms of engineering, making processors is hard.

[1] https://www.usenix.org/system/files/1309_14-17_mickens.pdf

Animats
Because when DEC did that in the VAX 11/780, nobody used it. Instructions could be added to the microcode, which was loaded at boot time from an 8" floppy. The VAX 11/780 machine language was designed to allow clean expression of program concepts. There's integer overflow checking, a clean CALL instruction, MULTICS-style rings of protection, and very general addressing forms. It's a very pleasant machine to program in assembler.

Unfortunately, it was slow, at 1 MIPS, and had an overly complex CPU that took 12 large boards. (It was late for that; by the time VAXen shipped in volume, the Motorola 68000 was out.) The CALL instruction was microcoded, and it took longer for it to do all the steps and register saving than a call written without using that instruction. Nobody ever added to the microcode to implement new features, except experimentally. DEC's own VMS OS used the rings of protection, but UNIX did not. One of the results of UNIX was the triumph of vanilla hardware. UNIX can't do much with exotic hardware, so there's not much point in building it.

There's a long history of exotic CPUs. Machines for FORTH, LISP, and BASIC (http://www.classiccmp.org/cini/pdf/NatSemi/INS8073_DataSheet...) have been built. None were very successful. NS's cheap BASIC chip was used for little control systems at the appliance level, and is probably the most successful one.

There are things we could do in hardware. Better support for domain crossing (not just user to kernel, but user to middleware). Standardized I/O channels with memory protection between device and memory. Integer overflow detection. Better atomic primitives for multiprocessors. But those features would just be added as new supported items for existing operating systems, not as an architectural base for new ones.

(I'm writing this while listening to Alan Kay talk at great length. Someone needs to edit that video down to 20 minutes.)

ableal
The Data General MV-8000 (of Soul of a New Machine book fame) also had programmable microcode, if memory serves. The feature was touted, and there were supposed to exist a few brave souls, desperate for performance, who used it.
jamesfisher
Around 51:40 he talks about how semaphores are a bad idea, and says that something called "pseudotime" was a much better idea but which never caught on.

What is this "pseudotime"? Google turns up nothing. Am I mis-hearing what he said?

ecesena
My calculus professor once said that complex doesn't mean complicated, but rather something more like a grouping/set/combination. He deliberately avoided using those exact terms, since they all have specific meanings in maths (and I don't have a better translation from the Italian). I believe this is very true of complex numbers, and I try to keep the same distinction in real life as well. So, for me, complex means putting more things together in a smart way to arrive at a clearer -- not more complicated -- explanation or solution.
rndn
Around the 31 minute mark, shouldn't the gear and biology analogies be the other way around (biological systems being analogous to bloated software), or am I missing something?
rndn
Oh, the tactics should be realized by biological systems rather than by manual "brick stacking" of technical systems?
DonHopkins
Someone asked Alan Kay an excellent question about the iPad, and his answer is so interesting that I'll transcribe here.

To his credit, he handled the questioner's faux pas much more gracefully than how RMS typically responds to questions about Linux and Open Source. ;)

Questioner: So you came up with the DynaPad --

Alan Kay: DynaBook.

Questioner: DynaBook!

Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these tablet computers now.

But does it tick you off that we can't even run Squeak on it now?

Alan Kay: Well, you can...

Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's license.

Alan Kay: Well, there's a variety of things.

See, I'll tell you what does tick me off, though.

Basically two things.

The number one thing is, yeah, you can run Squeak, and you can run the eToys version of Squeak on it, so children can do things.

But Apple absolutely forbids any child from putting a creation of theirs to the internet, and forbids any other child in the world from downloading that creation.

That couldn't be any more anti-personal-computing if you tried.

That's what ticks me off.

Then the lesser thing is that the user interface on the iPad is so bad.

Because they went for the lowest common denominator.

I actually have a nice slide for that, which shows a two-year-old kid using an iPad, and an 85-year-old lady using an iPad. And then the next thing shows both of them in walkers.

Because that's what Apple has catered to: they've catered to the absolute extreme.

But in between, people, you know, when you're two or three, you start using crayons, you start using tools.

And yeah, you can buy a capacitive pen for the iPad, but where do you put it?

So there's no place on the iPad for putting that capacitive pen.

So Apple, in spite of the fact of making a pretty good touch sensitive surface, absolutely has no thought of selling to anybody who wants to learn something on it.

And again, who cares?

There's nothing wrong with having something that is brain dead, and only shows ordinary media.

The problem is that people don't know it's brain dead.

And so it's actually replacing computers that can actually do more for children.

And to me, that's anti-ethical.

My favorite story in the Bible is the one of Esau.

Esau came back from hunting, and his brother Joseph was cooking up a pot of soup.

And Esau said "I'm hungry, I'd like a cup of soup."

And Joseph said "Well, I'll give it to you for your birth right."

And Esau was hungry, so he said "OK".

That's humanity.

Because we're constantly giving up what's most important just for mere convenience, and not realizing what the actual cost is.

So you could blame the schools.

I really blame Apple, because they know what they're doing.

And I blame the schools because they haven't taken the trouble to know what they're doing over the last 30 years.

But I blame Apple more for that.

I spent a lot of -- just to get things like Squeak running, and other systems like Scratch running on it, took many phone calls between me and Steve, before he died.

I spent -- you know, he and I used to talk on the phone about once a month, and I spent a long -- and it was clear that he was not in control of the company any more.

So he got one little lightning bolt down to allow people to put interpreters on, but not enough to allow interpretations to be shared over the internet.

So people do crazy things like attaching things into mail.

But that's not the same as finding something via search in a web browser.

So I think it's just completely messed up.

You know, it's the world that we're in.

It's a consumer world where the consumers are thought of as serfs, and only good enough to provide money.

Not good enough to learn anything worthwhile.

scotty79
Do you know of any languages that you could use to specify precisely what a program should do?

Can you use that specification to verify if a program does what it should? I know BDD taken to extremes kinda does that.

But could a specification itself be checked -- that it's non-contradictory, and perhaps other interesting things, such as where its edges are and what it doesn't cover?

raspasov
People who enjoyed this talk should check this out: http://www.infoq.com/presentations/Simple-Made-Easy
tdicola
Wow, this is a really great presentation that resonates a lot with me.
stevebmark
I'm 15 minutes in and frankly I'm finding the video hard to follow. Is there an article that accompanies this that might help summarize his points?
andrey-g
What was he talking about at 51:45? Pseudo timer?
dgreensp
Pseudo-time -- I think he's talking about something called Croquet. There's a trail here: http://lambda-the-ultimate.org/node/2021/
e12e
TeaTime is very interesting: http://dl.acm.org/citation.cfm?id=1094861&dl=ACM&coll=DL&CFI...
Jugurtha
Still no love for Heaviside and Poincaré? :(
dang
Name recognition seems to encourage reflexive upvoting, which is one reason we often take names out of titles.

We've removed "Alan Kay" from this one. (Nothing personal—we're fans.)

Bootvis
To add a third reason: to save it and view it later.
bhuga
I always watch conference talks at double speed, speaker speed permitting. Alan Kay's voice works fine.

(But I didn't upvote, and your question is valid)

msutherl
I don't think I've ever upvoted after reading/watching what's been shared. I normally upvote, if at all, prior to clicking, otherwise I would never remember.
msane
There was an Alan Kay video on the front page yesterday.
Strilanc
Probably. Or they may have already seen it, or be able to tell from the first part that it deserves an upvote.

I'm not sure I have a problem with the behavior. The new links page cycles so fast that watching the whole video before upvoting would result in the post not making it to the front page.

zackmorris
One of the key points of the talk happens around the 32-37 minute mark, comparing the crew sizes of various ships. He makes the point that the core of development is really putting together the requirements, not writing code. Unfortunately we've put nearly all of the resources of our industry into engineering rather than science (tactics rather than strategies). What he stated very succinctly is that a supercomputer can work out the tactics when provided a strategy, so why are we doing this all by hand?

That’s as far as I’ve gotten in the video so far but I find it kind of haunting that I already know what he’s talking about (brute force, simulated annealing, genetic algorithms, and other simple formulas that evolve a solution) and that these all run horribly on current hardware but are trivial to run on highly parallelized CPUs like the kind that can be built with FPGAs. They also would map quite well to existing computer networks, a bit like SETI@home.
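
(As a rough illustration of what I mean by "simple formulas that evolve a solution" -- purely a toy sketch, not anything from the talk -- a generic simulated-annealing loop fits in a page of Python; the cost function and neighbour step below are placeholders you'd swap in for a real problem.)

    import math
    import random

    def anneal(initial, cost, neighbour, temp=10.0, cooling=0.995, steps=20000):
        """Generic simulated annealing: keep one candidate, sometimes accept a
        worse one so the search can climb out of local minima."""
        current, current_cost = initial, cost(initial)
        best, best_cost = current, current_cost
        for _ in range(steps):
            candidate = neighbour(current)
            candidate_cost = cost(candidate)
            delta = candidate_cost - current_cost
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current, current_cost = candidate, candidate_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
            temp *= cooling
        return best, best_cost

    # Toy use: find the x that minimises (x - 3)^2 by random walking.
    solution, score = anneal(
        initial=0.0,
        cost=lambda x: (x - 3) ** 2,
        neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    )
    print(solution, score)

The interesting part is that nothing in the loop knows anything about the problem; the "strategy" is entirely in the cost function, and the machine grinds out the tactics.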

Dunno about anyone else, but nearly the entirety of what I do is a waste of time, and I know it, deep down inside. That’s been a big factor in bouts with depression over the years and things like imposter syndrome. I really quite despise the fact that I’m more of a translator between the spoken word and computer code than an engineer, architect or even craftsman. Unfortunately I follow the money for survival, but if it weren’t for that, I really feel that I could be doing world-class work for what basically amounts to room and board. I wonder if anyone here feels the same…

heeen2
The comparison of the sailship, submarine and the airplane doesn't really hold, though.

A sailship travels for weeks or months, has not a single piece of automation and has to be maintained during the journey.

Similarly, the submarine is meant to travel for long periods of time during which it has to be operated and maintained.

Both of those are designed to be self-sufficient, so they have to provide facilities for the crew, and the crew grows because of required maintenance, medical needs, and battle stations.

The airplane travels for 16 hours tops and is supported by a sizable crew of engineers, ground control, and, for passenger planes, cooks and cleaning personnel, on the ground.

Animats
Right. Warships try to be self-sufficient and able to deal with battle damage, which means a large on-board crew. The biggest Maersk container ships have a crew of 22. Nimitz-class aircraft carriers have about 3,000, plus 1,500 in the air units. US Navy frigates have about 220.

If you want a sense of how many people are behind an airline flight, watch "Air City", which is a Korean drama series about a woman operations manager for Inchon Airport. The Japanese series "Tokyo Airport" is similar. Both were made with extensive airport cooperation, and you get to see all the details. (This, by the way, is typical of Asian dramas about work-related topics. They show how work gets done. US entertainment seldom does that, except for police procedurals.)

None of this seems to be relevant to computing.

lmm
Are those really a uniquely Asian approach? Because the description sounds rather like the (earlier) BBC series Airport.
jameshart
Drama vs documentary.
dperfect
> ...I’m more of a translator between the spoken word and computer code than an engineer, architect or even craftsman.

What are engineers, architects, and craftsmen but translators of spoken word (or ideas) to various media or physical representations?

One could argue that the greatest achievements of history were essentially composed of seemingly mundane tasks.

We'd all love to feel like we're doing more than that, perhaps by creating intelligence in some form. Maybe as software engineers we can create as you say "simple formulas that evolve a solution" - and those kinds of algorithms/solutions seem to be long overdue - I think we're definitely headed in that direction. But at some point, even those solutions may not be entirely satisfying to the creative mind - after all, once we teach computers to arrive at solutions themselves, can we really take credit for the solutions that are found?

If one is after lasting personal fulfillment, maybe it'd be more effective to apply his or her skills (whatever they may be) more directly to the benefit and wellbeing of other people.

flueedo
Yeah but being a translator is a different kind of activity from being an engineer, architect or craftsman. It feels to me that being a Software Engineer -- though currently I've been working with something else -- is kinda like (grossly simplifying it) a really complicated game of Lego or Tetris, where the 'pieces' (APIs, routines, functions, protocols, legacy code, etc, etc) don't always fit where/how they were supposed to, sometimes you can tweak them, but sometimes they're black boxes. Sometimes you have to build some pieces, after giving up on trying to make do with what you got. I think that's what the OP means by time wasted. We waste a lot of time getting the pieces to fit and (as much as we can verify) work as they should when we should be spending our time focusing on what structure we want built after all the pieces are in place.

Engineers, architects and craftsmen don't have to work with -- or waste anywhere near as much time on -- so many "pieces" (if any) that behave in such erratic ways.

bbcbasic
I agree. I felt like this within my first month of working professionally in 2001 and ever since.
innguest
Thank you for the apt metaphor. I often struggle with explaining how I feel about this but you nailed it.

We spend too much time getting these tiny pieces to connect correctly, and more often than not they're misaligned slightly or in ways we can't see yet (until it blows up). It's way too much micro for not enough macro. And there isn't even a way to "sketch" as I can with drawing - you know, loosely define the shape of something before starting to fill in with the details. I keep trying to think of a visual way to "sketch" a program so that it would at least mock-run at first, and then I could fill in the function definitions. I don't mean TDD.
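
The closest thing I've found to that kind of sketch is writing the top-level flow against stubs that return canned values, so the whole thing mock-runs end to end before any real detail exists. A minimal Python sketch, with every name invented for illustration:

    # Sketch the shape of the program first: every piece is a stub that returns
    # a plausible placeholder, so the whole flow runs today and gets filled in
    # piecemeal later.

    def fetch_orders(customer_id):          # stub: real version would hit a DB or API
        return [{"id": 1, "total": 40.0}, {"id": 2, "total": 15.5}]

    def apply_discounts(orders):            # stub: pass-through for now
        return orders

    def render_invoice(orders):             # stub: crude text output
        total = sum(o["total"] for o in orders)
        return f"{len(orders)} orders, total {total:.2f}"

    def main():
        orders = fetch_orders(customer_id=42)
        orders = apply_discounts(orders)
        print(render_invoice(orders))

    if __name__ == "__main__":
        main()    # mock-runs now; swap stubs for real implementations one by one

It's still text rather than the loose, visual kind of sketching I'd like, but at least the shape exists and runs before the details do.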

minthd
One way to partially do this is to use auto-generation as far as possible. Naked Objects (for complex business software) takes this to the extreme: given a set of domain objects, it creates the rest, including the GUI. Then if, for example, you want to customize the GUI, you can.

I think it even makes sense at the start to create incomplete objects, run the mock, and continue from there. Also, the authors of the framework claim that you work very close to the domain language, so you can discuss code with domain experts; in that sense you're working relatively close to the spec level.

The learning curve is a bit steep though; I think it could have been much more successful if it had used GUI tools to design the domain objects, and maybe a higher-level language.

tracker1
Sorry, but UML provides enough definition to accurately describe any process... you generally need two graphs for any interaction though. Most people don't think in these terms. Some can keep it in their heads, others cannot.

It's easier for most people to reason about a simple task. Not everyone can also see how that simple task will interact with the larger whole. I've seen plenty of developers afraid to introduce something as fundamental as a message queue, which is external to a core application. It breaks away from what they can conceptualize in a standing application.

Different people have differing abilities, talent and skills regarding thought and structure. Less than one percent of developers are above average through most of a "full stack" for any given application of medium to high complexity (percent pulled out of my rear). It doesn't mean the rest can't contribute. But it does mean you need to orchestrate work in ways that best utilize those you have available.

innguest
> Sorry, but UML provides enough definition to accurately describe any process...

Which is the opposite of sketching, which means (in drawing) to loosely define the basic shapes of something before accurately describing things with "enough definition".

We're not talking about the same thing at all and I can't relate to the rest of your post.

I'm talking about how there's no way to sketch a program that mock-runs and then fill in the details in a piecemeal fashion.

stuart78
As the son of an architect, I must somewhat dispute this characterization. There are tremendous constraints in that craft which are sometimes parallel to those of software development.

Budget is the first constraint, which limits the available materials and determines material quality.

Components and building codes are a second. While the 'fun' part of building design might be the exteriors and arrangements, a tremendous amount of the effort goes into binding these 'physical APIs' in a standards-compliant manner.

Finally, there is physics. This brings with it the ultimate and unassailable set of constraints.

I know I'm being a bit silly, but my point is that all crafts have frustrations and constraints, and all professions have a high (and often hidden) tedium factor resulting from imperfect starting conditions. These do not actually constrain their ability to create works of physical, intellectual or aesthetic transcendency.

ANTSANTS
Someday, mankind will give birth to an artificial intelligence far more sophisticated than anything we can comprehend, and our species will cease to be relevant. A very long time from then, the universe will become too cool to support life as we know it and our entire history will have been for naught. And long before any of that, you and I and everyone here will die and be completely, utterly forgotten, from the basement dwellers to the billionaires.

It's called an existential crisis. It's not a special Silicon Valley angst thing; everyone gets them. Try not to let it get to you too much.

thirdtruck
I feel exactly the same way. That's been a major driver behind my current career frustration, and one reason that I want to move away from full-time employment categorically.

So much of it comes down to focusing on everything but requirements. If someone asks for directions to the wrong destination, it doesn't matter how exceptional of a map you draw.

And, despite the importance of requirements, we often trust the requirements gathering tasks to non-technical people. Worse, in large corporate environments, the tasks are often foisted on uncommonly incompetent individuals. When I hear about "Do you want to ...?" popups that are supposed to only have a "Yes" button, I know something is fundamentally broken.

I haven't escaped the corporate rat race yet, but I'm taking baby-steps toward a more self-directed work-life. Here's hoping that such a move helps.

dredmorbius
nearly the entirety of what I do...

You are hardly alone. And while I suspect the PoV is prevalent in IT, it's hardly specific to it.

There's a guy who wrote a book about this so long ago that we don't remember his name. He's known only as "the preacher", sometimes as "the teacher" or "the gatherer".

It's Ecclesiastes, in the old testament of the Bible, and begins with "Vanity of vanities, saith the Preacher, vanity of vanities; all is vanity."

I'm quite an atheist, not religious, and generally not given to quoting scripture. I find this particular book exceptionally insightful.

Much of what you describe is in fact reality for most people -- we "follow the money for survival", but the purpose of it all quite frequently eludes me.

I started asking myself two or three years back, "what are the really big problems?", largely because I saw that in the areas where they _might_ be addressed, both in Silicon Valley and in the not-for-profit world, there was little real focus on them. I've pursued that question to a series of others (you can find my list elsewhere, though I encourage you to come up with your own set -- I'll tip you to mine if requested).

My interest initially (and still principally) has been focusing on asking the right questions -- which I feel a great many disciplines and ideologies have not been doing. Having settled on at least a few of the major ones, I'm exploring their dynamics and trying to come to answers, though those are even harder.

It's been an interesting inquiry, and it's illuminated to me at least why the problem exists -- that Life (and its activities) seeks out and exploits entropic gradients, reduces them, and either continues to something else or dies out.

You'll find that concept in the Darwin-Lotka Energy Law, named by ecologist Howard Odum in the 1970s, and more recently in the work of UK physicist Jeremy England. Both conclude that evolution is actually a process which maximizes net power throughput -- a case which if true explains a great many paradoxes of human existence.

But it does raise the question of what humans do after we've exploited the current set of entropy gradients -- fossil fuels, mineral resources, topsoil and freshwater resources. A question Odum posed in the 1970s and which I arrived at independently in the late 1980s (I found Odum's version, in Environment, Power, and Society only in the past year).

But yes, it's a very curious situation.

keithpeter
"...I’m more of a translator between the spoken word and computer code..."

At our current level of technology and business sophistication, that strikes me as a hard enough problem to justify a career with good compensation.

Any patterns you are noticing in the translation process?

zackmorris
I shouldn’t post this.

It’s hard for me to single out specific patterns because it feels like the things I’m doing today (taking an idea from a written description, PowerPoint presentation or flowchart and converting it to run in an operating system with a C-based language) are the same things I was doing 25 years ago. It might be easier to identify changes.

If there’s one change that stands out, it’s that we’ve gone from imperative programming to somewhat declarative programming. However, where imperative languages like C, Java, Perl, Python etc (for all their warts) were written by a handful of experts, today we have standards designed by committee. So for example, writing and compiling a C++ program that spins up something like Qt to display “Hello, world!” is orders of magnitude more complicated than typing an index.html and opening it in a browser, and that’s fantastic. But what I didn’t see coming is that the standards have gotten so convoluted that a typical HTML5/CSS/Javascript/PHP/MySQL website is so difficult to get right across browsers that it’s an open problem. Building a browser that can reliably display those standards responsively is nontrivial to say the least.

That affects everyone, because when the standards are perceived as being so poor, they are ignored. So now we have the iOS/Android mobile bubble employing millions of software engineers, when really these devices should have just been browsers on the existing internet. There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

I started writing a big rant about it being unconscionable that nearly everyone is not only technologically illiterate, but that their devices are unprogrammable. But then I realized that that’s not quite accurate, because apps do a lot of computation for people without having to be programmed, which is great. I think what makes me uncomfortable about the app ecosystem is that it’s all cookie cutter. People are disenfranchised from their own computing because they don’t know that they could have the full control of it that we take for granted. They are happy with paying a dollar for a reminder app but don’t realize that a properly programmed cluster of their device and all the devices around them could form a distributed agent that could make writing a reminder superfluous. I think in terms of teraflops and petaflops but the mainstream is happy with megaflops (not even gigaflops!)

It’s not that I don’t think regular busy people can reason about this stuff. It’s more that there are systemic barriers in place preventing them from doing so. A simple example is that I grew up with a program called MacroMaker. It let me record my actions and play them back to control apps. So I was introduced to programming before I even knew what it was. Where is that today, and why isn’t that one of the first things they show people when they buy a computer? Boggles my mind.

Something similar to that, which would allow the mainstream to communicate with developers, is Behavior-Driven Development (BDD). The gist of it is that the client writes the business logic rather than the developer, in a human-readable way. It turns out that their business-logic description takes the same form as the unit tests we’ve been writing all these years; we just didn’t know it. It turns out to be fairly easy to build a static mockup with BDD/TDD using hardcoded values and then fill in the back end with a programming language to make an app that can run dynamically. Here is an example for Ruby called Cucumber:

http://cukes.info

More info:

http://dannorth.net/introducing-bdd/

https://www.youtube.com/watch?v=mT8QDNNhExg

http://en.wikipedia.org/wiki/Behavior-driven_development

How this relates back to my parent post is that a supercomputer today can almost fill in the back end if you provide it with the declarative BDD-oriented business logic written in Gherkin. That means that a distributed or cloud app written in something like Go would give the average person enough computing power that they could be writing their apps in business logic, making many of our jobs obsolete (or at the very least letting us move on to real problems). But where’s the money in that?
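
To give a flavour of what that client-written business logic looks like, here is a made-up Gherkin scenario with one matching set of step definitions. This uses Python's behave rather than Ruby's Cucumber, assumes behave's standard features/ and features/steps/ layout, and every detail of the feature is invented; the Gherkin is shown as a comment above the step file for brevity.

    # features/reminders.feature  -- Gherkin, the part the client writes:
    #
    #   Feature: Reminders
    #     Scenario: Creating a reminder
    #       Given an empty reminder list
    #       When I add a reminder "buy milk"
    #       Then the reminder list contains 1 item

    # features/steps/reminders.py -- behave step definitions, the developer's part
    from behave import given, when, then

    @given('an empty reminder list')
    def step_empty_list(context):
        context.reminders = []

    @when('I add a reminder "{text}"')
    def step_add_reminder(context, text):
        context.reminders.append(text)

    @then('the reminder list contains {count:d} item')
    def step_check_count(context, count):
        assert len(context.reminders) == count

The feature file stays readable by the client, while each line maps mechanically onto an executable step, which is exactly the hand-off point I'm talking about.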

basch
>There was no need to resurrect Objective-C and Java and write native apps.

I still find it crazy that the world went from gmail/googlemaps to appstores. It's beyond me why most apps aren't anything more than cached javascript and uitoolkits.

I think one reason for this is that devices became personalized. You don't often go check your email on someone else's computer. People don't need the portability of being able to open something anywhere, so the cloud lost some of its appeal. Now it's just being used to sync between your devices, instead of being used to turn every screen into a blank terminal.

chipsy
I think there's an element of "pendulum swinging" to how we ended up back at native. The browser is a technology that makes things easy, more than it is one that makes things possible. And "mobile" as a marketplace has outpaced the growth of the Web.

So you have a lot of people you could be selling to, and a technology that has conceptual elegance, but isn't strictly necessary. Wherever you hit the limits of a browser, you end up with one more reason to cut it off and go back to writing against lower level APIs.

In due course, the pendulum will swing back again. Easy is important too, especially if you're the "little guy" trying to break in. And we're getting closer to that with the onset of more and more development environments that aim to be platform-agnostic, if not platform-independent.

heeen2
> I still find it crazy that the world went from gmail/googlemaps to appstores. It's beyond me why most apps aren't anything more than cached javascript and uitoolkits.

* because javascript is still not as efficient as native code

* because code efficiency translates directly into battery life

* because downloading code costs bandwidth and battery life

* because html rendering engines were too slow and kludgy at the time

* because people and/or device vendors value a common look and feel on their system

* because not everyone wants to use google services/services on US soil

basch
But in the javascript model, most of the processing is offloaded to the cloud when possible, making it MORE energy efficient. Ideally you are just transmitting input and DOM changes or operational transformations.

timesharing.

web ecosystems can still enforce common uitoolkits.

gmail was just an example.

detaro
As soon as we have reliable and energy efficient wireless connections actually available with perfect coverage, maybe.
seanflyon
The difference between JavaScript in the browser and native code has nothing to do with the cloud. Both can offload processing to remote servers.
e12e
Interestingly the Japanese smartphone market had most of these fixed by 1997 or so; defining a sane http/html subset (as opposed to wap). Less ambitious than (some) current apps - but probably a much saner approach. Maybe it's not too late for Firefox OS and/or Jolla.

[Edit: on a minimal micro-scale of simplification, I just came across this article (and a follow-up tutorial article) on how to just use npm and eschew grunt/gulp/whatnot in automating lint/test/concat/minify/process js/css/sass &c. Sort of apropos, although it's definitely in the category of "improving tactics when both strategy and logistics are broken": previous discussion on HN: https://news.ycombinator.com/item?id=8721078 (but read the article and follow-up -- he makes a better case than some of the comments indicate)]

stereosteve
Well now you have a new job writing regular expressions for Gherkin...
innguest
I want to subscribe to your newsletter where you write things you shouldn't write. I'm glad you wrote this. I'm not alone!
keithpeter
Definitely a blog theme.

Meta: I'm glad I asked about the patterns now. I had doubts about the extent to which asking that question would add to this thread.

eropple
> There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

It's even slower when you write it in Python or Lua. I've built an app heavily relying on both on iOS and Android and it's a pig. This was last year, too, and outsourcing most of the heavy lifting (via native APIs) out of the language in question.

"Low-level languages" (as if Java was "low level") exist because no, web browser just aren't fast enough. There may be a day when that changes, but it's not today.

SixSigma
When I got to that point ("All I ever do is write forms for editing databases") I switched to physical engineering - got qualified in Six Sigma, got qualified in Manufacturing Engineering and now I'm doing a degree in Supply Chain Management. Love it.
jpgvm
Software is a force multiplier.

If the initial force is headed towards uselessness, software won't save it from its fate; it will in fact just get it there faster.

By the same token if you manage to write some software that is magnifying a force of good the results can be staggering.

Software allows us to accomplish more net effect in our lifetimes than was ever possible before. However if we spend our time writing glorified forms over and over again we aren't going to live up to that.

Futurebot
I do often wonder how many people would do something with a great deal more positive impact (research, full-time work on free software, working on that crazy, but possibly actual world-changing idea/startup, whatever) if they didn't have to worry every day about survival, aging out of their field, being forced to run faster just to stay where they are, etc. One day perhaps we'll get to see what that looks like. For now, though, chasing the money is often the rational thing to do, even if that means building yet another hyperlocal mobile social dog sharing photo app.
mavdi
This is exactly how I feel. Even though my income has quadrupled in the last three years I haven't made a single positive impact to anyone's life using my skills. This was a very sad realisation that I've basically been reinventing wheels for a living and not really engineering anything.
jmickey
I find it hard to believe that someone would pay you a salary for three years to do something that is completely useless to everyone.

At the very minimum, the fruits of your labour are useful to your boss/customer. That is already a single positive impact in someone's life right there.

Most likely, your boss/customer then used your developments in their primary business, which, no doubt, affects even more people in a positive way.

I believe that, being in technology, we have the capability to positively affect many orders of magnitude more people than in most other industries. So cheer up! ;)

normloman
I'd caution against making the argument "If someone pays you, it's useful to somebody." It reminds me of the naive assumption of economists, that people are rational and seek to maximize utility. But exceptions easily come to mind.

People waste money all the time. Maybe the OP's boss paid him three years' salary to do something the boss thought was important, but which really just wasted time. And nobody knew it wasted time, because it was hard to measure.

im2w1l
Pursuing promising avenues is not wasted time, even if it ultimately comes to nothing.
normloman
I agree with you, but I've been in situations where management continues wasteful projects because they can't admit failure, can't define goals for the project, or throw good money after bad. I should have been more specific.

A personal example: At a small marketing agency, my boss insisted we needed to offer social media marketing, since "everyone's on facebook nowadays." Because we were posting on behalf of clients, we didn't have the expertise to write about the goings-on of their business, and often had no contacts in the client organization. We were forced to write pretty generic stuff ("happy valentine's day everybody!" or "check out this news article"). Worse, we had clients who shouldn't be on social media in the first place: plumbers, dentists, and the like. I have never seen anyone like their plumber on facebook. So naturally, these pages were ghost towns.

Yet despite having zero likes and zero traffic, clients insisted on paying someone for social media (They too had read that "everyone's on facebook") and my boss never refused a paying customer.

Want to know how it feels getting paid day in and day out to make crap content that nobody will read? Feels bad.

thirdtruck
I agree. You may be "helping" your manager, but that may mean helping them displace their political opponent for the sole purpose of taking more power for themselves. That, and I've seen far too many inane requirements and heard too many stories of code that was never used to remain especially optimistic.
eli_gottlieb
>Unfortunately I follow the money for survival, but if it weren’t for that, I really feel that I could be doing world-class work for what basically amounts to room and board. I wonder if anyone here feels the same…

Ok. So find where to do that, and do it. If you don't have children to support or other family to stick by, your only excuse is your fear of doing something too unconventional.

You want to do world-class work? Cool, let's talk about it. What's world-class to you? What would get you out of bed rather than give you an existential crisis?

zackmorris
For me, doing work that eliminates work is something to aspire to. It doesn't even matter if it's challenging. I would actually prefer to be down in the trenches doing something arduous if the end result is that nobody ever has to do it again. But that only happens in the software world. The real world seems to thrive on inefficiency, so much so that I think it's synonymous with employment. Automate the world, and everyone is out of work. I had a lot more written here but the gist of it is that I would work to eliminate labor, while simultaneously searching for a way that we can all receive a basic income from the newfound productivity. That's sort of my definition of the future, or at least a way of looking at it that might allow us to move on to doing the "real" real work of increasing human longevity or exploring space or evolving artificial intelligence or whatever.
eli_gottlieb
Well, there are lots of firms to work at whose primary goal is something that permanently eliminates work. Here's a list I just happened to have on hand: https://hackpad.com/Science-Tech-Startups-zSZ0KdT6Zk1
jes5199
Much of programming is mindless plumbing, but the problem of specifying what problem you're trying to solve will never go away. Focus your attention there.
stickperson
Doesn't that depend on where you work and what you're working on though?
tonyedgecombe
Most workplaces are broken in some way or other, eventually you either learn to grin and bear it or you find a way to escape the system.
innguest
Yes, I feel exactly the same. I ride the rollercoaster of depression and the impostor syndrome is definitely there. Most of what I do is mired in incidental complexities that have little to do with the task at hand. Programming in itself is simple because it's a closed system that you can learn and master. What makes it seem hard is having to manage the logistics of it with the crude tools we have (checking logs instead of having a nice graphical control panel for instance). I also feel like all the problems I have to solve are ultimately the same, even if in wildly different areas, because it's all about how do I put these function calls in the right order to get my result. I can never shake the feeling that there could be a generic-programming genetic algorithm that tries a bunch of permutations of function calls and the way the results are the arguments for the next calls, given a list of functions I know are necessary for the solution. I always feel like I am still having to do computer things in my head, while sitting in front of the computer.
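
Just as a toy sketch of that idea (brute-force rather than genetic, and purely illustrative), here is a search over orderings of known functions that tries to satisfy a few input/output examples:

    import itertools

    def compose(pipeline, value):
        """Apply a sequence of unary functions left to right."""
        for fn in pipeline:
            value = fn(value)
        return value

    def synthesise(functions, examples, max_depth=4):
        """Brute-force search for an ordering of known functions whose
        composition satisfies every (input, output) example. Toy only."""
        for depth in range(1, max_depth + 1):
            for pipeline in itertools.product(functions, repeat=depth):
                try:
                    if all(compose(pipeline, x) == y for x, y in examples):
                        return pipeline
                except Exception:
                    continue   # some orderings simply don't fit together
        return None

    # Known building blocks and a tiny spec: we want f(x) == (x + 1) * 2
    blocks = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
    spec = [(1, 4), (5, 12)]

    found = synthesise(blocks, spec)
    if found:
        print("found a", len(found), "step pipeline; f(10) =", compose(found, 10))
    else:
        print("no pipeline found")

Of course this explodes combinatorially for anything realistic, which is exactly why it feels like something to hand to a massively parallel machine rather than to my own head.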
david927
That's exactly it. I sometimes wonder how many brilliant people have left this industry because they were similarly enlightened. (One prominent person in applied computer science recently told me she was considering going into medicine because she wanted to do something useful.)

It doesn't have to be like this. We have the brain power. But I feel we squander it on making silly websites/apps when we could be making history.

jsprogrammer
I think the previous poster already identified the problem:

>I follow the money for survival

Maybe we can have a system where people can survive without money?

colund
Nice; however, that would remove an incentive for innovation.
Retra
Who paid you for your comment?

Take a look at this thread -- scroll all the way to the bottom. What is the incentive for all of that?

jsprogrammer
I don't think anything says you can't use another incentive.

Innovate or die (literally) doesn't seem like the wisest philosophy to base life on this planet around.

perfunctory
> incentive for innovation

Do you think Alan Kay was incentivised by money to do all the stuff he talked about in this video?

minthd
OK - so Alan Kay did work on creating this great system. Is there any knowledge from it used in a practical application today? Probably not, because

a. It's probably too soon.

b. It often takes boring work (and money) to transfer pure ideas into successful practical applications.

perfunctory
We were talking about incentives for innovation, not incentives for boring work. I agree that boring work should be incentivised by money.
jsprogrammer
I'd be interested in going the route of: boring work should be eliminated through automation or obsolescence.
dredmorbius
See Keith Weslowski's recent split.

I suspect a similar calculus may have inspired Joey Hess's retreat to a farm.

I can think of others: jwz, Bret Glass, Cringely.

scotty79
I know of one such person who, after a successful IT business career, moved to another country (mine) and became a neurosurgeon (at the absurdly low pay that doctors in my country get).
sukilot
You know what medical professionals do? Get patients healthy so the patients can enjoy apps and sites.
Jan 07, 2015 · 1 points, 0 comments · submitted by mpweiher
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.