HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
We Really Don't Know How To Compute!

Gerald Sussman · InfoQ · 240 HN points · 27 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Gerald Sussman's video "We Really Don't Know How To Compute!".
Watch on InfoQ [↗]
InfoQ Summary
Gerald Jay Sussman compares our computational skills with the genome, concluding that we are way behind in creating complex systems such as living organisms, and proposing a few areas of improvement.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 25, 2020 · 35 points, 3 comments · submitted by gjvc
rgbrenner
2011
thom
Anyone got a handy Christmas Day précis of this?
bob1029
This is pretty hard to distill into something manageable for a HN comment. There is much exploration of hypothetical programming models and some interesting arguments posed regarding the title of this presentation.

I would at least pop into the middle of the video for a few minutes to see if anything clicks. It's pretty esoteric stuff compared to what you would usually find on InfoQ.

Apr 03, 2020 · 1 point, 0 comments · submitted by dempedempe
Have you ever seen one of Professor Sussman's talks where he analyzes a circuit diagram?

For example, see Prof Sussman's 2011 StrangeLoop talk (circuit analysis example begins at ~25 min mark)...

We Really Don't Know How To Compute! https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

If you have the hardware schematic or the physical hardware and enough time, you can figure out what the hardware does. You can determine what its constraints are, and if you understand it well enough (for simplicity's sake, let's say you understand the hardware up to the level of the engineers who designed it), you can tell what the hardware system can and can't do and what type of codes are required to make the hardware work. You can tell at a low level what the GarageBand developers had to work with when they designed their app. And once you know the required codes, you can write software to generate the codes to make it work. And if you're really good and have the right tools, you can analyze the hardware and/or model the data flows to determine what the optimal data structures must be, based on the hardware capacity constraints and data flow.

Google "reverse engineering hardware chip circuits" or watch Ken Shirriff's 2016 Hackaday talk...

Reading Silicon: How to Reverse Engineer Integrated Circuits https://www.youtube.com/watch?v=aHx-XUA6f9g

On SICP and Lisp, I was recently asked:

    "Thanks a lot for this insightful reply! I've read about how 
    powerful are Lisp languages (for example for AI), my question is: 
    does Emacs really use all this theoretically powerful functionality 
    of these languages? In what way is this metalinguistic abstraction 
    used? In the built-in functions of Emacs, the powerful packages 
    made by the community, or the Elisp tweaking of a casual Emacs user 
    to customize it (or all three of those).

    I've read a lot of people praising and a lot of people despising 
    Elisp. Do these people who dislike Elisp do it because they want a 
    yet more powerful Lisp dialect (like Scheme) or because they want 
    to use a completely different language?

    PD: Excuse my ignorance, I'm still learning about programming. As a 
    side note, would you recommend me to read SICP if I just have small 
    notions of OOP with Python and Java and I want to learn more about 
    these topics? Will I be able to follow it?
Let me start from the end: Reading SICP changed everything I thought I knew about programming and shattered any sort of non-empirical foundation - that I had built up to that point - regarding how my mind worked and how I interfaced with reality. It's not just a book about programming; there are layers of understanding in there that can blow your worldview apart. That said, you do need to make an effort by paying attention when you go through the book and (mostly) doing the exercises. The videos on YouTube are also worth watching in parallel with reading the book. The less you know about programming when you go through SICP, the easier it will be for you to "get" it, since you'll have no hardwired - reinforced by the passage of time and investment of personal effort - prior notions of what programming is and how it should be done.

* Metalinguistic abstraction

Short answer: all three.

Long answer: The key idea behind SICP and the philosophy of Lisp is metalinguistic abstraction which can be described as coming up with and expressing new ideas by first creating a language that allows you to think about said ideas. Think about that for a minute.

It follows then that the 'base' language [or interpreter in the classical sense] that you use to do that should not get in your way, and must be primarily focused on facilitating that process. Lisp is geared towards you building a new language on top of it - one that allows you to think about certain ideas - and then solving your problems in that language. Do you need all that power when you're making CRUD REST apps or working in a well-trodden domain? Probably not. What happens when you're exploring ideas in your mind? When you're thinking about problems that have no established solutions? When you're trying to navigate domains that are fuzzy and confusing? Well, that's when having Lisp around makes a big difference, because the language will not get in your way and it'll make it as easy as possible for you to craft tools that let you reason effectively in said domains.
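
As a tiny sketch of that idea in Scheme (a hypothetical micro-example, not from the talk): a macro grows the base language with a 'while' construct it doesn't have, and from then on you think in terms of the new construct:

    ;; extend the language itself with a new control construct
    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (when test body ... (loop))))))

    ;; `while` now reads like a native form
    (define i 0)
    (while (< i 3)
      (display i) (newline)
      (set! i (+ i 1)))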

Let's use Python as an example since you mentioned it. Python is not that language, since it's very opinionated and constrained by its decisions in the design space and, additionally, has been deliberately created with entirely different considerations in mind (popular appeal). This is very well illustrated by the idiotic Python motto "There's only one way to do it", which in practice isn't even the case for Python itself. A perfect example of style over substance, yet people lap it up. You can pick and choose a few features that superficially seem similar to Lisp features, but that does not make Python a good language for metalinguistic abstraction. This is a classic example of the whole of Lisp being much more than the sum of its parts; in reality, languages like Python don't even do a good job of reimplementing some of these parts. This is the reason I don't want to just list a bunch of Lisp features that factor into metalinguistic abstraction (e.g. macros and symbols).

* Feedback loops

The other key part of Lisp, and also something expressed fully by the Lisp machines, is the notion of a cybernetic feedback loop that you enter each time you're programming. In crude, visual terms:

[Your mind - Ideas] <--> Programming Language <--> [Artifact-in-Reality]

You have certain ideas in your mind that you're trying to manipulate, mold and express through a programming language that leads to the creation of an artifact (your program) in reality. As you see from my diagram, this is a bidirectional process. You act upon (or model) the artifact in reality but you're also acted upon by it (iterative refinement). The medium is the programming language itself. This process becomes much more effective the shorter this feedback loop gets. Lisp allows you to deliberately shorten that feedback loop so that you _mesh with your artifact in reality_. Cybernetic entanglement if you will. Few other languages do that as well as Lisp (Smalltalk and Forth come to mind). Notice that I emphasized your mind and reality/artifact in that previous diagram, but not the medium, the programming language. I did that in order to show that the ideal state is for that programming language not to exist at all.
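
Concretely, shortening the loop looks something like this at a Lisp REPL (a sketch; any Scheme or Emacs Lisp session behaves roughly this way): you redefine parts of the live system and act on the result immediately, with no compile/restart cycle between idea and artifact:

    > (define (greet name) (string-append "hello, " name))
    > (greet "world")
    "hello, world"
    > (define (greet name) (string-append "hi, " name)) ; redefined live
    > (greet "world")
    "hi, world"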

* Differences between Lisps

All Lisps allow you to express metalinguistic abstraction (they wouldn't be Lisps otherwise). Not all Lisps allow you to shorten the feedback loop with the same efficiency.

The Lisps that best do the latter come out of the tradition of the Lisp machines. Today this means Common Lisp and Emacs Lisp (they're very similar, and you can get most of what Common Lisp offers at the language level in Emacs Lisp today). For that reason, I don't think Scheme is more powerful than Emacs Lisp, since Scheme lacks the focus on interactivity and is very different from both CL and Emacs Lisp.

As far as other people's opinions go, my rule of thumb is that I'd rather educate myself about the concepts and form my own opinions than blindly follow the herd. Which is why I also think that people who are sufficiently impressed by an introduction to Lisp (such as the OP article) to want to learn it and ask "Which Lisp should I learn? I want something that is used a lot today" are completely missing the point. You'll notice that most programming today is done for money, in languages that are geared towards popularity and commoditization. For me, programming is an Art or high philosophy if we take the latter to stand for love of wisdom. And as someone has said, philosophical truths are not passed around like pieces of eight, but are lived through praxis.

P.S. The talk by Gerry Sussman (https://www.infoq.com/presentations/We-Really-Dont-Know-How-...) that I saw mentioned in HN yesterday provides an excellent demonstration of metalinguistic abstraction and also serves as an amalgamation of some of my other ideas about Lisp.

pjmlp
What I love about Lisp, ML and logic languages is how they come down to the basics of CS, Algorithms + Data Structures.

Ideally the same solution can then be applied to a single core, hybrid CPU + GPU, clustered environments, whatever.

Yes, abstractions do still leak, especially if optimizing for the very last byte/ms is required, but productivity is much higher than if that were a concern for each line of code being produced.

And 90% of the time the solutions are good enough for the problem being solved.

Here is the expanded URL of the tinyurl link, to halve the chances of getting a 404 or similar in the far future, and maybe allow finding the content via the Wayback Machine in case InfoQ also disappears.

https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

https://www.infoq.com/presentations/We-Really-Dont-Know-How-... is the version of the talk I've seen. Highly recommended to everyone.

I have yet to get his classical mechanics book. Gonna have to pull the trigger on that soon.

wrycoder
I haven't had a chance to ask him why he didn't cover special relativity. Seemed like a great fit for a computation-based course.

Edit: Thanks for that - it's more detailed than the talks I attended (LibrePlanet).

Jul 12, 2017 · 4 points, 0 comments · submitted by tosh
You're all over the place here and your arguments make no sense whatsoever when examined.

First, you conflate mass appeal with some sort of objective "better" criterion, which is of course bonkers. To use one of your own examples against you, there are hundreds of thousands of Java monkeys out there using glue other people made to tie together rocks to build stone walls, which fail as soon as the weather stops being nice. Security (you should look into Java deserialization bugs), reliability, performance: what do they know about any of these things?

Second, you conflate late binding as present in Lisp and Smalltalk with the late binding present in other dynamic languages. The two are not equivalent; a perfect example of the whole being greater than the sum of its parts.

Lisp and Smalltalk will never become popular (read my previous comment), but that does not mean that they do not sit on an apex and still have a lot to give. To anyone interested in the "craft of programming", "the Art", there is nothing better, period. Here are some references for you, from the masters themselves:

[1] https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

[2] https://www.youtube.com/watch?v=YyIQKBzIuBY

[3] https://www.youtube.com/watch?v=FvmTSpJU-Xc

My favorite thing, having finally taken a dive into Lisp and SICP, is not that it is very friendly to functional programming. It is that it is very friendly to showing how it all works.

My favorite section is where they go over making a constraint-based system that will either calculate degrees F or degrees C, or validate that the two given values are consistent, all depending on what you have entered. And this is done from the ground up. There is no hidden magic of the system supporting you, by and large. (Not strictly true, as this does make use of GC and such throughout.)
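
For reference, the heart of that section (SICP 3.3.5) is this network, built from the make-connector, adder, multiplier, and constant primitives the book develops; set either connector and the other is deduced:

    ;; 9C = 5(F - 32), wired as a bidirectional constraint network
    (define (celsius-fahrenheit-converter c f)
      (let ((u (make-connector)) (v (make-connector))
            (w (make-connector)) (x (make-connector))
            (y (make-connector)))
        (multiplier c w u)   ; c * 9 = u
        (multiplier v x u)   ; v * 5 = u
        (adder v y f)        ; v + 32 = f
        (constant 9 w)
        (constant 5 x)
        (constant 32 y)
        'ok))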

If you hadn't seen it, https://www.infoq.com/presentations/We-Really-Dont-Know-How-... is a great video showing Sussman's style. He also showcases a program they wrote that could solve circuits based on this general idea.

mrcsparker
Thank you so much for linking this video. I had no idea it existed. Watching that lecture was so much fun.

I wish that there was a camera pointing to the crowd when he said the more controversial things.

I didn't know that Sussman hired Stallman. Their work changed our lives.

gavinpc
This is excerpted (as the "technology" segment) in another great video, Bret Victor's "The Humane Representation of Thought" (at about thirty minutes in).

https://vimeo.com/115154289

Thanks. You might also like this talk by Edward Kmett on propagators (not a talk on sheaf theory, but it touches on similar ideas). I have been thinking about how to unify the ideas in sheaf theory with propagators and would be curious if others see similar connections:

Edward Kmett on Propagators https://www.youtube.com/watch?v=DyPzPeOPgUE

"Propagators" as introduced by Sussman et al. from MIT a few years back:

The Art of the Propagator http://groups.csail.mit.edu/mac/users/gjs/propagators/

We Really Don't Know How to Compute https://www.infoq.com/presentations/We-Really-Dont-Know-How-... (Sussman at StrangeLoop)

Constraints and Hallucinations: Filling in the Details https://www.youtube.com/watch?v=mwxknB4SgvM (Sussman at Google)
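
For a feel of the model, here is a heavily simplified sketch of cells and propagators in Scheme (hypothetical names; the real Radul/Sussman system adds partial information and dependency tracking): cells accumulate content, propagators wake when their inputs fill in, and constraints run in whichever direction has data:

    ;; minimal cells-and-propagators sketch
    (define (make-cell)
      (let ((content 'nothing) (watchers '()))
        (lambda (msg . args)
          (case msg
            ((content) content)
            ((add!)
             (when (eq? content 'nothing)
               (set! content (car args))
               (for-each (lambda (run!) (run!)) watchers)))
            ((watch!)
             (set! watchers (cons (car args) watchers))
             ((car args)))))))

    (define (propagator inputs output f)
      (define (run!)
        (let ((vals (map (lambda (cell) (cell 'content)) inputs)))
          (unless (memq 'nothing vals)
            (output 'add! (apply f vals)))))
      (for-each (lambda (cell) (cell 'watch! run!)) inputs))

    ;; c = a + b, usable in any direction: set any two cells
    ;; and the third is deduced
    (define (sum a b c)
      (propagator (list a b) c +)
      (propagator (list c a) b -)
      (propagator (list c b) a -))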

Can you name a single, widely used (outside the Haskell camp) opensource application written in Haskell?

I can only think of "darcs", which for all intents and purposes was a failure. IIRC, one of the darcs retrospectives I've read pointed out that GHC runtime behavior, with its hard-to-foresee and hard-to-measure complexity properties, was a problem.

Moreover, another high-profile failure that springs to mind, is Joel Reymont (wagerlabs.com) trying to write a high-performance poker server in Haskell and eventually abandoning it in disgust due to non-predictable behavior of the GHC runtime. He delivered it (with assorted paeans) in Erlang instead, which proved quite lucrative for him.

I used Haskell for 2 years during my undergraduate degree, for program analysis. It was a good choice for such a theoretical domain. Some time later, I had another look at applying it to more practical problems. Besides the slight turn-off due to the cultish behavior I got from the community itself - which is also obvious in this thread, where many posts excuse the lack of documentation or posit it as not a problem (!?), if not outright exalt it! - the language felt completely sterile and, most importantly, not fun. The syntax is also atrocious.

I concluded that if static typing was needed, I would pick ML every single time over Haskell. I also have no affinity with Haskell's type system (in the sense that it leads to better programs) or its interactive nature (laughable compared to Lisp/Smalltalk). I also remember something Sussman said, "Haskell is the most advanced of the obsolete languages" [1], and can't help but chuckle.

[1] https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

greenrd
Darcs was mostly a failure because no one, not even the original author, could satisfactorily explain its patch model, whereas git's patch model is simple and easy to understand. That has nothing to do with Haskell.
vanessa98
Oh man, please don't ever use Joel Reymont as a language reference - his noisy, uninformed, continuous churn through various programming languages was exhausting back in the day. Just be thankful the hack has gone quiet.
PopsiclePete
The only piece of Haskell I rely on is Pandoc. Which is awesome. But it's hardly a shining example of "Look what you can do in Haskell that you can't do in X!"
codygman
https://github.com/Microsoft/bond
sjakobi
> Can you name a single, widely used (outside the Haskell camp) opensource application written in Haskell?

How about pandoc, PostgREST, git-annex, the Elm compiler?

You can find more examples here: https://github.com/Gabriel439/post-rfc/blob/master/sotu.md

sjakobi
What's the reason for the downvotes?
Nov 28, 2016 · Animats on Thought as a Technology
This is interesting, but combines about five ideas. Expanding any one of those would be useful. The notion of a "transformative interface" seems to combine two concepts - "Wow factor" and "representation that yields insight". Those are different. The latter is more useful (but the former is more profitable.) Feynman diagrams come to mind.

Sussman's talk on how to think about circuits is here.[1] Here are the slides.[2] The video shows Sussman's talking head and him pointing at an off-screen display of the slides, which is not too helpful.

[1] https://www.infoq.com/presentations/We-Really-Dont-Know-How-... [2] http://web.mit.edu/xtalks/Sussman-xTalk-3-2-16.pdf

chewxy
Often though, representations that yield insight have some wow factor to them, especially if the world hasn't seen them before.

Between Nielsen and Bret Victor, I'm kinda jealous of what my future kids can learn with much more ease.

https://www.infoq.com/presentations/We-Really-Dont-Know-How-... is by far my favorite technical talk right now.

Sussman goes over some interesting ideas on the provenance of calculations and asserts that "exact" computation is possibly not worth the cost.
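
A toy sketch of what provenance-carrying values can look like (a hypothetical shape, far cruder than the dependency-tracked values in the talk): every value drags its supporting premises along, so an answer can say where it came from:

    ;; a value bundled with the premises that support it
    (define (supported value premises) (cons value premises))
    (define (value-of s) (car s))
    (define (premises-of s) (cdr s))

    ;; arithmetic that merges provenance as it computes
    (define (sup+ a b)
      (supported (+ (value-of a) (value-of b))
                 (append (premises-of a) (premises-of b))))

    (sup+ (supported 3 '(sensor-1)) (supported 4 '(sensor-2)))
    ;; => (7 sensor-1 sensor-2)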

We Really Don't Know How To Compute! [0] is probably my top... next to the Christmas Tree lectures.

[0] https://www.infoq.com/presentations/We-Really-Dont-Know-How-...

Yes and yes. Felleisen: https://www.youtube.com/watch?v=JBmIQIZPaHY, Sussman: http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...
willtim
Maths will never be obsolete. It's the safest bet if you value your time.
dkarapetyan
I agree. Sussman is a mathematician and Felleisen is a programming language researcher, which is a close enough field to be considered math.
Jan 10, 2016 · 14 points, 0 comments · submitted by dkarapetyan
Constraint-based things to check out:

1. Sussman's talk "We Really Don't Know How to Compute" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...) and his work on the Propagator (https://github.com/ProjectMAC/propagators).

2. Gremlin Graph Traversal Machine (http://arxiv.org/pdf/1508.03843v1.pdf, http://www.datastax.com/dev/blog/the-benefits-of-the-gremlin...)

3. Clojure core.logic (https://github.com/clojure/core.logic) and core.match (https://github.com/clojure/core.match)

4. Yedalog: Exploring Knowledge at Scale (http://research.google.com/pubs/pub43462.html, https://www.youtube.com/watch?v=SP9zS43FRzQ)

5. Datomic Datalog (http://docs.datomic.com/query.html)

Prof Carlsson's (http://math.stanford.edu/~gunnar/) "Topology and Data" paper provides a good overview: http://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01...

My recent dive into the literature has enlightened my thinking in terms of database and systems design. It has led me to think more in terms of properties, invariants, intervals, constraints, and dynamic fluidity -- "there are no things" (only actions and properties): https://edge.org/response-detail/11514

Maybe the antiquated abstractions we have been using for database systems are what limit us. Maybe we need to stop thinking in terms of things -- objects, partitions, and static state -- and start thinking in terms of millions of fluid dynamic processes. Maybe Jim Starkey is on the right track: http://www.nuodb.com/about-us/jim-starkey.

Sussman seems to be converging there too -- see his talk "We Really Don't Know How to Compute" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...) and his work on the Propagator (https://github.com/ProjectMAC/propagators).

The rapid flow of data stressing our system designs is making this more apparent, and we're starting to see stream processing systems emerge, like Google Dataflow and Apache Flink. Ideas from functional programming and immutable state are looking more prescient. Now our database management systems need to evolve.

  "At no period in human culture have men understood the     
  psychic mechanisms involved in invention and technology. 
  Today it is the instant speed of electric information that, 
  for the first time, permits easy recognition of the 
  patterns and the formal contours of change and development. 
  The entire world, past and present, now reveals itself to 
  us like a growing plant in an enormously accelerated movie. 
  Electric speed is synonymous with light and with the 
  understanding of causes."

  — Marshall McLuhan, Understanding Media: The Extensions of Man (1964)
'okram's recent paper provides a new graph-based model for stateless functional flows that could be applied in other systems: See "Quantum Walks with Gremlin" (http://arxiv.org/pdf/1511.06278v1.pdf)

And Vladimir Kornyak touches on some of these ideas in these papers:

1. On Compatibility of Discrete Relations (2005) http://arxiv.org/pdf/math-ph/0504048.pdf

2. Structural and Symmetry Analysis of Discrete Dynamical Systems (2010) http://arxiv.org/pdf/1006.1754.pdf

3. Discrete Dynamical Models: Combinatorics, Statistics and Continuum Approximations (2015) http://mmg.tversu.ru/images/publications/2015-vol3-n1/Kornya...

pramodliv1
It's quite clear from the Ayasdi website that they're targeting enterprise customers. I wish they had an API similar to clarif.ai or wit.ai.
espeed
Javaplex: Persistent Homology and Topological Data Analysis Library (http://appliedtopology.github.io/javaplex/) -- primarily developed by the Computational Topology workgroup at Stanford.
pramodliv1
Thank you!
xtacy
Thanks for references.

I've read the topology and data paper; it lays down motivations for TDA, but it doesn't quite connect it to the existing literature on dimensionality reduction and manifold learning, or explain: "Here's something you can learn by using tools from TDA, but not existing methods." The best I could see was that it produces results similar to existing methods.

See Sussman's talk at Strange Loop 2011: "We Really Don't Know How To Compute!" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...)

Thomas Kristensen is working on implementing the propagator in Clojure (https://github.com/tgk/propaganda) -- you can see his ClojureConj talk here: https://www.youtube.com/watch?v=JXOOO9MLvhs

I don't know enough to weigh in on whether a person is just a gig. I'm just going off of this[1] presentation.

[1] http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

Here's the talk[0] for anyone wondering; I'm watching it now.

Thanks for referencing it, glad I watched it.

Edit: So far this is a talk comparing computer systems to the human genome in a somewhat abstract manner. Sussman is the co-creator of Scheme and a co-author of SICP.

[0]-http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

Thomas Kristensen recently released a Propagator implementation for Clojure (https://github.com/tgk/propaganda) -- Propagators as in Sussman's paper "Art of the Propagator" and his StrangeLoop talk "We Really Don't Know How to Compute" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...).

"Art of the Propagator": http://web.mit.edu/~axch/www/art.pdf

"Revised Report on the Propagator Model": http://groups.csail.mit.edu/mac/users/gjs/propagators/

Original Scheme implementation: http://groups.csail.mit.edu/mac/users/gjs/propagators/propag... [tar file]

Thomas' 2013 Clojure Conj talk: http://www.youtube.com/watch?v=JXOOO9MLvhs

ever since watching sussman's talk http://www.infoq.com/presentations/We-Really-Dont-Know-How-T... i've been curious to see these details. really good post.

the jibe about "our competitors" at the end of the preface left a bad taste in my mouth, though. i'd really like to think that was tongue-in-cheek, and that people as tenured as these guys, if anyone, can afford to think of math and science as the collaborative effort that it is rather than a competition.

I'm sorry, but this has nothing to do with "us programmers." Humans, by and large, don't have a good way to tell a desired story with all of the branch points accounted for. This is why cookbooks don't have recipes with "if you cooked it too long here, switch to this recipe..."

Hell, this goes so far as to how we currently give people directions to cross a state. Consider http://goo.gl/maps/4F89Z. This clearly doesn't work, because there is not a car that can do that trip completely non-stop (well, none that I know of). Yet it would be silly to even try to account for all of the random stops that a person making that trip would want to make. How is this the fault of "embracing integers, strings, lists and trees?"

Having looked over all of the various ways that we give directions to kids lately, I'm becoming more and more convinced that the higher theories of programming are missing something (for a great presentation on this idea, see [1]). Look at how music is written. Look at how lego directions are written. Ikea directions. Shampoo directions. Cooking recipes. All of these are relatively successful methods for communicating often complicated sets of directions. None of them necessarily encode things to the level of detail that our programs require.

Are you concerned that all of my examples are not necessarily life and death detail oriented? See the blueprints of rockets. Sail boat designs from ancient times. Heck, sailing routes.

If anything, I would say that instead of offering fancy ways to escape the consequences, our programs should offer ways to "get back on the desired path" of what was supposed to be accomplished. The canonical "FileNotFound?" Don't put my program somewhere where I have to somehow set things back up to try again. Resume right here once the file has been found. (Sadly, I don't have any good thoughts on this - just late night ramblings and the sketch below.)

[1] http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...
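
A rough Scheme sketch of that idea (hypothetical; Common Lisp's condition/restart system does this properly): the failing step waits in place until the world is fixed, instead of unwinding the whole program:

    ;; stay on the path: retry the failing step, don't abandon the trip
    (define (call-with-file path use-file)
      (let loop ()
        (if (file-exists? path)
            (use-file (open-input-file path))
            (begin
              (display "file not found; put it in place and press enter")
              (newline)
              (read-line)
              (loop)))))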

Aug 07, 2013 · 2 points, 0 comments · submitted by llambda
It is not just spaghetti code, though. Right? It is code that does not constantly check that input fits an expected pattern. Which is most code. I'm reminded of this talk by Sussman.[1] Essentially, in software we are (understandably) adamant about not modifying the input of our application for "pure" functions. Nature does just the opposite, often attempting to make the input work at all costs.

So, whereas a schema-based application that models a user with an address is unable to cope with being given two addresses, a schema-less approach that just stores blobs simply stores what it was given. If code that reads this is unable to make sense of multiple addresses, it will raise an error - not necessarily unlike a user being told to send a package to an address, but given a list of addresses. A sketch of that lenient reading follows below.

[1] http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...
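
Something like this (hypothetical data shape): accept one address or many, and normalize instead of rejecting:

    ;; make the input work: one address or a list of them both
    ;; normalize to a list, and a missing address to the empty list
    (define (addresses-of user)
      (let ((entry (assq 'address user)))
        (cond ((not entry) '())
              ((pair? (cdr entry)) (cdr entry))
              (else (list (cdr entry))))))

    (addresses-of '((name . "Ada") (address . "1 Main St")))
    ;; => ("1 Main St")
    (addresses-of '((name . "Ada") (address "1 Main St" "2 Side St")))
    ;; => ("1 Main St" "2 Side St")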

Summary of the links shared here:

http://blip.tv/clojure/michael-fogus-the-macronomicon-597023...

http://blog.fogus.me/2011/11/15/the-macronomicon-slides/

http://boingboing.net/2011/12/28/linguistics-turing-complete...

http://businessofsoftware.org/2010/06/don-norman-at-business...

http://channel9.msdn.com/Events/GoingNative/GoingNative-2012...

http://channel9.msdn.com/Shows/Going+Deep/Expert-to-Expert-R...

http://en.wikipedia.org/wiki/Leonard_Susskind

http://en.wikipedia.org/wiki/Sketchpad

http://en.wikipedia.org/wiki/The_Mother_of_All_Demos

http://io9.com/watch-a-series-of-seven-brilliant-lectures-by...

http://libarynth.org/selfgol

http://mollyrocket.com/9438

https://github.com/PharkMillups/killer-talks

http://skillsmatter.com/podcast/java-jee/radical-simplicity/...

http://stufftohelpyouout.blogspot.com/2009/07/great-talk-on-...

https://www.destroyallsoftware.com/talks/wat

https://www.youtube.com/watch?v=0JXhJyTo5V8

https://www.youtube.com/watch?v=0SARbwvhupQ

https://www.youtube.com/watch?v=3kEfedtQVOY

https://www.youtube.com/watch?v=bx3KuE7UjGA

https://www.youtube.com/watch?v=EGeN2IC7N0Q

https://www.youtube.com/watch?v=o9pEzgHorH0

https://www.youtube.com/watch?v=oKg1hTOQXoY

https://www.youtube.com/watch?v=RlkCdM_f3p4

https://www.youtube.com/watch?v=TgmA48fILq8

https://www.youtube.com/watch?v=yL_-1d9OSdk

https://www.youtube.com/watch?v=ZTC_RxWN_xo

http://vimeo.com/10260548

http://vimeo.com/36579366

http://vimeo.com/5047563

http://vimeo.com/7088524

http://vimeo.com/9270320

http://vpri.org/html/writings.php

http://www.confreaks.com/videos/1071-cascadiaruby2012-therap...

http://www.confreaks.com/videos/759-rubymidwest2011-keynote-...

http://www.dailymotion.com/video/xf88b5_jean-pierre-serre-wr...

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...

http://www.infoq.com/presentations/click-crash-course-modern...

http://www.infoq.com/presentations/miniKanren

http://www.infoq.com/presentations/Simple-Made-Easy

http://www.infoq.com/presentations/Thinking-Parallel-Program...

http://www.infoq.com/presentations/Value-Identity-State-Rich...

http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

http://www.mvcconf.com/videos

http://www.slideshare.net/fogus/the-macronomicon-10171952

http://www.slideshare.net/sriprasanna/introduction-to-cluste...

http://www.tele-task.de/archive/lecture/overview/5819/

http://www.tele-task.de/archive/video/flash/14029/

http://www.w3.org/DesignIssues/Principles.html

http://www.youtube.com/watch?v=4LG-RtcSYUQ

http://www.youtube.com/watch?v=4XpnKHJAok8

http://www.youtube.com/watch?v=5WXYw4J4QOU

http://www.youtube.com/watch?v=a1zDuOPkMSw

http://www.youtube.com/watch?v=aAb7hSCtvGw

http://www.youtube.com/watch?v=agw-wlHGi0E

http://www.youtube.com/watch?v=_ahvzDzKdB0

http://www.youtube.com/watch?v=at7viw2KXak

http://www.youtube.com/watch?v=bx3KuE7UjGA

http://www.youtube.com/watch?v=cidchWg74Y4

http://www.youtube.com/watch?v=EjaGktVQdNg

http://www.youtube.com/watch?v=et8xNAc2ic8

http://www.youtube.com/watch?v=hQVTIJBZook

http://www.youtube.com/watch?v=HxaD_trXwRE

http://www.youtube.com/watch?v=j3mhkYbznBk

http://www.youtube.com/watch?v=KTJs-0EInW8

http://www.youtube.com/watch?v=kXEgk1Hdze0

http://www.youtube.com/watch?v=M7kEpw1tn50

http://www.youtube.com/watch?v=mOZqRJzE8xg

http://www.youtube.com/watch?v=neI_Pj558CY

http://www.youtube.com/watch?v=nG66hIhUdEU

http://www.youtube.com/watch?v=NGFhc8R_uO4

http://www.youtube.com/watch?v=Nii1n8PYLrc

http://www.youtube.com/watch?v=NP9AIUT9nos

http://www.youtube.com/watch?v=OB-bdWKwXsU&playnext=...

http://www.youtube.com/watch?v=oCZMoY3q2uM

http://www.youtube.com/watch?v=oKg1hTOQXoY

http://www.youtube.com/watch?v=Own-89vxYF8

http://www.youtube.com/watch?v=PUv66718DII

http://www.youtube.com/watch?v=qlzM3zcd-lk

http://www.youtube.com/watch?v=tx082gDwGcM

http://www.youtube.com/watch?v=v7nfN4bOOQI

http://www.youtube.com/watch?v=Vt8jyPqsmxE

http://www.youtube.com/watch?v=vUf75_MlOnw

http://www.youtube.com/watch?v=yJDv-zdhzMY

http://www.youtube.com/watch?v=yjPBkvYh-ss

http://www.youtube.com/watch?v=YX3iRjKj7C0

http://www.youtube.com/watch?v=ZAf9HK16F-A

http://www.youtube.com/watch?v=ZDR433b0HJY

http://youtu.be/lQAV3bPOYHo

http://yuiblog.com/crockford/

ricardobeat
And here they are with titles + thumbnails:

http://bl.ocks.org/ricardobeat/raw/5343140/

waqas-
how awesome are you? thanks
Expez
Thank you so much for this!
X4
This is cool :) Btw. the first link was somehow (re)moved. The blip.tv link is now: http://www.youtube.com/watch?v=0JXhJyTo5V8
Gerald Jay Sussman (of SICP): We Really Don't Know How To Compute!

http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

agentultra
I consider Sussman to be one of the eminent geniuses of computing. This talk is one of many that blew my mind. Time and again he demonstrates that his use of Scheme is simply for demonstration (I mean to emphasize this because the association of SICP/Sussman with Scheme is often regurgitated in FUD). It's not the languages we use that are important -- it is the models by which we compute things that matter!
minikomi
His SICP lectures are also a delight. He seems genuinely happy to be talking about computing - and turning people on to what he loves. A great teacher.
taeric
Because these just got added to my watch list, here is a link for others. :) Thanks for letting this ignorant programmer realize that these were available!

http://ocw.mit.edu/courses/electrical-engineering-and-comput...

Gerald Sussman: We Really Don't Know How To Compute (Strange Loop 2011) [1h04]

http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

Nov 14, 2012 · 2 points, 0 comments · submitted by fbuilesv
When I saw that video I immediately thought of what Sussman said about Haskell a year or so ago. “Haskell is the most advanced of the obsolete programming languages!” [1] I think Google TV is the most advanced form of an obsolete medium.

I don't have a TV, and all of my friends who do rarely use theirs. Added together, I'd say I watch about an hour a fortnight of TV material. And that's mostly on my phone or computer's screen.

This might be a generational thing. With tablets and smartphones everywhere, I don't feel I'm missing out. On the other hand, my father needs the constant babble of the TV. It could also be that my circles and I are outliers against the norm, but I very much doubt it.

[1] http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

paganel
> I don't have a TV, and all of my friends who do rarely use theirs. Added together, I'd say I watch about an hour a fortnight of TV material. And that's mostly on my phone or computer's screen.

For watching sports, TV is still the best option out there. For the rest, I think you're spot on. I know it's anecdotal, but I haven't watched a movie on TV for more than one year, and "entertainment" TV shows I avoid completely.

jergason
Disclaimer: I work at i.tv, makers of Nintendo TVii[0], so I have some vested interest in TV.

This is definitely a generational/demographics thing. The average person spends 4 hours 35 minutes a day watching live TV. This is declining, but compare that to the 46 minutes a day spent doing everything else on a TV, including watching DVDs, playing games, and DVR [1].

So while the amount of live TV being watched is declining, it still dwarfs other media consumption among the general public.

That said, I think the future is in things like Google TV or Nintendo TVii integrating with DVR or streaming video providers to provide better discovery and better interactive experience while you are actually watching something.

[0] http://reviews.cnet.com/8301-33199_7-57512253-221/nintendo-t... [1] http://nielsen.com/content/dam/corporate/us/en/reports-downl...

myko
Are you and your friends not terribly interested in sports? It's hard to catch CFB games on my tablet or PC (at least legally and with good quality, live).

This is probably the main reason my wife and I still use our TV regularly.

ImprovedSilence
I don't have TV, but I make do with sports by using a mixture of questionable streams, friends' places, and sports bars.
saljam
Ha. That's a good point. I don't really have much interest in spectator sports. Which brings another point to mind; I found following TdF on the web with all the interactive stats easily accessible very enjoyable. It's a shame there wasn't much live video coverage.

Then again, I'm not really the man to talk about sports.

kaolinite
I don't watch TV but I do use my TV quite a bit. I watch shows on Netflix, buy movies on iTunes (via Apple TV) and play games on the PS3.

As for your comment on tablets, they're best IMO when coupled with a big screen. Nothing is as cool as browsing show descriptions and reading reviews on the iPad before pressing the play button which starts playing it on the TV (via AirPlay - sadly the Apple TV's Netflix app and the iPad's Netflix app don't talk to each other so you end up streaming the content from Netflix to the iPad, then streaming it to the TV).

tangue
Out of topic but Sussman's talk just blew my mind. Thanks for this gem.
Goronmon
> It could also be that my circles and I are outliers against the norm, but I very much doubt it.

This seems likely. I personally don't know anyone who doesn't own a TV. In fact, I'm one of the rare few who doesn't have some form of cable or satellite service.

eitally
My household didn't contain a TV for eight of the previous nine years. We ditched it after realizing we'd just turn it on in the evening and vegetate unproductively until bedtime. Last year we bought a house that came with a TV and ended up purchasing a Roku for Netflix, Hulu, etc. That covers everything but live sports.
Strange Loop 2011, "We really don't know how to compute", Gerald Sussman (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...).

That's not entirely fair though, insofar as I'm a sucker for a Sussman talk.

Beyond that, I'd have to say Sam Aaron's Overtone presentation @ Clojure Conj 2011.

At the moment, it strikes me that the obvious common element of both talks and speakers is their absolutely infectious enthusiasm for their respective specialties.

jozeph78
Sam crushed it. It felt like a TED talk. Hickey and Sussman at STLoop were both awesome, but Sam takes the cake. I hope the video goes up soon.
bldurling
Sam is a complete rock star. One of the few people I know who can get showstopper applause in the middle of a talk.
Oct 27, 2011 · 181 points, 30 comments · submitted by puredanger
iradik
In an older talk at HP, Sussman's view was that we are at the alchemy stage of understanding computer science; hence programmers are often compared to wizards. I guess not much has changed in 20 years.
pyrtsa
Anybody noticed how closely this, being from the programmer's point of view, is related to what Bret Victor [1] has been talking about from the UI perspective? Take http://worrydream.com/#!/ScrubbingCalculator, or any of the "Kill Math" or "Explorable Explanations" articles, for example.

Combining a good UI with the propagators as explained by Sussman in this video would make a disruptive product in any field where decision making is needed. (Not that such tools don't already exist in the hands of some, of course.)

[edit] fix typo

msutherl
CNMAT's o.dot library for Max has a similar notion. As messages pass through the system and produce new results, the resulting message preserves its entire history.
alxv
I have got the same feeling. It is this idea of moving away from the pencil-and-paper operating system for modeling complex systems. Victor is more about making computing explorable and accessible to the masses, whereas Sussman is more about figuring out new ways to compute things. However, both are basically arguing that our current abstractions are not enough anymore. That is, we need to revisit some of the assumptions that were made 40 years ago about how programs should be structured.
gtrak
It's too bad we only saw his face and not his interaction with the slides; watching him explain the circuit and math examples live was like watching an Olympic gymnast.
puredanger
He originally asked to use an old-school transparency projector and we had a camera-enabled one ready for him but he ended up creating actual slides. I kind of wish he'd actually had the transparencies!
stralep
I was happy that Gerald Sussman shares my fear of floating-point numbers :)

(because I'm not alone, not because he's afraid)

[edit] clarification

CountHackulus
I love listening to Sussman. You can just hear the passion in his voice.
CoffeeDregs
Generally, awesome. But:

The 1GB for a human thing, whether or not it's off by a factor of 10, is the cost to build an infant and dismisses the real complexity of a human. The cost to build a high functioning adult is vastly higher. I don't know what it would cost to build me now (complete with screwed up kidney!), but I'm sure that it's quite a bit higher than 1GB: I've learned English, Spanish, love (or so my wife would say), loveV2 (or so my kids would say), basically every computer language, how to catch a football, how to WALK, how to have sex, how to have a conversation over cocktails, etc.

The magic of computers is that once N programs have run through the process of learning to do something, we can clone it. Getting to "how to speak English" is going to be hard; building 1e9 machines to do it will be relatively easy.
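
For reference, the 1GB figure is just this back-of-envelope arithmetic over the genome (uncompressed; the real thing compresses further):

    ;; ~3.2e9 base pairs x 2 bits per base (A/C/G/T), 8 bits per byte
    (define base-pairs 3.2e9)
    (define gigabytes (/ (* base-pairs 2) 8 1e9))
    gigabytes ; => 0.8, i.e. roughly 1GB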

robotresearcher
The 1GB was explicitly referred to as your ROM. Sussman does not talk about the streaming data that hits you at run time.
CoffeeDregs
meh... He referred to the 1GB a couple of times and I didn't hear him talk about how humans are much more complicated. Also [as an atheist], my point is not to argue against Sussman, but to point out that calculating the storage requirements of a human is complicated. Putting aside all the guts, walking, eating, etc., we store an immense amount of information. What is incredible is that 1GB is what is required to specify the creation and evolution of the data structure necessary to become an adult (and to die).
kenjackson
I really didn't like his 1GB example in the beginning. You can't compare humans to computer programs in terms of capability. Computer programs do very different things. Sure we can spot the missing triangle faster, but we can't sort 1M names very quickly.

And on flexibility of the human source code -- sure a small change results in a cow. But a small change in Windows (default registry settings) can make Windows start in its standard shell, command line, safe mode, Media Center mode, etc...

While I can be in awe of the complexity and power of living organisms, I don't think it's all that useful to compare them to programs -- at least not based on our current understanding of biology.

swannodette
I think you're missing the point. Computer programs are processes. There's a whole lot of amazing biological processes out there. Why are current human formulations about the nature of process (programs) so brittle compared to these other processes we observe? It's a humbling talk. Our notions of computation are somewhere at the primordial soup phase.
kenjackson
They're more brittle because they can be. For example, in computers loading and execution of a program generally occur w/o transcription error.

With that said, there are things about computer programs that are hard problems in human biology (and note the analog is really more of an OS to an animal -- the animal is a set of processes, not just one process). I can easily use libraries in my current program. Transplants are still non-trivial in humans. I can kill my shell and it will come back. I can even hit an unrecoverable error, reboot, and things will usually still work fine. I can probably remove half of the files on my computer and it will still work fine. I can hibernate my computer, store the state and send it to a different piece of hardware. I can take an image of my machine and clone it to 100 other machines. I can add new features and upgrade my OS -- you generally can't do that to my body, at least not in any satisfactory way.

Sure, there are some animals that have non-necessary components, just like in operating systems. But if you remove the heart or the lungs or the brain from most animals, they'll die. Cancers can kill most animals; there's no real equivalent for operating systems. There is generally nothing that will flat out kill an OS.

bdonlan
> Transplants are still non-trivial in humans.

The biggest problem behind human transplants is that our defense mechanisms (which is a hard problem in computing!) will kill the foreign material. So it's a trade-off, not a hard problem.

In any case, the real difference between biological DNA and programs is that the latter is designed to be modified in a _directed_ way. You could think of DNA as a highly compressed program which is modified _in its compressed form_. In evolution, changes are made randomly, so this isn't really a problem - if anything, the magnification effect is a good thing. But in computer programs, we know what we want to change, and don't want to have to make several million random changes to try to find one that brings us closer to the goal. And so computer programs are more brittle - small changes have small, predictable effects - while human DNA is more flexible, but at the expense of predictability.

contextfree
Swapping out libraries still isn't entirely trivial in programs either (DLL/dependency hell, etc.)
lukeschlather
The OS is analogous to electrical impulses in the animal brain. The real thing that separates animal brains from computers is the lack of persistent storage media that retain information when you remove power or other operating components.
gtrak
He's also saying our programs can't be un-brittled yet, and we want to solve more dynamic problems.
onemoreact
There are plenty of programs written based on the assumption that there will be tons of memory errors, where systems can not only detect problems automatically but try a range of solutions to fix the HW problems without intervention. But it's more a question of cost to develop vs deploy. If you're sending a probe to Saturn or sending 100 million devices into the field to monitor power transmission without interruption for years, you build a very different system vs servers that can be monitored by people.

PS: Don't forget your DNA is a single program that's been running continually for over 3 BILLION years because at no point did any of your ancestors die before having offspring.

andrewcooke
this seems vaguely like the kind of thing that hofstadter was working on with copycat etc (particularly if you think of hofstadter's version as being more complex because he didn't have the computing power to run many things in parallel and so needed higher level control to allocate resources).
Karellen
"we are way behind in creating complex systems such as living organisms"

Well, yes, but the genome has a ~4,000,000,000 year head start.

wingo
Non-flash link?
gregsadetsky
Don't know about the video, but the slides can be downloaded here in PDF: https://github.com/strangeloop/2011-slides/raw/master/Sussma...
gtrak
You can set your user agent to iPad to get an mp4.
tednaleid
http://d1snlc0orfrhj.cloudfront.net/presentations/11-sep-wer...

You can get it with Chrome by launching with the iPad user agent and looking at the source. Here's a gist with an alias for launching Chrome on OS X with the iPad user agent.

https://gist.github.com/1313794

Leynos
I just used wget, and it seemed to work. Thanks.
wingo
Thanks!
hakl
The page sends the filename to desktop browsers base64 encoded. Cute :).
logjam
Links to background of some of the work Gerry described in his talk:

http://dspace.mit.edu/handle/1721.1/44215

http://groups.csail.mit.edu/mac/users/gjs/propagators/

puredanger
One of my favorite talks from Strange Loop... Sussman has amazing breadth across several domains (math, physics, electrical engineering, computing) as well as a long perspective on where we've been and where we can go. He (and Julie) were a true joy to have at Strange Loop this year.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.