HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Mindstorms: Children, Computers, And Powerful Ideas

Seymour A Papert · 27 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Mindstorms: Children, Computers, And Powerful Ideas" by Seymour A Papert.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
In this revolutionary book, a renowned computer scientist explains the importance of teaching children the basics of computing and how it can prepare them to succeed in the ever-evolving tech world. Computers have completely changed the way we teach children. We have Mindstorms to thank for that. In this book, pioneering computer scientist Seymour Papert uses the invention of LOGO, the first child-friendly programming language, to make the case for the value of teaching children with computers. Papert argues that children are more than capable of mastering computers, and that teaching computational processes like de-bugging in the classroom can change the way we learn everything else. He also shows that schools saturated with technology can actually improve socialization and interaction among students and between students and teachers. Technology changes every day, but the basic ways that computers can help us learn remain. For thousands of teachers and parents who have sought creative ways to help children learn with computers, Mindstorms is their bible.
HN Books Rankings
  • Ranked #14 this year (2021)
  • Ranked #18 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
Fine article, and what it (tongue-in-cheek) calls “cheating” is just what we self-taught automators call “making the machine do all the crapwork for you”. It’s just unfortunate that the greater tooling and culture currently available is such a sprawling hostile ballache that even the most enthusiastic cheater will be driven to conclude that this shit would be (and likely is) quicker and easier just to do by hand.

The foundational mistake is “teaching programming”. The goal should be to instill (“teach”) critical thinking and analytical problem solving skills, and a “programming environment” is just another tool, like pencil and paper, which the student can use when exercising those skills on real-world problems.

Whereas “teaching programming” is teaching language features: what all the buttons are and what they do when you push them. Thus mastery of button-pushing becomes fêted as an end in itself, instead of being just some tiresome but necessary tool-practising crapwork (like memorizing the ten-times tables and drawing all the letters from A to Z) that you have to go through on the way to achieving your true goals (which can be anything).

Once again, I point to Papert’s Logo[1] as a good demonstration of just how simple that PE can—and should—be to serve that purpose. Logo’s core concepts can be communicated in just three steps:

1. This is a Word.

2. This is how you Perform words.

3. This is how you Add your own words.
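The three steps above can be sketched in a few lines of Python (a rough illustration only; `sum` and `double` here are made-up words, not actual Logo primitives):

```python
words = {}                                 # the platform's dictionary of words

def define(name, fn):                      # step 3: this is how you add your own words
    words[name] = fn

def perform(name, *args):                  # step 2: this is how you perform a word
    return words[name](*args)

define("sum", lambda a, b: a + b)          # step 1: "sum" is a word
define("double", lambda n: perform("sum", n, n))  # new words compose existing ones

print(perform("double", 21))               # prints 42
```

Everything else — the pre-defined vocabulary, the discoverable dictionary — is just more entries in `words`, which is the point: the core model fits in your head.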

Anything else that the platform provides, such as its dictionary of pre-defined words, can and should be explorable and discoverable; something today’s hardware and software can support and encourage without blinking. Let the students teach that crap to themselves if/as/when they need it, and keep the adults on hand just to observe when students start running themselves down a dead-end and prompt them to other possibilities they had not realized/considered.

Oh, and it really should go without saying that the PE’s error messaging must be top of its class. Because errors aren’t the “wrong answers” of which a student should feel embarrassed and ashamed, but fresh questions in their own right which spark awareness, exploration, self-correction, and insight.



A lot of people hold this belief that knowing how to do X with an "incorrect form" is worse than not knowing at all, if you want to progress at doing X.

In programming we have debugging. You have a program that does X, but with some bugs. You later improve the program by removing the bugs.

Why can't we do this in "real life" as well? You learn how to add multi-digit numbers from right to left. You then later relearn that by going from left to right. You learn to swim with your head above the water, then later learn to keep your head in the water, and turn it every two strokes to get a quick breath.

In fact, I read about this concept of "debugging" bad habits exactly in the context of juggling. Seymour Papert covers this in Mindstorms [1], p 111. He explains that the most common "bug" that prevents people from performing 3-ball juggling is following one ball with the eyes. Once you are aware of that, the fix is quite easy: keep your eyes pointed at the apex of the ball's trajectory. In a later chapter he goes on to say that other things can be "debugged" as well; one example is relearning skiing to replace a v-type position with a parallel ski position.


I am a pro artist and IMHO almost every single artist in the world probably started by learning terrible habits that they had to painfully unlearn. In modern times an astounding number of them (myself included!) have a period where they resist this painful process with cries of "It's my style!".
So, do I learn to draw boxes and rectangles in perspective first and then draw people from boxes as Loomis suggests?

Or is that just one of possible approaches? Or does pretty much everyone draw from boxes?

For programmers, I like to make the analogy that learning to draw is like building a 3d renderer on your wetware.

You start by drawing boxes. And balls, and tubes, and eggs, and cones, and prisms, and a bunch of other shapes that are simple enough to describe in a few brief lines of code. Get good at them, learn to draw them from a lot of angles, learn how to think about them as three-dimensional shapes and how light plays across them.

Then start laying out rough, crude versions of things using these primitive shapes. What you use for a particular thing depends on what you're drawing and what suits your approach. Cars are big boxy things, maybe big wedge things if they're really aerodynamic. People are mostly collections of long tubes, though some parts can get very boxy; eggs are helpful for some ways of constructing skulls too. A lot of people go through a phase where they like to draw stick figures with balls at the joints. I've never really been a fan of that and find it tends to result in stiff figures, but some people love it. Sausage people, box people, ball-and-stick people — there's a lot of ways to approach this, and a pro will have played with them all and found out which one works best for them most of the time, and which ones work best for them in situations where their favorite way breaks down.

You work out a pose this way, as a bunch of sticks and balls and boxes and whatnot, then you have a solid framework to work on top of and sort of "carve" into a more realistic shape by applying your knowledge of anatomy. Which is a thing that takes multiple years of study to acquire, human bodies are complex things!

(Boxes are especially useful because there are some simple tricks you can use to make it easy to take a flat view of something and project it into perspective - if you draw an X from corner to corner on the face of a box in perspective, then you can draw a line that goes through the center of that X and lines up with the same vanishing points the sides are on to divide the face in two in perspective, then use a grid built up that way to transfer a head-on drawing into perspective and work from there.)

Eventually, as you progress as an artist, you can do more and more of this in your head. Most of the time I just lay down some really sloppy, loose shapes to plan out a pose, with a lot of parts going pretty quickly to a recognizable caricature of that body part that I can quickly turn into something good-looking when I come back and throw down some loose solid color shapes that I quickly refine into something with an appearance of anatomy, then come back later and add some shadows/highlight to really bring out the forms. I'll put out a little bit of cubes/balls/eggs/cylinders/etc when I really need to think about a weird angle, but every time I do this a little of this lingers in my head for next time, and drawing that angle again becomes something I can kind of... pull out of cache, so to speak, because I remember all the thinking I had to do on the page last time.

Loomis' books are super solid and have a lot to teach you. Bridgman is some super useful reference for anatomy too. But the teaching that really helped me the most was a life drawing for animation class whose instructor was working out of the Vilppu drawing manual, that stuff is amazing and will help to keep you thinking about how to instill a sense of life into all your work from the ground up.

The same stuff applies to simpler cartoon characters, too. You just use different proportions and don't spend as much time trying to nail down anatomy that isn't absolutely necessary to the story the drawing is telling.

I identify with this quite deeply.

One thing that your idea prompts is this: as I have gotten better at learning things, I have gotten better at just adopting as-close-to-perfect form as I can from the get go.

When I learned to play guitar at age 20, I had horrible form, and I did go through a period of unlearning habits (after a period of trying to have a "style" LOL).

When I learned to play pedal steel guitar in my late 30s, I was careful to start with good habits from the get-go. Same with snow skiing, banjo, and yoga. :D

I dunno how I'd approach this lesson when dealing with younger folks... it was a painful process, but learning that starting with good habits/form makes things so much faster and easier is maybe just a thing people have to experience on their own.

This is very true! After I'd mastered drawing to the point where I can pay my rent doing it, it became a lot easier to learn other stuff.

My pole dance teacher once pointed out how different my style of training was: most students would try a difficult new move a few times, then go back to practicing stuff they were ultra-confident at for a while, while I'd be more likely to keep on trying the new move with a bunch of different little variations, and to pay a lot of attention to her when she'd come over and point out things I was doing wrong, especially if it was a wrong thing that would make me more likely to hurt myself!

I kinda feel like I can do this because I remember how I improved my art by the long, painful process of analyzing what I was doing wrong. And also because I have much less ego invested in the new thing - I already have a thing I can do the heck out of, I don't give a shit if I look like a bumbling beginner when that is what I am.

"How to learn" is a skillset, which you have to learn along with everything else you learn in the first two or three decades of your life. Once you have it down it's a lot easier to learn stuff if you're willing to put the energy into doing it right.

I learned to swim on my own as a little kid. 30 years later, I decided to join swimming classes; I saw that swimming is extraordinarily complex. There are too many things to learn at the same time for someone to be able to pay attention and learn proper form for all of them. Inevitably, you'll learn proper form for one thing, and incorrect for many others, then, with one good habit in the bag, you can start focusing on the next one, then the next one, then the next one. From time to time, you will fall back to the old habits for some certain part of the motion, so you'll need to revisit it, and debug it again.

Tom Brady, who many people consider the greatest quarterback in the history of American football, still has a throwing coach (Tom House [1]), and he's still debugging his throwing motion. After 20+ years of throwing in a professional league.

So, for sure, unlearning habits is difficult, but learning only proper form from the start is probably an exceedingly rare exception. I think for most people the process of learning will involve learning incorrect form first, and attempting to fix this later.


Programs aren't humans. They don't "remember" bad code and resist change.

Or, do they? If a program is built with a bad architecture, but "works" for all the inputs seen so far, it's much harder to fix than if it were built with good patterns from the start, even if it has some mistakes that need to be fixed.

For juggling in particular, it's also my experience that teaching complete novices is easier. People who've juggled a bit often listen less, and get frustrated with breaking things down, and just go back to doing what they know.

At the British Juggling Convention I taught a workshop for absolute beginners to pass 5 clubs. Most of those who'd never picked up a club got on pretty well. Whereas some people who'd passed clubs in a different way were skeptical; you could tell their heart wasn't in it, and then unsurprisingly some of them didn't get it.

I do understand the resistance to going back to basics to fix things though. I can hoop (as in "hula-hoop") fairly well, and know some tricks. But I mostly hoop in one direction (counter-clockwise). If I try to do a trick clockwise, it's frustrating and I don't feel like carrying on, so I tend to give up, or go back to hooping counter-clockwise (fortunately one of my favorite tricks involves reversing the direction of the hoop). This weakness of mine was actually really useful (back when I still hooped with people). If I was showing someone else a trick, I could try it clockwise, which was an excellent reminder of how hard the trick really is, and to understand how/where it goes wrong.

Jun 05, 2020 · hhas01 on Homoiconicity Revisited
Nope. It’s not about “parsing”, it’s about representation.

Languages such as Python and C draw clear distinction between literal values on one hand and flow control statements and operators on the other. Numbers, strings, arrays, structs are first-class data. Commands, conditionals, math operators, etc are not; you cannot instantiate them, you cannot manipulate them.

What homoiconic languages do is get rid of that (artificial) distinction.

Lisp takes one approach, which is to describe commands using an existing data structure (list). This overloading means a Lisp program is context-sensitive: evaluate it one way, and you get a nested data structure; evaluate it another, you get behaviors expressed. The former representation, of course, is what Lisp macros manipulate, transforming one set of commands into another.
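That dual reading can be sketched in Python rather than Lisp (a hedged illustration of the idea, not of any real Lisp's evaluator): the same nested list is inert data you can inspect and rewrite, or a program you can run.

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def evaluate(expr):
    """Treat a nested list as code: the head names the operation, the rest are arguments."""
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))
    return expr                      # atoms evaluate to themselves

program = ["*", 2, ["+", 3, 4]]      # read as data: just a list

# A toy "macro": rewrite the program-as-data before evaluating it.
doubled = ["+", program, program]

print(evaluate(program))             # 14
print(evaluate(doubled))             # 28
```

The `doubled` rewrite is the macro move in miniature: one set of commands transformed into another, using nothing but ordinary list manipulation.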

Programming in Algol-descended languages, we tend to think algorithmically: a sequence of instructions to be performed, one after the other, in order of appearance. Whereas Lisp-like languages tend to encourage more compositional thinking: composing existing behaviors to form new behaviors; in Lisp’s case, by literally composing lists.

Another (novel?) approach to homoiconicity is to make commands themselves a first-class datatype within the language. A programming language does not need swathes of Python/C-style operators and statements to be fully featured; only commands are actually required.

I did this in my kiwi language: a command is written natively as `foo (arg1, arg2)`, which is represented under the hood as a value of type Command, which is itself composed of a Name, a List of zero or more arguments, and a Scope (lexical binding). You can create a command, you can store it and pass it around, and you can evaluate it by retrieving it from storage within a command evaluation (“Run”) context:

    R> store value (foo, show value (“Hello, ”{$input}“!”))
    R> input (“Bob”)
    #  “Bob”
    R> {$foo}
    Hello, Bob!
Curly braces here indicate tags, which kiwi uses instead of variables to retrieve values from storage. (Tags are first-class values too, literally values describing a substitution to be performed when evaluated.)
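Since kiwi itself is proprietary, here is a rough Python model of what a first-class command value amounts to, per the description above (all names are assumptions for illustration, not kiwi's API):

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    name: str                                   # a Name
    args: list = field(default_factory=list)    # a List of zero or more arguments
    scope: dict = field(default_factory=dict)   # a Scope (lexical binding)

# A command is plain data: you can construct it, store it, pass it around...
greet = Command("show value", ["Hello, Bob!"])

# ...and evaluate it later by dispatching on its name in a "Run" context.
handlers = {"show value": lambda s: s}
result = handlers[greet.name](*greet.args)
print(result)                                   # Hello, Bob!
```

The point is simply that nothing distinguishes a command from any other value until a Run context chooses to evaluate it.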


When it comes to homoiconicity, Lisp actually “cheats” a bit. Because it eagerly (“dumbly”) evaluates argument lists, some commands such as conditionals and lambdas end up being implemented as special forms. They might look the same as every other command but their non-standard behaviors are custom-wired into the runtime. (TBH, Lisp is not that good a Lisp.)

Kiwi, like John Shutt’s Kernel, eliminates the need for special forms entirely by one additional change: decoupling command evaluation from argument evaluation. Commands capture their argument lists unevaluated, thunked with their original scope, leaving each argument to be evaluated by the receiving handler as/when/only if necessary. Thus `AND`/`OR`, `if…else…`, `repeat…`, and other “short-circuiting” operators and statements in Python and C are, in kiwi, just ordinary commands.
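A hedged sketch in Python of what decoupled argument evaluation buys you: callers pass thunks, and each handler decides which (if any) to force. The names are illustrative, not kiwi's or Kernel's.

```python
def IF(cond, then_, else_):
    # Only one branch is ever evaluated; the other thunk is never run.
    return then_() if cond() else else_()

def AND(*thunks):
    # Short-circuits like the operator, but it's an ordinary function here.
    result = True
    for t in thunks:
        result = t()
        if not result:
            break
    return result

x = 0
print(AND(lambda: x != 0, lambda: 10 / x > 1))             # False; never divides by zero
print(IF(lambda: x == 0, lambda: "zero", lambda: 10 / x))  # zero
```

With eager evaluation, `10 / x` would blow up before `AND` ever ran — which is exactly why eager languages must hard-wire `and`, `or`, and `if` as special forms.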

What’s striking is how much non-essential complexity these two fundamental design choices eliminate from the language’s semantics, as well as from the subsequent implementation. kiwi has just two built-in behaviors: tag substitution and command evaluation. The core language implementation is tiny; maybe 3,000 LOC for six standard data types, environment, and evaluator. All other behaviors are provided by external handler libraries: even “basics” like math, flow control, storing values, and defining handlers of your own. Had I tried to build a Python-like language, I’d still be writing it 10 years on.

There are other advantages too. K&R spends chapters discussing its various operators and flow control statements; and that’s even before it gets to its stdlibs. I once did a book on a Python-like language; hundreds of pages just to cover the built-in behaviors: murder for me, and probably not much better on readers.

In kiwi, the core documentation, covering the built-in data types and how to use them, is less than three dozen pages. You can read it all in half an hour. Command handlers are documented separately, each as its own standardized “manpage” (currently auto-generated in CLI and HTML formats), complete with automated indexing and categorization, TOC and search engine. You can look up any language feature if/when/as you need it, either statically or in an interactive shell. Far quicker than spelunking the Python/C docs. A lot nicer than Bash.

Oh, and because all behaviors are library-defined, kiwi can be used as a data-only language à la JSON just by running a kiwi interpreter without any libraries loaded. Contrast that with JavaScript’s notorious `eval(jsonString)`. It wasn’t created with this use-case in mind either; it just shook out of its design as a nice free bonus. We ended up using it as our preferred data interchange format for external data sources.

Honestly, I didn’t even plumb half the capabilities the language has. (Meta-programming, GUI form auto-generation, IPC-distributable job descriptions…)


Mind, kiwi’s a highly specialized DSL and its pure command syntax makes for some awkward-reading code when it comes to tasks such as math. For instance, having to write `input (2), + (2)` rather than the much more familiar `2 + 2`, or even `(+ 2 2)`. Alas it’s also proprietary, which is why I can’t link it directly; I use it here because it’s the homoiconic language I’m most familiar with, and because it demonstrates that even a relative dumbass like me can easily implement a sophisticated working language just by eliminating all the syntactic and semantic complexity that other languages put in for no better reason than “that’s how other languages do it”.

More recently, I’ve been working on a general-purpose language that keeps the same underlying “everything is a command” homoiconicity while also allowing commands to be “skinned” with library-defined operator syntax to aid readability. (i.e. Algebraic syntax is the original DSL!) It’s very much a work in progress and may or may not achieve its design goals, but you can get some idea of how it looks here:

Partly inspired by Dylan, a Lisp designed to be skinnable with an extensible Pascal-like syntax, and also worth a look for those less familiar with non-Algol languages:

And, of course, by Papert’s Logo:

I recommend reading Seymour Papert's Mindstorms:

It gives you a powerful framework to think about learning in children (and adults), how they can learn programming, and how they can learn many other STEM and non-STEM topics using programming.

I'm sorry to say, but I don't believe a dismissive attitude can ever be described as professional.

I believe everyone can be taught to program, and the choice of language, semantics, and syntax has a profound effect on how far people can get, and what frustrations they face.

There are people out there with 0 formal training who run entire businesses on Excel, the most widely used programming language bar none (it's notable that in Excel, rows start at 1 and not 0. There is a reason for this). Ask a 7 year old to use C and they're not going to get very far. Give a 7 year old Logo, and they'll be writing programs with very little instruction, with results I've seen college freshmen struggle with.

I teach a summer robotics program to middle schoolers. We used to teach it in C++ because that's what the SDK came written in. In this mode, we spent most of the time getting them to think like the compiler, teaching them about memory layout, allocation, compiling, headers, preprocessors, etc. because they constantly ran into frustrations due to the design choices of C++. They never left the session with a firm understanding and confidence around programming because they spent all their time trying to build a model from scratch in their head without any relation to their own world.

Then we switched to Matlab. With one uniform data structure, a REPL, 1-based indexing, etc. they were much more comfortable, and they were able to make the robots do amazing things for their age. The most impressive thing I've seen is making a robot choir through writing a distributed protocol to synchronize the robots' notes. They were able to do this because the language, Matlab, got out of their way, which allowed them to focus on the task and relate it back to something they knew very well: music.

All I'm saying is this attitude of "Oh, you don't understand this thing we've built and these arbitrary limitations frustrate you, therefore you shouldn't even try it in the first place" is just toxic, given the evidence I've seen that people can learn and do amazing things if we give them a fighting chance.

Required reading on this subject:

– Economics / sociology –

A Farewell to Alms

Cartesian Economics

The 10,000 Year Explosion

The Righteous Mind


– Philosophy –

Tao Te Ching


– Autobiography –

Surely You're Joking, Mr Feynman

Recollections of Eugene Wigner

– Fiction –

Fahrenheit 451


> Dune

Do you mind explaining what's great about Dune (I have not read it yet, so maybe without major spoilers ...)?

It's a Messiah story set in the far future. I included it here because it had an impact on the way I understand history (I prefer to leave that a bit cryptic).

As a work of fiction I'd call it good but not great. But at the moment I can't think of a work of fiction I'd call great, so I'm probably not the best critic on that point.

Everybody seemed to hate the 1984 film adaptation by David Lynch but I think it's pretty good. The Syfy miniseries got much better reviews but I thought it was only so-so. The film doesn't really spoil the book, which is kinda cool, but may be easier to follow and more fun to watch after having read it. Last but not least, I really enjoyed the recent documentary Jodorowsky's Dune...

In addition to all that has been said, if you really want to understand what Logo is, what it teaches, why, etc - then you need to read about it from the man who invented it:

Want to know why Lego Mindstorms exists? Well...

That's the work you need to read - but really, learn about the man, learn about Logo. As others have noted, it's more than just turtle graphics - so much more. Unfortunately, educators still have not grasped his ideas fully, and if you look closely, what is often touted out there for teaching children and others programming - is essentially his ideas, reimplemented poorly.

He has written more on the subject than that one book; and his thoughts and ideas (and Logo itself) aren't really about teaching children programming, but teaching children how to think computationally, algorithmically. He saw how and where things were heading long before many others, and he worked to try to get people prepared. Sadly, all people grasped was turtle graphics, but not the larger picture.

I often wonder where we'd be today had more people truly understood and implemented his (and, to be honest, his "muse" / "mentor" / "inspiration", if you will, in Piaget) methods and thoughts on teaching. Most likely in a much better position as a society...

The beauty of Scratch and other similar tools is that instead of the teacher asking questions, the child learns to ask their own questions.

If you are interested in learning more about this mindset, you should read Mindstorms by Seymour Papert (RIP).

Scratch can be a "gateway drug" to languages that professional programmers use. The extensions/abstractions of Scratch from Berkeley that deal with making it do complicated things seem like putting a fish on a bicycle. Sometimes, you just have to leap and try to not fall.

You are so right! There is a significant movement in the education industry to move toward "Project Based Learning." Papert is one of the founders as I'm sure you know.

Here's a book I recommend if you're interested in learning more about this subject too:

I have been inspired by Papert/Little when building my curriculum to teach kids to code. Would love to hear what you think :)

Mindstorms first edition is now freely available:
Thank you!
Dec 08, 2016 · kbouck on Lego Is the Perfect Toy
That would be amazing considering his relation to Seymour Papert [1], who:

- Co-invented Logo Programming language

- Authored "Mindstorms" [1]

- Collaborated with Lego to produce (Logo-programmable) Lego Mindstorms.

- Was made co-director of the MIT AI Lab by...... Marvin Minsky



Aug 01, 2016 · oulipo on Seymour Papert has died
Seymour Papert was an inspiring and caring researcher, and he will inspire many generations to come. His work was truly groundbreaking, subtle and profound, and I encourage everyone to read some of his books, notably Mindstorms and The Children's Machine.

Aug 01, 2016 · pfooti on Seymour Papert has died
Wow, RIP - he was a big inspiration to me professionally both in terms of the tech he helped build and in terms of the learning theory of which he was a proponent. (I also work at the intersection of education, learning sciences, and technology).

[0] is a great video from the early days of LOGO, and he's pushing notions of programming for all that felt new and revolutionary in the 2000s.

Mindstorms [1] is a great book if you're interested in his ideas about learning.



Putting aside all the ad-hominem and everything-is-terrible, I think I learned a lot from following the references Tef makes in this talk.

Some references (sorry for the formatting, if this becomes a thing I'll do the wiki and the logo):


Blub Paradox:

Perl and 9/11:


Waterfall (same pdf, linking from 2 sources):

Conway's law:

Unrelated, Pournelle's Iron Law of Bureaucracy (I just like this law):

X-Y Problem:

Atwood, Don't Learn to Code:

Wason selection task:


Amazon Links, no referral:

TL;DR: this, and this guy really does not like Jeff Atwood.
He doesn't like PG or Joel Spolsky either!
Apr 07, 2014 · vinalia on Teaching Devina to Code
It might be fun to look at LOGO (maybe UCBLogo[1], free books included) for a first programming language. This has a first-person (turtle) view on a GUI that you move around to make shapes and do math/physics. The idea is that when programming it will be easier for the programmer to associate themselves with the turtle and interaction/exploration in the language will be natural.

The Logo way is pretty different from conventional programming models because it was tailored to be more intuitive than conventional languages like C, JavaScript, or VB. It still offers access to complex, higher order programming concepts like algorithms, AI, automata, etc. Harold Abelson from MIT (SICP) wrote a cool book that covers math/physics in Logo, too.[2]

The creator of the language has an awesome book[3] on how computers can enhance pedagogy and someone wrote a cool blog post on programming for children that mentioned it too[4].





I don't have a direct answer for you (still researching), but if you haven't read Mindstorms by Seymour Papert [1], I highly recommend it. It's generally about how computing can help kids learn problem solving in a variety of contexts, including several bits about the LOGO programming language. It's from '88, so it is definitely dated, but many of the concepts are pretty timeless.


You might also find the free community edition of LiveCode[1] handy. At a glance, looks very much like HyperCard. There is also Microsoft's Project Sienna[2] if you are on Windows 8.

[1] [2]

Oct 11, 2013 · joelhooks on Natural born programmers
I've been reading Papert's Mindstorms[1], which is a discussion on math education and the genesis of LOGO. If this topic interests you, I highly recommend the book.


Also, your friend should read Mindstorms:

One of the major themes is the relationship children have with mathematics and ways teachers can change it.

Reminds me of Mindstorms:

In the intro of the book Seymour A. Papert describes how gears provided an early concrete framework that made understanding abstract mathematical concepts presented at a later point much easier to visualize and apply.

Oct 28, 2012 · tel on Why aren't we doing the math?
(Also at:

I have a thesis that the kind of thinking required to survive med school is diametrically opposed to the kind of thinking required to do statistics well. It's the "rote pattern matching" versus "mathetic language fluency" issue that's at the heart of things like Papert's Constructivist learning theory[1] and it really causes me to have little surprise at an article like this. Doctors are (usually) viciously smart people who have to make a wide array of difficult decisions daily, but to operate at that level requires an intuition around a lot of cached knowledge, something I feel to be basically the opposite of statistical thought.

I don't think this is unique, either. It's the heart of Fisher's program to provide statistical tests as tools to decision-makers[2]. It's an undoubted success in providing general defense against coincidences to a wide audience, but it casts the deductive process needed in a pale light.

I think a principal component of the computer revolution is to provide more people with better insight into mathetic thought. Papert focuses on combinatorial examples in children in Mindstorms[3] but I think the next level is understanding information theory, distributions, and correlation on an intuitive level. MCMC sampling went an incredible way to helping me to understand these ideas and probabilistic programming languages are a great step toward making these ideas more available to the common public, but we also need great visualization (something far removed from today's often lazy "data viz").

Ideally, things like means and variances will be concepts that are stronger than just parameters of the normal distribution---which I feel is about as far as a good student in a typical college curriculum statistics class in a science or engineering major can go---but instead be tightly connected to using distributions accurately when thinking of complex systems of many interacting parts and using concentration inequalities to guide intuition.

I think the biggest driver of the recent popularization of Bayesian statistics is that distributions as a mode of thought is something quite natural to the human brain, but also something rather unrefined. People can roughly understand uncertainty about an outcome, but have a harder time with conjunctions or risk. How can we build tools that will teach people greater refinement of these intuitions?

[1] [2] [3]

Medicine took a huge leap forward around 1800 or so when people started collecting statistics on what worked and what didn't. Evidently the doctors' intuition was very, very wrong.
Impossible. The whole lesson of statistics is that computing probabilities is an intricate process. It will never be intuitive. I can learn to throw a ball at a target on intuition, but I will never learn to launch a rocket at Mars on intuition.

At best, it can become intuitive to ask the right skeptical questions when being shown a claim.

"Impossible" seems a broad claim - I don't see why it shouldn't be possible to put the information in a form, possibly decorated with details from a rigorous analysis, that makes pattern matching work. If the pattern matching is otherwise proving effective (itself an empirical claim, to be sure), we should be careful about teaching doctors not to pattern match.
That's an interesting viewpoint that I'd love to discuss more. I disagree, obviously, but want to know why you feel so strongly that intuitive statistical thought is impossible.

I feel like it's closely related to combinatorial thought. To again steal an example from Papert, he often talks about asking children to count the number of possible pairs of colors among marbles given to them. With some formal training it's easy to pare the problem down to the right information, and it's also easy to visualize the process. Given a variety of colored marbles, I imagine you could easily estimate the number of possible color pairs. Children cannot, and must learn to think that way at a certain point.

In the same way, conceptualizing uncertain events in the larger space of things that could happen, and becoming familiar with the extents and limitations of the causal models we all use, is a way of thinking that takes (today) a great deal of effort to acquire, but feels intuitive once you have it. I believe there's nothing inherently impossible about teaching it if the appropriate tools are available.
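Papert's marble example can be made concrete in a few lines. A minimal sketch (the specific color names are mine, not Papert's):

```ruby
# Count the unordered pairs of distinct colors you can form from four marbles.
colors = ["red", "green", "blue", "yellow"]

pairs = colors.combination(2).to_a
pairs.each { |a, b| puts "#{a} + #{b}" }
puts pairs.length  # C(4, 2) = 6 pairs
```

The formally trained reader jumps straight to "4 choose 2"; the point of the exercise is that a child has to discover a systematic enumeration like this one before that shortcut means anything.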

Med/Law = pattern recognition machines to detect statistical regularities. You show them a plane. Then give them another object and ask them whether or not it is a plane.

Math/Eng/Science = use of pattern recognition over a multitude of composable machines to create something new. You show them a combustion engine, steel frames, gears and vulcanised rubber wheels, then they connect it to the invention of bikes/trains to make a car.

Or "pattern recognition" versus "model building".
Disclaimer: I'm a teacher at Dev Bootcamp. Everything I say comes from love -- if I didn't care about this deeply I wouldn't bother to remark in the first place. :)

Very nice! We can always use more absolute-beginner tutorials in the world.

Just started the Ruby tutorial:

"Great! So what just happened here? To begin with, let's define what you typed 2 + 2 as instructions. The ruby console reads your typed instructions, interprets it, and then acts on the instructions the way you gave it. This step is called evaluation. Instruction when complete are called an expression. If the instructions you have typed are incomplete (for example, if you typed 2 + and hit enter), this console will show an error message. The instruction you typed, 2 + 2 is an expression. It is important to remember that evaluation of an expression always returns a value. This value is what is shown as result on the console. In this case, 4 was the result of the evaluation."

If someone honestly knows nothing about coding, you've probably just lost them right there. Questions a beginner would ask:

* What is a "ruby console?"

* What instructions? I typed "2+2."

* What does "interprets" mean?

* How can instructions be complete? Were they ever incomplete? How do I know when they're complete?

* "Returns?" "Value?" "Result?"

In a few sentences, with jargon highlighted:

I type *instructions* into a *console* which turn into *expressions* when they're complete. The console *evaluates* the expression and *returns* a *value*, which is shown as a *result* on the console.

A beginning programmer won't have the mental models necessary to make sense of any of that. It's context-free. Most of those words have no prior associations, and those that do will lead them astray. Metaphor is your friend.
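For readers who don't already have the Ruby context, the vocabulary under discussion maps onto code like this (a minimal sketch; the variable names are mine):

```ruby
# In Ruby, every complete expression evaluates to (returns) a value.
sum = 2 + 2                            # the expression 2 + 2 evaluates to 4
parity = sum.even? ? "even" : "odd"    # even a conditional is an expression

puts sum     # 4
puts parity  # even
```

Typed into the Ruby console (irb), each line's value is printed back as the result, which is exactly the evaluation loop the tutorial's paragraph is trying to describe.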

For reading, I recommend Bret Victor's essay on Learnable Programming and Mindstorms by Seymour Papert.

Thanks for the feedback! We're always iterating on our course content, so we'll keep this in mind.
Read Mindstorms before you do. :)
I second this. For anyone trying to teach computers (and anything else), this will save you from repeating all the mistakes that all the other computer programming education startups are making/will make in the future.

Again, just to be clear, this book is really really really really important.

Hope that landed. :)

Mindstorms? I did a quick google and there are assorted books with that title (after you filter out "lego"). None seemed to be relevant to this discussion.

Who wrote it? What year?

I linked to it at the end of my comment, where I also mentioned who wrote it.

Here it is, again:

Lego Mindstorms are named after it.

please stop using "iterating" in this way
Seymour Papert, in Mindstorms:

"By deliberately learning to imitate mechanical thinking, the learner becomes able to articulate what mechanical thinking is and what it is not. The exercise can lead to greater confidence about the ability to choose a cognitive style that suits the problem. Analysis of "mechanical thinking" and how it is different from other kinds and practice with problem analysis can result in a new degree of intellectual sophistication. By providing a very concrete down-to-earth model of a particular style of thinking, work with the computer can make it easier to understand that there is such a thing as a "style of thinking". And giving children the opportunity to choose one style or another provides an opportunity to develop the skill necessary to choose between styles. Thus instead of inducing mechanical thinking, contact with computers could turn out to be the best conceivable antidote to it. And for me what is the most important in this is that through these experiences these children would be serving their apprenticeships as epistemologists, that is to say learning to think articulately about thinking."

Astounding; this quote nails my experience. Learning to program de-programmed the mechanical, "proceed until error" non-thinking conditioning that plagued my entire adult life.
Superb, that's going straight into my text file of wise words.
More here:
I know I keep beating that horse.

But Seymour Papert's book "Mindstorms: Children, Computers, And Powerful Ideas" is great.

Seymour Papert's Mindstorms program. Read the book; it's brilliant.

Read Seymour Papert's book Mindstorms.

The main point is to have children do something they understand from the real world and have a physical relationship with. That way it won't feel as abstract.

Learning to build or repair a car would probably improve your understanding of thermodynamics, aerodynamics, momentum, etc. Likewise, writing a computer program that simulates the motion of a planet around a star, or that renders 3D graphics, might improve your understanding of classical mechanics and any number of topics in math, just to name a few examples.
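As a sketch of what such a planet program might look like (semi-implicit Euler integration, with made-up units chosen so that G times the star's mass equals 1; none of this comes from the original comment):

```ruby
# Semi-implicit Euler integration of a planet orbiting a star at the origin.
# Units are chosen so that G * M_star = 1.
gm = 1.0
x,  y  = 1.0, 0.0   # starting position
vx, vy = 0.0, 1.0   # starting velocity: gives a circular orbit of radius 1
dt = 0.001

1000.times do
  r3 = (x * x + y * y) ** 1.5
  vx += -gm * x / r3 * dt   # update velocity from gravitational acceleration
  vy += -gm * y / r3 * dt
  x  += vx * dt             # then update position with the *new* velocity
  y  += vy * dt
end

radius = Math.sqrt(x * x + y * y)
puts radius.round(4)   # stays close to 1.0: the orbit is stable
```

Even this toy forces you to confront the inverse-square law, vectors, and numerical error (updating velocity before position keeps the orbit from spiraling), which is exactly the kind of physical intuition the comment describes.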

This is not to mention that learning how to program a computer is just another tool to put in your bags of tricks for solving problems in any of the domains you mentioned (some better suited than others, of course).

HN Books is an independent project and is not operated by Y Combinator or
~ [email protected]