HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines

Jeff Hawkins, Sandra Blakeslee · 14 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines" by Jeff Hawkins, Sandra Blakeslee.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
From the inventor of the PalmPilot comes a new and compelling theory of intelligence, brain function, and the future of intelligent machines. Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke, with a new understanding of intelligence itself. Hawkins develops a powerful theory of how the human brain works, explaining why computers are not intelligent and how, based on this new theory, we can finally build intelligent machines. The brain is not a computer, but a memory system that stores experiences in a way that reflects the true structure of the world, remembering sequences of events and their nested relationships and making predictions based on those memories. It is this memory-prediction system that forms the basis of intelligence, perception, creativity, and even consciousness. In an engaging style that will captivate audiences from the merely curious to the professional scientist, Hawkins shows how a clear understanding of how the brain works will make it possible for us to build intelligent machines, in silicon, that will exceed our human ability in surprising ways. Written with acclaimed science writer Sandra Blakeslee, On Intelligence promises to completely transfigure the possibilities of the technology age. It is a landmark book in its scope and clarity.
HN Books Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
For people wanting to look into HTM (Hierarchical Temporal Memory), do check out Numenta's main website [1], in particular the papers [2] and videos [3] sections.

Otherwise, HTM inventor Jeff Hawkins' book "On Intelligence" is one of the top 3 or so most fascinating books I've ever read. It doesn't cover HTM, though; it's about how the brain works at a conceptual level, explained in a way I haven't seen anyone else manage. Jeff clearly has an ability to see the forest for the trees in a way that is not commonly found. This is one of the reasons I think HTM might be on to something, although it of course has to prove itself in real life too.

But we should remember how long classic neural networks were NOT particularly successful, and were almost dismissed by a lot of people (including my university teacher, who was rather skeptical about them when I took an ML course about 12 years ago, even though I personally believed in them a lot). We had to "wait" for years and years until enough people had put in enough work to figure out how to make them really shine.

[1] https://numenta.org/

[2] https://numenta.com/neuroscience-research/research-publicati...

[3] https://www.youtube.com/user/OfficialNumenta

[4] https://www.amazon.com/Intelligence-Understanding-Creation-I...

Edit: Fixed book link.

querez
> we should remember how long classic neural networks were NOT particularly successful

We already knew in the late '80s/early '90s that neural networks were universal function approximators, and there was an era back then when neural nets were VERY successful (or at least: very influential among the ML circles of their day). Sure, they were dismissed once kernel machines came about, simply because those had more to offer at the time. But it would be a mistake to compare HTM with classical neural networks: neural nets were always known to do something sensible and to "work", even if they were not the state-of-the-art method.
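For reference, the universal approximation property mentioned here is the classical result of Cybenko (1989) and Hornik (1991); a standard informal statement, added here for context rather than taken from the comment, is:

```latex
% Universal approximation (informal): a single hidden layer with enough units
% can uniformly approximate any continuous function on a compact domain.
\[
\forall f \in C(K),\ \forall \varepsilon > 0\ \ \exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N}:
\quad \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} v_i\,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon,
\]
where $K \subset \mathbb{R}^d$ is compact and $\sigma$ is a sigmoidal activation
(later work relaxed this to any non-polynomial activation).
```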

In stark contrast, HTM has been "out there" for over a decade by now, with (as far as I know) not a single tangible result, neither theoretical nor practical. They never managed to cobble together even a single paper with credible results, even though they came out with it right at the time when connectionist approaches became popular again (yes, there were papers, but there's a reason they only got published in 2nd- or 3rd-tier venues). From where I stand, it's a "hot air" technology that somehow stays afloat because the person behind it knows how to write popular science books. Every researcher I know who tried to make HTM work came away with the same conclusion: it just doesn't.

eihli
"Everyone researcher I know who tried to make HTM work came away with the same conclusion"

Is there anything you can share about this? I'd like to read more about how and why researchers came to that conclusion. Thanks.

musingsole
I was a graduate researcher implementing HTM in a hardware accelerator. The largest problem is that there was never any sort of specification for HTM outside of a white paper that only vaguely described its internal structures. And the picture the white paper painted is a design with N^N different hyperparameters.

Oh, and then the whole thing output a jumble of meaningless bits that had to be classified, an algorithm Numenta kept hidden away as the secret sauce... but if you have to use a NN classifier to understand the results of your HTM... Too many red flags of snake oil. And I really wanted it to work. It doesn't help that Jeff Hawkins has largely abandoned HTM for new cognitive-algorithm pursuits.
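To make the "opaque bits plus a separate classifier" complaint concrete, here is a heavily simplified toy sketch of that kind of pipeline (encoder, spatial pooler, classifier). The encoder, pooler, and overlap classifier below are illustrative stand-ins written for this page, not Numenta's actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_scalar(x, size=400, active=21, lo=0.0, hi=100.0):
    """Toy scalar encoder: represent x as a run of `active` ON bits whose
    position within `size` bits reflects x's value (an SDR-style encoding)."""
    start = int((size - active) * (x - lo) / (hi - lo))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[start:start + active] = 1
    return sdr

class ToySpatialPooler:
    """Stand-in for HTM's spatial pooler: a fixed random projection followed
    by top-k winner-take-all, producing a sparse output whose individual bits
    carry no human-readable meaning."""
    def __init__(self, in_size=400, out_size=512, winners=20):
        self.proj = rng.random((out_size, in_size))
        self.out_size = out_size
        self.winners = winners

    def compute(self, sdr):
        overlap = self.proj @ sdr
        out = np.zeros(self.out_size, dtype=np.uint8)
        out[np.argsort(overlap)[-self.winners:]] = 1
        return out

class OverlapClassifier:
    """The extra decoding step the comment complains about: map opaque output
    SDRs back to labels by nearest-neighbor overlap with stored examples."""
    def __init__(self):
        self.examples = []  # list of (pooled_sdr, label) pairs

    def learn(self, sdr, label):
        self.examples.append((sdr, label))

    def infer(self, sdr):
        return max(self.examples, key=lambda ex: int(ex[0] @ sdr))[1]

pooler = ToySpatialPooler()
clf = OverlapClassifier()
for value, label in [(5.0, "low"), (50.0, "mid"), (95.0, "high")]:
    clf.learn(pooler.compute(encode_scalar(value)), label)

print(clf.infer(pooler.compute(encode_scalar(52.0))))  # expected: "mid"
```

The point of the sketch is the last stage: the pooled output bits mean nothing on their own, so a separate decoder trained on (output, label) pairs is needed to get an interpretable answer back out.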

I'm curious how close the research community is to general AI

Nobody knows, because we don't know how to do it yet. There could be a "big breakthrough" tomorrow that more or less finishes it out, or it could take 100 years, or - worst case - Penrose turns out to be right and it's not possible at all.

Also, are there useful books, courses or papers that go into general AI research?

Of course there are. See:

https://agi.mit.edu

https://agi.reddit.com

http://www.agi-society.org/

https://opencog.org/

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Engineering-General-Intelligence-Part...

https://www.amazon.com/Artificial-General-Intelligence-Cogni...

https://www.amazon.com/Universal-Artificial-Intelligence-Alg...

https://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0...

https://www.amazon.com/Intelligence-Understanding-Creation-I...

https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657...

https://www.amazon.com/Unified-Theories-Cognition-William-Le...

https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Ma...

https://www.amazon.com/Singularity-Near-Humans-Transcend-Bio...

https://www.amazon.com/Emotion-Machine-Commonsense-Artificia...

https://www.amazon.com/Physical-Universe-Oxford-Cognitive-Ar...

See also the work on various "Cognitive Architectures", including SOAR, ACT-R, CLARION, etc.,

https://en.wikipedia.org/wiki/Cognitive_architecture

"Neuvoevolution"

https://en.wikipedia.org/wiki/Neuroevolution

and "Biologically Inspired Computing"

https://en.wikipedia.org/wiki/Biologically_inspired_computin...

hhs
These are useful references, thanks.
I suggest you read the book "On Intelligence" by Jeff Hawkins, on a similar topic:

https://www.amazon.com/Intelligence-Understanding-Creation-I...

arxpoetica
Jeff Hawkins dismisses the metaphysical in his theories. Is this true of the neuroscience world in general?
_Schizotypy
Well, depending on what is defined as "metaphysical", there is scant if any evidence. Scientists tend to dismiss things when there is no supporting evidence.
taneq
Has anyone managed to do anything serious with HTMs yet? They talk them up something massive, but no one seems to use them for anything.
stiangrindvoll
Thanks for the suggestion, ordered it ;)

Related video by the author that was also interesting: https://www.ted.com/talks/jeff_hawkins_on_how_brain_science_...

Sources:

1. On Intelligence: http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

2. https://www.quora.com/Whats-it-like-to-have-a-150-IQ-Is-life...

3. Some article shared in the MENSA group years ago that I'd have to dig in to find.
You should read the book On Intelligence by Jeff Hawkins. Pay attention to Chapter 6, "How the Cortex Works". Our "consciousness" is derived from the same stuff in all animals; we just have more cortical layers. We also have fuzzy algorithms which allow the brain to recognize patterns and associate x with y.

http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

I think to postulate that there's a yet undiscovered subatomic elementary particle that gives rise to consciousness is hogwash. "Consciousness" or awareness of self has been shown in other animals. Humans are distinct in their ability to couple self-awareness and toolmaking.

There is no "human energy"; it's the same material as in all other animals, we just have more of it. The "human energy" could be classified as the distinct fuzzy algorithms found in humans which aid pattern recognition.

avera
Not one elementary particle, but the interaction of many of them.

Yes, animals have that same substance, but at a lesser scale.

I make a distinction between pure intellect, which a computer can do fine, and another component, which I doubt can be accessed by a computer without connecting silicon to biological material.

There's nothing about the idea of physical consciousness that says it has to be a continuum -- there could just be some critical mass or qualitative attribute of brains that puts us "over the threshold", so to speak. Nobody can give any kind of a definitive answer. For ideas about a "continuum" of consciousness, you might read Phi:

http://www.amazon.com/Phi-A-Voyage-Brain-Soul-ebook/dp/B0078...

Or for other views, you might check out V. S. Ramachandran (neuroscience): http://www.amazon.com/Brief-Tour-Human-Consciousness-Imposto...

Jeff Hawkins (computer science): http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

Hofstadter (mathematics, cognitive science): http://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/...

Those are some of my favorite popular-press books on the subject.

astrocyte
* Read Jeff Hawkins (On Intelligence)

* Have GEB (Covered enough of it)

My point was to bring out the implicit belief that there is:

> Some critical mass or qualitative attribute of brains that puts us "over the threshold"

> If consciousness is indeed a variable quantity, then every single able bodied adult human has more "units" of consciousness than, say, a dog.

> The variation within a species is also probably pretty small compared to the gap between species.

None of which is proven in any scientific way, yet many believe it to be the truth. We haven't even resolved what consciousness is, let alone its range of existence.

> Just like intelligence, the amount of consciousness that a person has is not a measure of their value.

And yet, one draws lines to distinguish human intelligence/consciousness from that of, say, a dog.

Feb 20, 2014 · applecore on AI
If you're interested, you should read On Intelligence[1] by Jeff Hawkins (inventor of the Palm Pilot). In it, Hawkins presents a compelling theory of how the human brain works and how we can finally build intelligent machines. In fact, Andrew Ng's deep learning research builds on Hawkins's "one algorithm" hypothesis.

[1]: http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

Aug 22, 2013 · Fuzzwah on Don't Fly During Ramadan
Memory is a funny thing. You don't recall a snapshot of all the details at once. You start telling a story, and during the playback of the memories you'll be able to recall deep details.

Reading On Intelligence really solidified my thoughts on how memory (probably) works.

http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

jff
Well, as court witnesses often show, you start telling a story and you fill in deep details. Sometimes from memory, sometimes from thin air.
Fuzzwah
Very good counterpoint. I completely agree. I did say it was a funny thing.
Aug 08, 2013 · chetan51 on Emergent Intelligence
Actually, there's a growing amount of evidence that there's a single, general-purpose algorithm in the human brain that gives rise to intelligence. For one, there's the fact that every part of the brain looks and behaves the same. There's also the fact that the brain is very plastic in what it learns – the auditory cortex can learn to "see" if the signals from the eyes are rewired away from the visual cortex to the auditory cortex. It's very unlikely that our brain is hard-wired to recognize faces, for instance, but rather that it learns to do so using this generic learning algorithm.

I urge you to watch Andrew Ng's talk that I linked to in the post, and read On Intelligence (http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...) by Jeff Hawkins, a book that totally changed the way I look at intelligent behavior.

oh_teh_meows
Yep, I've seen his talk. It's quite fascinating. However, what you're talking about is a learning algorithm, which does not necessarily equate with intelligence. OpenCyc would be the best example that illustrates my point. Edit: on second thought, you probably meant to say that given such a general-purpose learning algorithm, and a suitable environment, the algorithm would in time learn enough to produce intelligence of some kind (of what kind, I'm not sure) that's capable of thinking. In that case, I agree with you, and I'll have to revise my opinion, but I'm still not sure if it qualifies as an emergent phenomenon arising from simple rules. An analogy would be Google's search algorithm running on huge amounts of data. Would you call the search results an emergent phenomenon arising from simple rules?
chetan51
The most concrete version of my point is that I don't think the most powerful AI we'll create will have, for instance, a human-coded algorithm for detecting faces. Instead, it'll have the ability to read electrical signals from a camera and understand the changing patterns in them, including the presence of faces. This ability to understand changing patterns would be due to "simpler" rules than the rules specifically designed to understand faces.

So yes, a general purpose learning algorithm, using the correct paradigm, would learn to think in a way as powerful as we do. And it'll do so in a way that its programmers would never be able to predict.

In the same vein, I would say that Google's search results are an emergent phenomenon, albeit not quite as interesting as general-purpose intelligence. This is because it's intractable to predict what Google will return for certain queries, even if we know all of its rules. Keep in mind that there are degrees of emergence; it's not black and white. (On the other hand, I don't think Google's algorithm is as "simple" as it originally was, but that's for another discussion.)

There is a very quick reference to the person who inspired him, Jeff Hawkins, whose book is worth a read:

http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...

Edit: update link

netrus
Link does not load CSS for me, but this does:

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/B000GQLCV...

kinofcain
Thanks, had stripped the tracking codes off the mobile link, but it doesn't load right on desktop.
lomendil
I just saw Jeff Hawkins give a talk and it was quite interesting. I was a bit worried, however, that he is basing his theory of intelligence on the human neocortex, while claiming to go after general principles.

This is guaranteed not to be terribly general, considering the many bits of matter on this planet that exhibit intelligence without a neocortex. By many, I mean ones that hugely outnumber humans.

So very interesting stuff, but not the answer that I think he wants it to be.

Lambdanaut
In "On Intelligence" he postulates that he's not necessarily going for a "human-like" intelligence or even a "life-like" one.

Basically he just wants something that's very good at recognizing patterns over time, which I can imagine the neocortex would be great at.

Though, he also references the thalamus and hippocampus a lot in the book, as very important parts of the brain for his framework. [http://en.wikipedia.org/wiki/Memory-prediction_framework#Neu...]

blhack
That book will change the way that you look at yourself.
thomaspaine
Grok (formerly Numenta) has a slightly more technical white paper that goes into more detail on the actual algorithms from On Intelligence:

https://www.groksolutions.com/htm-overview/education/HTM_Cor...

There are some subtle differences between HTMs and straight up deep learning, mainly the requirement for HTM data to be temporal and spatial.

I know Andrew used to sit on an advisory committee at Numenta, I don't know if he still does.

Yes, that's a closer description than what I recall. My understanding is based on a book I read several years ago: http://www.amazon.com/On-Intelligence-Jeff-Hawkins/dp/080507...
On Intelligence by Jeff Hawkins.

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/080507853...

Hawkins was the founder of both Palm and Handspring, but comes from an academic background where he studied neuropsychology (how the brain works). The book is a thorough look at how the brain experiences the world, with the goal being the creation of artificially intelligent machines.

It'll change the way you perceive perception (if that makes any sense).

I just finished listening to the audiobook of On Intelligence by Jeff Hawkins:

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/080507853...

(Hopefully I get this right... was pretty sleep deprived while listening to it.) In the book he describes the core part of intelligence as prediction - our brain is constantly making predictions from sensory inputs. While our brains do take in huge amounts of data, Hawkins' theory suggests that it is being operated on in massive parallelism - with different predictions happening simultaneously. Thus it isn't so much that all that data is being thrown away, as it is all being checked against known patterns from previous memories. He lays out reasoning for how intelligence, creativity, etc. are formed on top of this prediction model. It is a very interesting read.
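As a rough illustration of the prediction idea described above (a toy sketch of my own, not the book's actual model), a system that simply memorizes which input tends to follow which can predict the next input and register "surprise" when that prediction fails:

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy memory-prediction loop: remember observed transitions between
    symbols, predict the most frequent successor of the current symbol,
    and report surprise when the actual input doesn't match the prediction.
    An illustration of the idea in the comment, not Hawkins' model."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # symbol -> successor counts
        self.prev = None

    def observe(self, symbol):
        surprise = False
        if self.prev is not None:
            predicted = self.predict(self.prev)
            surprise = predicted is not None and predicted != symbol
            self.transitions[self.prev][symbol] += 1  # learn the transition
        self.prev = symbol
        return surprise

    def predict(self, symbol):
        followers = self.transitions[symbol]
        return followers.most_common(1)[0][0] if followers else None

mem = SequenceMemory()
for s in "abcabcabc":       # learn a repeating pattern
    mem.observe(s)
print(mem.predict("b"))     # 'c': the learned expectation
print(mem.observe("a"))     # False: 'a' was predicted to follow 'c'
print(mem.observe("x"))     # True: 'x' violates the prediction ('b' expected)
```

The brain's version is vastly richer (hierarchical, massively parallel, operating on sensory patterns rather than symbols), but the predict-then-compare loop is the part the comment describes.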

Humans are superior to machines in several ways:

- we get tons of data, just not all textual. We have visual (~30fps in much bigger than HD resolution all day long), audio (again, better than CD quality all day long), smell, taste, and touch, not to mention internal senses (balance, pain, muscular feedback, etc). By the time a baby is 6 months old, she's seen and processed a lot of data. Don't know if it's more than Google's 18B pages, but it's a lot.

- we get correlated data. Google has to use a ton of pages for language because it only gets usage, not context. Much (most?) of the meaning in language comes from context, but using text you only get the context that's explicitly stated. Speech is so economical because humans get to factor in the speaker, the relationship with the speaker, body language, tone of voice, location, recent events, historical events, shared experiences, etc, etc, etc. Humans have a million ways to evaluate everything they read or hear, and without that, you need a ton of text to make sure you cover those situations.

- we have a mental model. Everything we do or learn adds to the model we have of the world, either by explicit facts (a can of Coke has 160 calories) or by relative frequencies (there are no purple cows but a lot of brown ones). My model of automobile engines is very crude and inaccurate while my model of programming is very good. Also, because I have (or can build) a model, I have a way to evaluate new data. Does this add anything to a part of my model (pg's essays did this for me)? Does it confirm a part of the model that wasn't sure (more experimental data)? Does it contradict a weakly held belief? Does it contradict a strongly held belief? Is it internally consistent? Is the source trustworthy?

This mental model might just be a bunch of statistically relevant correlations, but that sounds like neurons with positive or negative attractions of varying strength. Kind of like a brain. I believe Jeff Hawkins is on to something (see On Intelligence http://www.amazon.com/o/asin/0805078533/pchristensen-20), but there needs to be correlated data (like vision/hearing/touch are correlated) and the ability to evaluate data sources.
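One way to make "evaluate new data against a model" concrete (my gloss here, not something the comment spells out) is a simple Bayesian update in which the strength of an existing belief and the trustworthiness of the source jointly decide how far a new claim moves that belief:

```python
def update_belief(prior, supports, source_reliability):
    """Bayesian update of P(belief is true) given one new claim.
    `supports` is True if the claim agrees with the belief, False if it
    contradicts it; `source_reliability` is P(source reports correctly).
    A strong prior barely moves for a weak source; a weak prior moves a lot."""
    p_claim_if_true = source_reliability if supports else 1 - source_reliability
    p_claim_if_false = 1 - source_reliability if supports else source_reliability
    evidence = prior * p_claim_if_true + (1 - prior) * p_claim_if_false
    return prior * p_claim_if_true / evidence

# A strongly held belief contradicted by a mediocre source: small change.
print(round(update_belief(0.95, supports=False, source_reliability=0.6), 3))  # ~0.927
# A weakly held belief confirmed by a trustworthy source: big change.
print(round(update_belief(0.55, supports=True, source_reliability=0.9), 3))   # ~0.917
```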

I agree that if humans can do it, machines can do it, but I think you're vastly underestimating the amount and quality of data humans get.

evgen
Don't want to be pedantic here, but your info on our visual bandwidth is a bit out of date. We actually only process about 10M/sec of visual data. Your brain does a very good job of fooling your conscious self, but what you are perceiving as HD-quality resolution is actually only gathered in the narrow cone of your current focal point. The rest of what you "see" is of much lower bandwidth and mostly a mental trick. We also don't store very much of this sensory data for later processing.
pchristensen
Yeah, I knew all that but my comment was already pretty long. Still, 10M/sec * every waking hour of life is still a lot of data.
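Reading the "10M/sec" above as roughly 10 megabytes per second (the comments don't give units; if it means megabits, scale everything down by 8) and assuming about 16 waking hours a day, the back-of-envelope total is indeed large:

```python
# Back-of-envelope estimate; the 10 MB/s figure is quoted from the comment above,
# while the 16 waking hours/day and the MB (not Mbit) reading are assumptions.
mb_per_second = 10
waking_hours_per_day = 16

gb_per_day = mb_per_second * waking_hours_per_day * 3600 / 1000
tb_per_year = gb_per_day * 365 / 1000

print(f"{gb_per_day:.0f} GB per day")    # 576 GB of processed visual data per day
print(f"{tb_per_year:.0f} TB per year")  # ~210 TB per year
```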
fauigerzigerk
Yes I think you do have a point, but I don't think it's about things like visual resolution and the amount of data it generates. It may be about the much greater variety of data we see and about our ability to experiment and interact with the world around us in order to test our beliefs.

So maybe you could say it's about the quality of information not just the amount of data of one particular kind.

In any event, this is a debate that is only at the very beginning. I don't claim to have come to a conclusion. I just think those brute force statistical techniques are not the end of the road but rather a practical workaround for the brittleness and the complexity of traditional rule based systems.

HN Books is an independent project and is not operated by Y Combinator or Amazon.com.