HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Brain Computations: What and How

Edmund T. Rolls · 5 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Brain Computations: What and How" by Edmund T. Rolls.
View on Amazon [↗]
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
In order to understand how the brain works, it is essential to know what is computed by different brain systems, and how those computations are performed. Brain Computations: What and How elucidates what is computed in different brain systems and describes current computational approaches and models of how each of these brain systems computes. This approach has enormous potential for helping us understand ourselves better in health. Potential applications of this understanding are to the treatment of the brain in disease, as well as to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. Pioneering in its approach, Brain Computations: What and How will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
HN Books Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
If you want to understand how the brain works, this is a good intro with some realistic neuronal network models (spoiler: these have nothing to do with "artificial neural nets" as we know them): https://www.amazon.com/Brain-Computations-Edmund-T-Rolls/dp/...
> What area of machine learning do you feel is closer to how natural cognition works?

None. The prevalent ideas in ML are a) "training" a model via supervised learning, and b) optimizing model parameters via function minimization/backpropagation/delta rule.

There is no evidence for trial-and-error iterative optimization in natural cognition. If you tried to map it to cognition research, the closest thing would be the behaviorist theories of B.F. Skinner from the 1930s. These theories of 'reward and punishment' as a primary mechanism of learning have long been discredited in cognitive psychology. It's a black-box, backwards-looking view that disregards the complexity of the problem (the most thorough and influential critique of this approach was Chomsky's, back in the 50s).
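The "function minimization/delta rule" optimization described above can be sketched in a few lines: a single linear unit trained by gradient descent. The data, learning rate, and epoch count here are made up for illustration.

```python
# Minimal "delta rule" sketch: a single linear unit, squared error,
# parameters nudged against the error gradient on each example.
# Data, learning rate, and epoch count are illustrative.
def delta_rule(examples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            y = w * x + b        # forward pass
            err = y - target     # prediction error
            w -= lr * err * x    # gradient step for the weight
            b -= lr * err        # gradient step for the bias
    return w, b

# Fit y = 2x + 1 from three points
w, b = delta_rule([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

This is exactly the backwards-looking, error-driven loop being criticized: the rule only ever reacts to the discrepancy between prediction and target.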

The ANN model that goes back to the McCulloch & Pitts paper is based on the neurophysiological evidence available in 1943. The ML community largely ignores the fundamental neuroscience findings discovered since (for a good overview see https://www.amazon.com/Brain-Computations-Edmund-T-Rolls/dp/... )

I don't know if it has to do with arrogance or ignorance (or both), but the way "AI" is currently developed is by inventing arbitrary model contraptions with complete disregard for the constraints and inner workings of living intelligent systems, basically throwing things at the wall until something sticks, instead of learning from nature the way, say, physics does. Saying "but we don't know much about the brain" is just being lazy.

The best description of biological constraints from a computer science perspective is in Leslie Valiant's work on the "neuroidal model" and his book "Circuits of the Mind" (he is also the author of the PAC learning theory influential in ML theorist circles): https://web.stanford.edu/class/cs379c/archive/2012/suggested... , https://www.amazon.com/Circuits-Mind-Leslie-G-Valiant/dp/019...

If you're really interested in intelligence, I'd suggest starting with the representation of time and space in the hippocampus via place cells, grid cells and time cells, which form a sort of coordinate system for navigation, in both real and abstract/conceptual spaces. This will likely have the same importance for actual AI as the Cartesian coordinate system has in other hard sciences. See https://www.biorxiv.org/content/10.1101/2021.02.25.432776v1 and https://www.sciencedirect.com/science/article/abs/pii/S00068...
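As a toy illustration of why periodic codes at several scales make a good coordinate system (loosely inspired by grid-cell modules; the periods and the brute-force decoder below are invented for this sketch, not taken from the papers):

```python
# Toy "grid code": several periodic responses at different (coprime)
# scales jointly pin down a 1-D position, the way residues modulo
# coprime periods identify an integer. Purely illustrative.
PERIODS = (3, 4, 5)

def encode(pos):
    """Phase of pos within each grid 'module'."""
    return tuple(pos % p for p in PERIODS)

def decode(phases):
    """Recover the position from its phases by brute force."""
    for pos in range(3 * 4 * 5):      # 60 distinguishable positions
        if encode(pos) == phases:
            return pos
    return None

# A single period of 5 covers only 5 positions; three small modules
# together disambiguate 60 -- the coding advantage of multiple scales.
```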

Also see research on temporal synchronization via "phase precession" as a hint at how lower-level computational primitives work in the brain: https://www.sciencedirect.com/science/article/abs/pii/S00928...

And generally look into memory research in cogsci and neuro; learning and memory are highly intertwined in natural cognition, and you can't really talk about learning before understanding lower-level memory organization, formation and representational "data structures". Here are a few good memory labs to seed your firehose:

https://twitter.com/MemoryLab

https://twitter.com/WiringTheBrain

https://twitter.com/TexasMemory

https://twitter.com/ptoncompmemlab

https://twitter.com/doellerlab

https://twitter.com/behrenstimb

https://twitter.com/neurojosh

https://twitter.com/MillerLabMIT

KKKKkkkk1
> I don't know if it has to do with arrogance or ignorance (or both) but the way "AI" is currently developed is by inventing arbitrary model contraptions

Deep learning is incredibly successful in solving certain real-world problems such as detecting and recognizing faces in photos, transcribing speech, and translating text. It's true that some trolls claim that gradient descent is how the brain works [1]. But if you open almost any machine learning textbook, you'll see on one of the first pages an acknowledgement that the methods do not agree with modern neuroscience (while still being incredibly useful).

[1] https://twitter.com/ylecun/status/1202013026272063488

sillysaurusx
For what it’s worth, I agree with this take. But I think RL isn’t completely orthogonal to the ideas here.

The missing component is memory. Once models have memory at runtime, once we get rid of the training/inference separation, they'll be much more useful.

andyxor
not sure about RL, but ANNs, even in their current brute-force form, can be used as a pre-processing/dimensionality-reduction/autoencoder layer in a content-addressable memory model, such as SDM by Kanerva, which does have some biological plausibility: https://en.wikipedia.org/wiki/Sparse_distributed_memory
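A bare-bones sketch of Kanerva's SDM idea: random hard locations, activation within a Hamming radius, counters incremented on write, and a majority vote on read. The sizes and radius below are illustrative, not tuned.

```python
import random

# Bare-bones sparse distributed memory (after Kanerva). Random hard
# locations; an address activates every location within RADIUS of it;
# writes accumulate counters; reads take a majority vote.
N_BITS, N_LOCS, RADIUS = 64, 500, 26
random.seed(0)
hard_locs = [[random.randint(0, 1) for _ in range(N_BITS)]
             for _ in range(N_LOCS)]
counters = [[0] * N_BITS for _ in range(N_LOCS)]

def active(addr):
    """Indices of hard locations within RADIUS of addr."""
    return [i for i, loc in enumerate(hard_locs)
            if sum(a != b for a, b in zip(addr, loc)) <= RADIUS]

def write(addr, data):
    for i in active(addr):
        for j, bit in enumerate(data):
            counters[i][j] += 1 if bit else -1

def read(addr):
    sums = [0] * N_BITS
    for i in active(addr):
        for j in range(N_BITS):
            sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

# Content-addressable recall: a noisy cue retrieves the stored pattern
pattern = [random.randint(0, 1) for _ in range(N_BITS)]
write(pattern, pattern)              # autoassociative storage
noisy = pattern[:]
for j in random.sample(range(N_BITS), 4):
    noisy[j] ^= 1                    # corrupt 4 bits of the cue
recalled = read(noisy)
```

The content-addressability is the point: the corrupted cue activates mostly the same hard locations as the original, so the vote recovers the clean pattern.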

Also, the 'neocognitron' by Fukushima, which is the basis of CNNs, was inspired by actual neuroscience findings from the visual cortex in the 70s (and, speculatively, maybe that's why it works so well in computer vision). So deep learning might have some complementary value as a representation of lower-level sensory processing modules, e.g. in V1; what's missing is a computational model of the hippocampus and "the rest of the f..g owl".
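The neocognitron idea that survives in CNNs, a local feature detector replicated across the visual field, can be shown in miniature (the toy image and kernel below are purely illustrative):

```python
# Toy 2-D convolution: one local feature detector (a vertical-edge
# kernel) slid over a tiny "image" -- the neocognitron/CNN idea of
# replicated local detectors, in miniature. Values are illustrative.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical edge between dark (0) and bright (1) columns...
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]] * 2       # responds to a left-to-right increase
response = conv2d(image, kernel)
# ...is detected at the same offset in every row (weight sharing).
```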

bobberkarl
just to say this is the kind of answer that makes HN an oasis on the internet.
unishark
The place/grid/etc. cells fall generally under the topic of cognitive mapping. And people have certainly tried to use it in AI over the decades, including recently, when the neuroscience won the Nobel Prize. But in the niches where it's an obvious thing to try, if you can't even beat ancient ideas like Kalman and particle filters, people give up and move on. Jobs where you make models that don't do better at anything except showing interesting behavior are computational neuroscience jobs, not machine learning, and are probably just as rare as any other theoretical science research position.
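For reference, the "ancient" Kalman filter baseline mentioned above fits in a dozen lines in its scalar form (the noise variances here are illustrative):

```python
# Minimal 1-D Kalman filter: track a scalar state from noisy
# measurements. q = process noise variance, r = measurement noise
# variance; both values are illustrative, not tuned.
def kalman_1d(measurements, q=1e-3, r=0.25):
    x, p = measurements[0], 1.0    # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                     # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update toward the measurement
        p *= (1 - k)               # uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy readings of a constant true value 5.0 settle near 5.0
readings = [5.4, 4.7, 5.2, 4.9, 5.1, 4.8, 5.3, 5.0]
est = kalman_1d(readings)
```

This is the bar a cognitive-mapping-inspired model has to clear in those applied niches.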

There is a niche of people trying to combine cognitive mapping with RL, or indeed arguing that old RL methods are actually implemented in the brain. But it looks like they don't have much benefit to show in applications for it. They seem to have no shortage of labor or collaborators at their disposal to attempt and test models. It certainly must be immensely simpler than rat experiments.

Having said that, yes, I do believe that progress can come from considering how nature accomplishes the solution and what major components we are still missing. But common-sense-driven tacking them on has certainly been tried.

Brain Computations, What and How, by Edmund T. Rolls https://www.amazon.com/Brain-Computations-Edmund-T-Rolls/dp/...
There is no evidence of back-propagation in the brain.

See Professor Edmund T. Rolls' books on biologically plausible neural networks:

"Brain Computations: What and How" (2020) https://www.amazon.com/gp/product/0198871104

"Cerebral Cortex: Principles of Operation" (2018) https://www.oxcns.org/b12text.html

"Neural Networks and Brain Function" (1997) https://www.oxcns.org/b3_text.html

ShamelessC
"There is just one problem: [biological neural networks] are physically incapable of running the backpropagation algorithm."

From the linked article.

blueyes
I read that sentence. The article is not the only source of truth on brain function, and its author may be too certain about the brain. In any case, there will always be dissimilarities between biological neurons and computations on silicon, which probably shouldn't be called neurons, in order to avoid confusion.
ShamelessC
I agree. Really don't appreciate the level at which researchers are willing to make these comparisons right now. They're moving fast and publishing things.

It's probably far too late to change the name for computational neural nets, but I agree. Something like a "differentiable learning graph" would be better.

HN Books is an independent project and is not operated by Y Combinator or Amazon.com.