HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
11. Mind vs. Brain: Confessions of a Defector

MIT OpenCourseWare · YouTube · 32 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention MIT OpenCourseWare's video "11. Mind vs. Brain: Confessions of a Defector".
YouTube Summary
MIT 6.868J The Society of Mind, Fall 2011
View the complete course: http://ocw.mit.edu/6-868JF11
Instructor: Marvin Minsky

In this lecture, students discuss Chapter 8 from The Emotion Machine, covering what "genius" is, and how it is distinguished from everyday thinking, as well as ways to learn from mistakes.

License: Creative Commons BY-NC-SA
More information at http://ocw.mit.edu/terms
More courses at http://ocw.mit.edu
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I first learned about the C. elegans neuron-mapping project from this Society of Mind video:

https://www.youtube.com/watch?v=6Px0livk6m8

My immediate interest was in seeing the differences and similarities between the real and the simulated worm. I haven't spent much time searching for resulting papers, but it's been 7 years since then and I'm not aware of any ground-breaking publications on the subject.

Unless I'm missing something massive, describing this paper as training the worm to "balance a pole at the tip of its tail" is highly misleading.

In this paper, researchers use an external algorithm to tweak the parameters of a part of the worm's neural model until that part can perform a certain task. The neural circuit effectively serves as a controller for a mechanism that has nothing to do with the original worm. The task, the setup, the subset of the model, and the training algorithms are all chosen by the researchers.
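The setup described above — an external search loop perturbing the parameters of a fixed circuit until it happens to solve a control task — can be sketched in a few lines. This is a toy illustration, not the paper's actual method: the one-dimensional pole dynamics, the two-weight linear "circuit", and the hill-climbing search are all simplifying assumptions.

```python
import math
import random

def simulate(weights, steps=200, dt=0.02):
    """Run a crude inverted-pendulum episode; return how many steps the pole stays up."""
    theta, omega = 0.05, 0.0              # pole angle (rad) and angular velocity
    for t in range(steps):
        # Fixed two-parameter "circuit": corrective force = w0*theta + w1*omega
        force = weights[0] * theta + weights[1] * omega
        # Simplified dynamics: gravity tips the pole, the force pushes back
        omega += (9.8 * math.sin(theta) - force) * dt
        theta += omega * dt
        if abs(theta) > 0.5:              # pole has fallen
            return t
    return steps

def tune(weights, iters=300, rng=None):
    """External search loop: perturb the parameters, keep changes that help.

    Note the circuit itself never changes -- only its two numbers do."""
    rng = rng or random.Random(0)
    best = simulate(weights)
    for _ in range(iters):
        trial = [w + rng.gauss(0, 0.5) for w in weights]
        score = simulate(trial)
        if score >= best:
            weights, best = trial, score
    return weights, best

untrained = [0.0, 0.0]                    # falls within a second
tuned, score = tune(untrained)
print(simulate(untrained), score)
```

The point of the sketch is gambler's: every choice here (task, dynamics, which parameters are searchable, the search algorithm) belongs to the experimenter, not the circuit.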

nonbel
I think their idea is to use biologically inspired network architectures. Looking at Fig. 1 of the paper [1], it seems they have drawn the schematic in an overly complicated way...

For example, the FWD and REV motor neurons are totally determined by the AVB and AVA sensory neurons, so they can be left out. I would bet that if it is worked out, this reduces to some simple ANN architecture.

[1] https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsd...

gambler
>I think their idea is to use biologically inspired network architectures.

It seems that way. This approach could have interesting applications in engineering. But I wish it weren't pitched to imply that "machine learning and the activity of our brain the same on a fundamental level". (This is a direct quote from the article, except it was posed as a rhetorical question there.)

mr_toad
From the summary, they’re not altering the topology of the network at all, just the connection strength between neurons.

While this isn’t the way a natural neural network would learn (I’m not sure a nematode can learn), it’s still interesting that you can take a copy of a natural neural network and force it to learn in this way.
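The idea mr_toad describes — keeping the wiring diagram fixed and adjusting only the strengths of connections that already exist — can be shown with a connectivity mask. The mask, the tiny layer size, and the delta-rule update below are all illustrative assumptions, not the paper's setup.

```python
import random

# Hypothetical fixed wiring diagram: 1 means a synapse exists and may be
# re-weighted; 0 means no connection. The topology itself is never edited.
MASK = [[1, 0, 1],
        [0, 1, 1]]

def init_weights(mask, rng):
    """Random strengths on existing synapses, hard zeros elsewhere."""
    return [[rng.uniform(-1, 1) if m else 0.0 for m in row] for row in mask]

def forward(weights, x):
    """Linear response of each output neuron to the input vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def train_step(weights, mask, x, target, lr=0.1):
    """Nudge only the existing connections toward the target output."""
    y = forward(weights, x)
    for i, row in enumerate(weights):
        for j, m in enumerate(mask[i]):
            if m:                         # respect the fixed topology
                row[j] -= lr * (y[i] - target[i]) * x[j]
    return weights
```

After any number of training steps, the masked-out entries are still exactly zero: learning changes connection strengths, never the wiring.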

lurquer
I would expect there is an enormous set of random topologies that could be trained to balance a pole. Indeed, part of the elegant 'magic' of neural nets is that the topology is fairly irrelevant... the number of layers, the number of nodes, the manner in which they are connected... pretty much any configuration can get you into the mid-90% accuracy range on MNIST (and emulating a basic PID algorithm is simpler than MNIST). Of course, I'm referring to basic tasks; clearly topologies matter a great deal for more sophisticated things.
Dec 30, 2015 · 32 points, 2 comments · submitted by espeed
p1esk
Didn't he abandon the whole idea of playing with neuroscience, drop out again, and go on to work at Twitter?
justifier
https://youtu.be/6Px0livk6m8?t=245

i found this to be articulated perfectly..

the notion that our current abilities to interact with complex networks (roughly, Monte Carlo methods) are analogous to Copernicus's ability to work with planetary trajectories without calculus

i only have a tangential familiarity with scale-free networks, but as i understand it, scale-free is more of the same in regard to interactions with networks

but i do agree that wholly understanding network trajectories will have a significant effect on our understanding of cognition, neuroscience, ai, and some yet-undiscussed consequences

https://youtu.be/6Px0livk6m8?t=4551

von Neumann anecdote and quote..

as remembered:

    however, if the brain uses any sort of mathematics,
    the language of that mathematics must certainly be 
    different from that which we explicitly
    and consciously refer to by that name today
from the book(o):

    However, the above remarks about reliability and 
    logical and arithmetical depth prove that whatever 
    the system is, it cannot fail to differ considerably 
    from what we consciously and explicitly consider 
    as mathematics.

it took me some time and effort to find the lecturer's name, David Dalrymple, so i'll link it here(i)

(o) http://www.amazon.com/The-Computer-Brain-Silliman-Memorial/d...

(i) https://en.wikipedia.org/wiki/David_Dalrymple_(computer_scie...

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.