HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Geoffrey Hinton: "Introduction to Deep Learning & Deep Belief Nets"

Institute for Pure & Applied Mathematics (IPAM) · YouTube · 61 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Institute for Pure & Applied Mathematics (IPAM)'s video "Geoffrey Hinton: "Introduction to Deep Learning & Deep Belief Nets"".
YouTube Summary
Graduate Summer School 2012: Deep Learning, Feature Learning

"Part 1: Introduction to Deep Learning & Deep Belief Nets"
Geoffrey Hinton, University of Toronto

Institute for Pure and Applied Mathematics, UCLA
July 9, 2012

For more information: https://www.ipam.ucla.edu/programs/summer-schools/graduate-summer-school-deep-learning-feature-learning/?tab=overview
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 14, 2015 · 61 points, 6 comments · submitted by mindcrime
bradneuberg
I completed this course over the last year. It's fantastic and from one of the founders of the field.

I'd stick to the first half to get a good sense of backpropagation and working with standard neural nets. I'd hold off on the second half, which delves more into Restricted Boltzmann Machines (RBMs) and autoencoders; these aren't used as much anymore.

To catch up on what has happened since 2012, I'd learn about ReLUs rather than sigmoids as activation functions, as well as study up on convolutional neural networks (CNNs) and the recent work in sequence-to-sequence NLP translation via neural networks.
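As a quick illustration of the activation swap the comment above recommends (this sketch is mine, not from the talk): the sigmoid squashes its input into (0, 1) and its gradient vanishes for large inputs, while the ReLU passes positive values through unchanged.

```python
import numpy as np

def sigmoid(x):
    # Classic squashing activation; saturates (gradient near 0) for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: identity for positive inputs, zero otherwise,
    # so the gradient is 1 wherever the unit is active
    return np.maximum(0.0, x)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(x))  # values in (0, 1), flattening out at the extremes
print(relu(x))     # negatives clipped to 0, positives unchanged
```

The non-saturating gradient of the ReLU is a large part of why deep nets became trainable without the layer-wise pretraining discussed in the lecture.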

mindcrime
Yeah, there's some great material out there. Almost too much! It's all a bit overwhelming sometimes. And video, while great in many ways, is frustrating sometimes because you have to consume it in (more or less) real time. And at least for me, I can read a lot faster than the typical speech/listen loop, so watching videos feels too slow.

Speeding up the video helps, though. For anybody who hasn't discovered this trick yet, YouTube lets you speed up playback to 1.25x, 1.5x, or 2x the original speed. Doing this can really save time getting through material like this.

hoaphumanoid
His coursera lectures are awesome
king_magic
This looks fantastic. Exactly the kind of intro for deep learning I've been looking for.
rudyl313
Unfortunately this talk is kind of dated already. Most people don't stack RBMs or autoencoders to pretrain the weights anymore. If you use dropout with rectified linear units, you don't have to pretrain, even for large architectures.
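To make the dropout claim above concrete, here is a minimal sketch of "inverted" dropout, the common formulation (my example, not from the thread): units are zeroed at random during training and the survivors rescaled, so nothing changes at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    # Inverted dropout: randomly zero a fraction p_drop of the units and
    # rescale the rest by 1/(1 - p_drop), so the expected activation is
    # unchanged and no correction is needed at inference time.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)
```

Used between layers of ReLU units, this regularizer is what lets large architectures train from random initialization without the RBM/autoencoder pretraining stage the lecture describes.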
bradneuberg
It's not just ReLUs that have helped; it's also better random initialization before training, such as Xavier initialization (http://andyljones.tumblr.com/post/110998971763/an-explanatio...)

Batch normalization also helps with convergence. And LSTMs help when dealing with recurrent neural nets.
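For reference, a minimal sketch of the Xavier (Glorot) uniform initialization mentioned above (this code is illustrative, not from the linked post): weights are drawn uniformly within a bound chosen so the variance of activations stays roughly constant from layer to layer at the start of training.

```python
import numpy as np

rng = np.random.default_rng(42)

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier uniform initialization: the bound sqrt(6 / (fan_in + fan_out))
    # keeps forward activations and backward gradients at a similar scale
    # across layers, which avoids early saturation or explosion.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_init(256, 128)  # weight matrix for a 256 -> 128 layer
```

Together with ReLUs and batch normalization, initialization schemes like this are why stacking pretrained RBMs is no longer the standard way to get deep nets off the ground.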

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.