Hacker News Comments on
Geoffrey Hinton: "Introduction to Deep Learning & Deep Belief Nets"
Institute for Pure & Applied Mathematics (IPAM) · YouTube · 61 HN points · 0 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

⬐ bradneuberg
I completed this course over the last year. It's fantastic, and it comes from one of the founders of the field. I'd stick to the first half to get a good sense of backpropagation and working with standard neural nets. I'd hold off on the second half, which delves more into Restricted Boltzmann Machines (RBMs) and autoencoders; these aren't used as much anymore.
To augment your education with what has happened since 2012, I'd learn about ReLUs rather than sigmoids as activation functions, and study up on convolutional neural networks (CNNs) and the recent work on sequence-to-sequence NLP translation via neural networks.
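To make the ReLU-over-sigmoid advice concrete, here is a minimal numpy sketch (an editorial addition, not part of the comment): the sigmoid's gradient is at most 0.25 and shrinks toward zero as inputs saturate, while the ReLU's gradient stays at 1 for any positive input, which makes backpropagation through deep stacks much easier.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_grad(x):
        s = sigmoid(x)
        return s * (1.0 - s)           # peaks at 0.25, vanishes as |x| grows

    def relu_grad(x):
        return 1.0 if x > 0 else 0.0   # constant slope on the positive side

    for x in [0.0, 2.0, 5.0, 10.0]:
        print(f"x={x:5.1f}  sigmoid'={sigmoid_grad(x):.6f}  relu'={relu_grad(x):.0f}")

Multiplying many of those sub-0.25 sigmoid factors together through a deep network is one way to see the vanishing-gradient problem that the later comments allude to.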
⬐ mindcrime
Yeah, there's some great material out there. Almost too much! It's all a bit overwhelming sometimes. And video, while great in many ways, is frustrating sometimes because you have to consume it in (more or less) real time. At least for me, I can read a lot faster than the typical speech/listen loop, so watching videos feels too slow. Speeding up the video helps, though. For anybody who hasn't discovered this trick yet, YouTube lets you speed up the playback to 1.25x, 1.5x, or 2x the original speed. Doing this can really help you save time getting through stuff like this.
⬐ hoaphumanoid
His Coursera lectures are awesome.

⬐ king_magic
This looks fantastic. Exactly the kind of intro to deep learning I've been looking for.

⬐ rudyl313
Unfortunately, this talk is kind of dated already. Most people don't stack RBMs or autoencoders to pretrain the weights anymore. If you use dropout with rectified linear units, you don't have to pretrain, even for large architectures.

⬐ bradneuberg
It's not just ReLUs that have helped; it's also better random initialization before training, such as Xavier initialization (http://andyljones.tumblr.com/post/110998971763/an-explanatio...). Batch normalization helps with convergence as well, and LSTMs help when dealing with recurrent neural nets.
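Taken together, the last two comments describe what became the standard recipe. Here is a hypothetical PyTorch sketch (the library, layer sizes, and names are illustrative assumptions, not anything from the thread) of a network trained directly with ReLU activations, dropout, Xavier initialization, and batch normalization, with no RBM or autoencoder pre-training stage:

    import torch
    import torch.nn as nn

    def make_mlp(in_dim=784, hidden=512, out_dim=10):
        # ReLU + dropout stands in for layer-wise RBM/autoencoder pre-training
        model = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden),   # batch normalization aids convergence
            nn.ReLU(),                # rectified linear units, not sigmoids
            nn.Dropout(p=0.5),        # dropout regularizes the hidden layer
            nn.Linear(hidden, out_dim),
        )
        # Xavier (Glorot) initialization for the weight matrices
        for layer in model:
            if isinstance(layer, nn.Linear):
                nn.init.xavier_uniform_(layer.weight)
                nn.init.zeros_(layer.bias)
        return model

    model = make_mlp()
    x = torch.randn(32, 784)      # a dummy batch of 32 flattened inputs
    print(model(x).shape)         # torch.Size([32, 10])

For recurrent architectures, the analogous drop-in mentioned in the comment is an LSTM layer (e.g. torch.nn.LSTM), whose gating addresses the vanishing gradients that made plain RNNs hard to train.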