HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
TTIC Distinguished Lecture Series - Geoffrey Hinton

TTIC · YouTube · 65 HN points · 2 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TTIC's video "TTIC Distinguished Lecture Series - Geoffrey Hinton".
YouTube Summary
Title: Dark Knowledge

Abstract: A simple way to improve classification performance is to average the predictions of a large ensemble of different classifiers. This is great for winning competitions but requires too much computation at test time for practical applications such as speech recognition. In a widely ignored paper in 2006, Caruana and his collaborators showed that the knowledge in the ensemble could be transferred to a single, efficient model by training the single model to mimic the log probabilities of the ensemble average. This technique works because most of the knowledge in the learned ensemble is in the relative probabilities of extremely improbable wrong answers. For example, the ensemble may give a BMW a probability of one in a billion of being a garbage truck but this is still far greater (in the log domain) than its probability of being a carrot. This "dark knowledge", which is practically invisible in the class probabilities, defines a similarity metric over the classes that makes it much easier to learn a good classifier. I will describe a new variation of this technique called "distillation" and will show some surprising examples in which good classifiers over all of the classes can be learned from data in which some of the classes are entirely absent, provided the targets come from an ensemble that has been trained on all of the classes. I will also show how this technique can be used to improve a state-of-the-art acoustic model and will discuss its application to learning large sets of specialist models without overfitting. This is joint work with Oriol Vinyals and Jeff Dean.
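To make the distillation recipe in the abstract concrete, below is a minimal sketch of a temperature-softened distillation loss in PyTorch. The temperature T, the weighting alpha, and the toy batch are illustrative assumptions, not details from the talk; dividing the logits by T is what exposes the tiny probabilities of wrong answers (the "dark knowledge") that the student is trained to mimic.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend a soft-target loss (mimic the teacher's softened probabilities)
    with the usual hard-label cross-entropy.

    T and alpha are illustrative hyperparameters, not values from the talk.
    """
    # Softened distributions: dividing logits by T raises the relative
    # probabilities of improbable wrong classes.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)

    # KL divergence between softened teacher and student distributions;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T ** 2)

    # Standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: a batch of 8 examples over 10 classes with random logits.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)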

Bio: Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research.

Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He has received honorary doctorates from the University of Edinburgh and the University of Sussex. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998), the ITAC/NSERC award for contributions to information technology (1992), the Killam prize for Engineering (2012), and the NSERC Herzberg Gold Medal (2010), which is Canada's top award in Science and Engineering.

Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His current main interest is in unsupervised learning procedures for multi-layer neural networks with rich sensory input.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Aug 08, 2015 · 2 points, 0 comments · submitted by bra-ket
Geoffrey Hinton gave results to this effect in a talk about "Dark Knowledge" [1]. Haven't seen any of these results published, though. I think he mentions something in the talk about NIPS rejecting the paper.

[1] - https://www.youtube.com/watch?v=EK61htlw8hY

Nov 08, 2014 · 63 points, 8 comments · submitted by etiam
robert_tweed
I wish people would properly edit videos like this before uploading them. The complete inaudibility of the host up to 3:14 is rather disconcerting. If you can't fix the sound, cut it!

Here's a link to the point where Geoffrey takes over, which is audible, but you'll probably need to crank up your volume:

https://www.youtube.com/watch?v=EK61htlw8hY&t=3m14s

Fortunately there doesn't seem to be any problem with background noise, but I can still barely hear it on my MBP with the volume at max. Headphones will most likely help.

ynniv
Audio Hijack Pro is very helpful in these cases. Once you've hijacked the browser audio, you can add effects to normalize levels (AudioUnit Effect/Apple/AUMultibandCompressor), remove buzzing (AUFilter), or fix poor balance (4FX Effect/Channel Tweaker or Monomizer).
mitchty
Can you do that live and pass the audio through?
ynniv
Sorry for the late reply: yes, it works in real time. You may need to restart the target process, but it will offer to do this automatically.
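For anyone not on a Mac, a rough offline alternative to the Audio Hijack approach above is to normalize the downloaded audio before listening. The sketch below uses the pydub library (which shells out to ffmpeg for decoding); the file names and compressor settings are hypothetical, not taken from the thread.

from pydub import AudioSegment
from pydub.effects import compress_dynamic_range, normalize

# Load the downloaded talk; pydub needs ffmpeg installed to decode mp4.
audio = AudioSegment.from_file("dark_knowledge_talk.mp4", format="mp4")

# Compress the dynamic range so quiet speech comes up while peaks stay put,
# then normalize so the peaks sit just below full scale.
compressed = compress_dynamic_range(audio, threshold=-25.0, ratio=4.0)
louder = normalize(compressed)

louder.export("dark_knowledge_talk_louder.mp3", format="mp3")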
JabavuAdams
Fantastic insights, as usual. Definitely worth putting up with the annoying audio.
dhammack
FYI - Hinton is doing an AMA on r/machinelearning on Monday!
gone35
Here are the slides he's using (I think):

http://www.ttic.edu/dl/dark14.pdf

And here is Caruana's 2006 "Model Compression" paper he mentions:

http://dl.acm.org/citation.cfm?id=1150464

(Note the actual citation is Buciluǎ, Caruana, and Niculescu-Mizil 2006.)

mturmon
Those are indeed the slides from the talk (or very close). Hinton gave this talk at Caltech last month. Very cool.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.