HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Deep Learning State of the Art (2019) - MIT

Lex Fridman · YouTube · 56 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Lex Fridman's video "Deep Learning State of the Art (2019) - MIT".
YouTube Summary
New lecture on recent developments in deep learning that are defining the state of the art in our field (algorithms, applications, and tools). This is not a complete list, but hopefully includes a good sampling of new exciting ideas. For more lecture videos visit our website or follow code tutorials on our GitHub repo.

INFO:
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Slides: http://bit.ly/2HiZyvP
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 - Introduction
2:00 - BERT and Natural Language Processing
14:00 - Tesla Autopilot Hardware v2+: Neural Networks at Scale
16:25 - AdaNet: AutoML with Ensembles
18:32 - AutoAugment: Deep RL Data Augmentation
22:53 - Training Deep Networks with Synthetic Data
24:37 - Segmentation Annotation with Polygon-RNN++
26:39 - DAWNBench: Training Fast and Cheap
29:06 - BigGAN: State of the Art in Image Synthesis
30:14 - Video-to-Video Synthesis
32:12 - Semantic Segmentation
36:03 - AlphaZero & OpenAI Five
43:34 - Deep Learning Frameworks
44:40 - 2019 and beyond

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jan 18, 2019 · 50 points, 5 comments · submitted by ArtWomb
tomahunt
The main sections of the talk are:

- BERT and Natural Language Processing
- Tesla Autopilot Hardware v2+: NN at Scale
- AdaNet: AutoML with Ensembles
- AutoAugment
- Training Deep Networks with Synthetic Data
- Segmentation Annotation with Polygon-RNN++
- DAWNBench: Training Fast and Cheap
- BigGAN: State of the Art in Image Synthesis
- Video-to-Video Synthesis
- Semantic Segmentation
- AlphaZero and OpenAI Five
- Deep Learning Frameworks

sounds
Was good to hear Lex Fridman's take on where we're at.

Honest question: how have people's experiences with OpenAI Five been so far? I haven't had the time to check it out in detail, so I'm paying close attention to what others are saying.

visarga
TL;DW: BERT and BigGAN
cs702
Nice work. I can think of only two things that are missing:

* Normalizing flows - e.g., https://arxiv.org/abs/1605.08803, https://arxiv.org/abs/1807.03039, among many others

* ODEnets and continuous normalizing flows - https://arxiv.org/abs/1806.07366
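The core trick behind the normalizing-flow papers linked above is an invertible transform whose Jacobian log-determinant is cheap to compute. A minimal sketch of a RealNVP-style affine coupling layer, with toy fixed linear maps standing in for the learned networks (all names and parameters here are illustrative, not from any library):

```python
import numpy as np

def coupling_forward(x, shift, log_scale):
    """Affine coupling: transform the second half of x conditioned on the first."""
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(log_scale(x1)) + shift(x1)
    log_det = np.sum(log_scale(x1))  # log |det Jacobian| is just a sum, by construction
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, shift, log_scale):
    """Exact inverse: recover x from y using the same conditioning functions."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift(y1)) * np.exp(-log_scale(y1))
    return np.concatenate([y1, x2])

# Toy "networks": fixed linear maps standing in for learned MLPs.
shift = lambda h: 0.5 * h
log_scale = lambda h: 0.1 * h

x = np.array([1.0, -2.0, 0.5, 3.0])
y, log_det = coupling_forward(x, shift, log_scale)
x_rec = coupling_inverse(y, shift, log_scale)  # recovers x exactly
```

Stacking such layers (permuting which half is conditioned on at each step) gives an expressive density model whose exact likelihood is tractable via the change-of-variables formula.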

grej
Yeah, strange that ODEnets were left off; I’m glad you mentioned them. They have the potential to be a transformative approach to more efficient training and much better performance on time-series problems.
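For context, the ODE-net idea (arXiv:1806.07366) replaces a stack of discrete residual blocks with a continuous dynamics function dh/dt = f(h, t) integrated by an ODE solver. A minimal sketch with a toy dynamics function and a fixed-step Euler integrator standing in for a learned network and an adaptive solver (all names here are illustrative):

```python
import numpy as np

def f(h, t):
    """Toy dynamics function standing in for a learned network."""
    return np.tanh(h) * (1.0 - t)

def odenet_forward(h0, t0=0.0, t1=1.0, steps=100):
    """Euler integration of dh/dt = f(h, t) from t0 to t1."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt)
    return h

h1 = odenet_forward(np.array([0.5, -1.0]))
```

The paper's contribution is training such models by backpropagating through the solver with the adjoint method, so memory cost is constant in "depth"; in practice one would use an adaptive solver rather than fixed-step Euler.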
Jan 17, 2019 · 6 points, 0 comments · submitted by AlanTuring
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.