HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
GPU Technology Conference 2015 day 3: What's Next in Deep Learning

Tech Events · YouTube · 76 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Tech Events's video "GPU Technology Conference 2015 day 3: What's Next in Deep Learning".
YouTube Summary
GPU Technology Conference 2015 day 1: https://youtu.be/bMfHNyR0KJc
GPU Technology Conference 2015 day 2: https://youtu.be/ENZoY4mLgDE
GPU Technology Conference 2015 day 3: https://youtu.be/qP9TOX8T-kI

GPU Technology Conference (GTC) is the largest and most important event of the year for GPU developers and the entire ecosystem.

Large-Scale Deep Learning, featuring Andrew Ng, Chief Scientist, Baidu.


Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 12, 2015 · 76 points, 11 comments · submitted by sherjilozair
richardkeller
Seeing this type of research really inspires me to get involved in deep learning, at least to the point where I can accomplish some sort of basic task with it. As somebody with an undergraduate computer science degree but only a very basic theoretical understanding of AI, what resources are available to help me understand deep learning and build something useful?
yodsanklai
Andrew Ng's course on Coursera?
davidbarker
I signed up for this last night. Really looking forward to it.

https://www.coursera.org/learn/machine-learning

unsignedint
His machine learning course is amazing! I really enjoy the lectures and feel like I'm learning a lot. I already understand some statistical concepts far better than I ever have in the past (and I'm only three weeks into the course!).
IshKebab
That was really interesting, especially their audio->letters approach to speech recognition.

I also agree with him that phonemes aren't really a real thing - in the same way that species aren't. It definitely makes more sense to have the neural network learn its own representation of sounds, rather than prescribe a representation made up by linguists.

I mean, the phoneme model is obviously pretty close to reality - Google use it - but neural networks can clearly learn a closer representation.

Anyway, really impressive results!
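
For readers wondering what the audio->letters approach looks like in code, here is a minimal, hypothetical sketch (PyTorch assumed; all layer sizes and names are invented for illustration, and real systems such as Baidu's Deep Speech are far larger): a recurrent network emits per-frame letter scores and is trained with CTC loss, so it learns its own representation of sounds with no phoneme inventory in sight.

# A toy "audio -> letters" model (hypothetical sizes; PyTorch assumed)
import torch
import torch.nn as nn

N_MELS = 80    # spectrogram features per audio frame (illustrative)
N_CHARS = 28   # 26 letters + space, with index 0 reserved for the CTC blank

class LetterRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Bidirectional GRU over audio frames; no phoneme layer anywhere.
        self.rnn = nn.GRU(N_MELS, 256, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 256, N_CHARS)

    def forward(self, frames):               # frames: (batch, time, N_MELS)
        hidden, _ = self.rnn(frames)
        return self.out(hidden)              # per-frame letter scores

model = LetterRecognizer()
ctc = nn.CTCLoss(blank=0)

frames = torch.randn(2, 100, N_MELS)                       # 2 fake utterances
log_probs = model(frames).log_softmax(-1).transpose(0, 1)  # CTC wants (time, batch, chars)
targets = torch.randint(1, N_CHARS, (2, 20))               # fake letter transcripts
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),        # frames per utterance
           torch.full((2,), 20, dtype=torch.long))         # letters per transcript
loss.backward()   # gradients flow; the network finds its own sound units

The CTC loss is what lets the network discover its own frame-to-letter alignment, which is exactly the "learn its own representation of sounds" point in the comment above.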

nshm
It is not necessary to use phonemes in the model; however, dropping them greatly restricts the model's ability to learn, in particular to handle the many special cases like rare foreign words. You need much more data to train the model, and you basically lose the ability to quickly teach the system a new word by specifying its pronunciation. It might sound great for advertising computing-power factories the way Baidu does, but speech recognition experts are not that enthusiastic about this approach.

It is pretty disappointing that modern machine learning "experts" advertise their improved results with questionable comparisons and do not care about machine learning theory; in particular, they never consider a model's ability to generalize, its robustness to noise, and related concerns that ought to be primary subjects of research.
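
To make the lexicon point above concrete, here is a toy Python sketch (the phoneme symbols and dictionary format are invented for illustration, not any particular toolkit's): a phoneme-based recognizer maps words to phoneme sequences through a pronunciation lexicon, so teaching it a new word is a single lexicon edit rather than a retraining run.

# Hypothetical pronunciation lexicon: words -> phoneme sequences.
lexicon = {
    "hello": ["HH", "AH", "L", "OW"],
    "data":  ["D", "EY", "T", "AH"],
}

# Teaching the system a rare foreign word is one lexicon entry; the
# acoustic model, which only ever sees phonemes, needs no retraining.
lexicon["baidu"] = ["B", "AY", "D", "UW"]

# An end-to-end letters model has no such hook: it would need new
# audio examples of the word and another round of training.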

sherjilozair
Well, what I really wanted to post was his quote on "Fear of AI", not really the whole video, which is https://youtu.be/qP9TOX8T-kI?t=3836
datashovel
He may not have meant to imply this, but I think he's wrong on at least one of the points.

He says there's no way to work productively on the problem, and perhaps from one perspective he's right: there is no imminent threat of overpopulation on Mars, so probably no one is going to pay him a salary to work on it. But in general I think preventative maintenance is more important than dealing with fires after they've actually started.

If we know "overpopulation" is a problem in modern civilization, why not work on defining measures to prevent it in the future instead of letting history repeat itself?

And as far as an AI apocalypse is concerned, I'm not worried from one perspective but somewhat worried from another. I don't think we need to worry that AI will become sentient in the near future. I do think we need to be concerned about humans putting too much trust in AI to solve and automate critical problems for us. As an example, I wouldn't recommend putting the decision to launch a nuclear weapon in the hands of an AI system. The AI wouldn't be evil in its intentions, but if it launched inadvertently, the inadequacy of its understanding of when a launch would be appropriate would certainly cause a lot of problems for the world to deal with in the aftermath (if anyone were to survive).

amsilprotag
His position seems to be similar to MIT engineer David Mindell's, who recently wrote Our Robots, Ourselves[1]. It seems to me that the more hours one spends solving real problems with AI, the less likely they are to believe an apocalypse is imminent. Conversely, the more dependent a prognosticator is on book or film sales, the more likely they are to predict or depict a near-term AI apocalypse.

[1] Google Talk: https://www.youtube.com/watch?v=4nDdqGUMdAY

samscully
I think the concern is less that an AI apocalypse is imminent and more that if it does happen, we won't see it coming and it will happen so quickly we won't be able to do anything to stop it. So it's important to try to avoid it ahead of time.
IshKebab
Probably because they are the only ones that see the hundreds of hours of work that go into making task-specific machine learning tools.

We are still very far from a system that can really learn in a general way, but I guess it might not look that way to people who only see the results.

While we're at it maybe we should address the possibility of overpopulation on Mars?

Andrew Ng thinks people are wasting their time with evil AI:

https://youtu.be/qP9TOX8T-kI?t=1h2m45s

patrickaljord
I think they're fully aware of this. It's just that Google (and to a lesser extent Facebook) is so ridiculously ahead of everyone else when it comes to AI that all the competition can do in the meantime is brand AI as a dangerous evil of the near future (like Musk does) or as bad for privacy (Apple). No doubt that once they catch up with Google and Facebook, these dangers of AI will be conveniently forgotten.
DanBC
> While we're at it maybe we should address the possibility of overpopulation on Mars?

We've already fucked this planet so I sincerely hope a few people are thinking of ways to avoid fucking another one.

function_seven
I can't imagine we could make Mars more inhospitable than it already is. And whatever technologies we'd have to develop to live on that planet would forever be in our toolkit to reverse the damage we've done here and prevent future damage there.
astrofinch
Musk calls that a "radically inaccurate analogy": http://lukemuehlhauser.com/musk-and-gates-on-superintelligen...

AI luminary Stuart Russell also takes on this analogy in this presentation: https://www.cs.berkeley.edu/~russell/talks/russell-ijcai15-f...

>OK, let’s continue [the overpopulation on Mars] analogy:

>Major governments and corporations are spending billions of dollars to move all of humanity to Mars [analogous to the billions that are being spent on AI]

>They haven’t thought about what we will eat and breathe when the plan succeeds

>If only we had worried about global warming in the late 19th C.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.