Hacker News Comments on
GPU Technology Conference 2015 day 3: What's Next in Deep Learning
Tech Events · YouTube · 76 HN points · 3 HN comments
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this video.

⬐ richardkeller
Seeing this type of research really inspires me to get involved in deep learning, at least to the point where I can accomplish some sort of basic task with it. As somebody with an undergraduate computer science degree, but only a very basic theoretical understanding of AI, what resources are available for me to understand deep learning and build something useful?

⬐ yodsanklai
⬐ IshKebab
Andrew Ng's course on Coursera?

⬐ davidbarker
I signed up for this last night. Really looking forward to it.

⬐ unsignedint
His machine learning course is amazing! I really enjoy the lectures and I feel like I'm learning a lot. So far I understand some statistical concepts far better than I ever have in the past (and I'm only three weeks into the course!)

That was really interesting, especially their audio->letters approach to speech recognition.

I also agree with him that phonemes aren't really a real thing - in the same way that species aren't. It definitely makes more sense to have the neural network learn its own representation of sounds, rather than prescribe a representation made up by linguists.
I mean, the phoneme model is obviously pretty close to reality - Google use it - but neural networks can clearly learn a closer representation.
Anyway, really impressive results!
⬐ nshm
It is not necessary to use phonemes in the model; however, dropping them greatly restricts the model's ability to learn, in particular to learn many special cases like rare foreign words. You need much more data to train the model, and you basically lose the ability to quickly teach the system a new word by specifying its pronunciation. It might sound great for advertising computing-power factories like Baidu is doing, but speech recognition experts are not that enthusiastic about this approach.

It is pretty disappointing that modern machine learning "experts" advertise their improved results with questionable comparisons and do not care about machine learning theory; in particular, they never consider a model's ability to generalize, its robustness to noise, and related properties, which should be primary subjects for research.
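[Editor's note] The "audio->letters" systems the commenters are debating (Baidu's Deep Speech line of work) are typically trained with Connectionist Temporal Classification (CTC), which sums the probability of every frame-by-frame alignment that collapses (merge repeats, drop blanks) to the target letter sequence. A minimal pure-Python sketch of the CTC forward recursion, assuming a toy alphabet and a blank symbol of my own choosing (not code from the talk):

```python
BLANK = "_"  # assumed blank symbol for this sketch

def ctc_prob(frames, label):
    """Probability that per-frame symbol distributions emit `label`
    under CTC's collapse rule (merge repeated symbols, then drop blanks).

    frames: list of dicts mapping symbol -> probability, one per time step.
    label:  target string, e.g. "cat".
    """
    # Extended label: a blank interleaved before, between, and after letters.
    ext = [BLANK]
    for ch in label:
        ext += [ch, BLANK]
    S, T = len(ext), len(frames)

    # alpha[t][s]: total probability of all length-(t+1) frame sequences
    # that collapse to a prefix ending at extended position s.
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = frames[0].get(BLANK, 0.0)
    if S > 1:
        alpha[0][1] = frames[0].get(ext[1], 0.0)

    for t in range(1, T):
        for s in range(S):
            total = alpha[t - 1][s]          # stay on the same symbol
            if s > 0:
                total += alpha[t - 1][s - 1]  # advance by one position
            # Skipping a blank is allowed only between distinct letters.
            if s > 1 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                total += alpha[t - 1][s - 2]
            alpha[t][s] = total * frames[t].get(ext[s], 0.0)

    # Valid alignments end on the last letter or the trailing blank.
    return alpha[T - 1][S - 1] + (alpha[T - 1][S - 2] if S > 1 else 0.0)
```

For two frames with P(a)=0.6, P(blank)=0.4 then P(a)=0.5, P(blank)=0.5, the alignments (a,a), (a,_), and (_,a) all collapse to "a", so `ctc_prob(frames, "a")` is 0.6·0.5 + 0.6·0.5 + 0.4·0.5 = 0.8; "aa" needs a separating blank and is impossible in two frames. nshm's point above is that this alignment-free objective has no slot where a linguist (or a user) can inject a pronunciation for a new word.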
⬐ sherjilozair
Well, what I really wanted to post was his quote on "Fear of AI", not really the whole video, which is https://youtu.be/qP9TOX8T-kI?t=3836

⬐ datashovel
He may not have meant to imply this, but I think he's wrong on at least one of the points. He says there's no way to work productively on the problem. And perhaps from one perspective he's right: there is no imminent threat of overpopulation on Mars, so probably no one is going to pay him a salary to work on it. But in general I think preventative maintenance is more important than dealing with fires once they actually happen.
If we know "overpopulation" is a problem in modern civilization, why not work on defining measures to prevent it in the future instead of letting history repeat itself?
And as far as an AI apocalypse is concerned, I'm not concerned from one perspective but somewhat concerned from another. I don't think we need to worry that AI will become sentient in the near future. I do think we need to be concerned about humans putting too much trust in AI to solve or automate critical problems for us. As a perfect example, I wouldn't recommend putting the decision to launch a nuclear weapon in the hands of some AI system. Though the AI wouldn't be evil in intention if it launched nuclear bombs inadvertently, its inadequate understanding of when a launch would be appropriate would certainly cause a lot of problems the world would need to deal with in the aftermath (if anyone were to survive).
⬐ amsilprotag
His position seems to be similar to that of MIT engineer David Mindell, who recently wrote Our Robots, Ourselves[1]. It seems to me that the more hours one spends solving real problems with AI, the less likely they are to believe an apocalypse is imminent. Conversely, the more dependent a prognosticator is on book or film sales, the more likely they are to predict or depict a near-term AI apocalypse.

[1] Google Talk: https://www.youtube.com/watch?v=4nDdqGUMdAY
⬐ samscully
I think the concern is less that an AI apocalypse is imminent, and more that if it does happen, we won't see it coming, and it will happen so quickly we won't be able to do anything to stop it. So it's important to try to avoid it ahead of time.

⬐ IshKebab
Probably because they are the only ones who see the hundreds of hours of work that go into making task-specific machine learning tools. We are still very far from a system that can really learn in a general way, but I guess it might not look like that to people who only see the result.
Andrew Ng thinks people are wasting their time with evil AI: "While we're at it maybe we should address the possibility of overpopulation on Mars?"
⬐ patrickaljord
I think they're fully aware of this. It's just that Google (and, to a lesser extent, Facebook) is so ridiculously ahead of everyone else when it comes to AI that all the competition can do in the meantime is brand AI as a dangerous evil in the near future (like Musk does) or as bad for privacy (Apple). No doubt that when they catch up with Google and Facebook, these dangers of AI will be conveniently forgotten.

⬐ DanBC
> While we're at it maybe we should address the possibility of overpopulation on Mars?

We've already fucked this planet, so I sincerely hope a few people are thinking of ways to avoid fucking another one.
⬐ function_seven
I can't imagine we could make Mars more inhospitable than it already is. And whatever technologies we'd have to develop to live on that planet would forever be in our toolkit to reverse the damage we've done here, and prevent future damage there.

⬐ astrofinch
Musk calls that a "radically inaccurate analogy": http://lukemuehlhauser.com/musk-and-gates-on-superintelligen...

AI luminary Stuart Russell also takes on this analogy in this presentation: https://www.cs.berkeley.edu/~russell/talks/russell-ijcai15-f...
>OK, let’s continue [the overpopulation on Mars] analogy:
>Major governments and corporations are spending billions of dollars to move all of humanity to Mars [analogous to the billions that are being spent on AI]
>They haven’t thought about what we will eat and breathe when the plan succeeds
>If only we had worried about global warming in the late 19th C.
non-flash version of the video