HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
AI, Deep Learning, and Machine Learning: A Primer

a16z.com · 288 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention a16z.com's video "AI, Deep Learning, and Machine Learning: A Primer".
Watch on a16z.com [↗]
a16z.com Summary
watch time: 45 minutes
“One person, in a literal garage, building a self-driving car.” That happened in 2015. Now to put that fact in context, compare this to 2004, when DARPA sponsored the very first driverless car Grand Challenge. Of …
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jun 11, 2016 · 288 points, 77 comments · submitted by jonbaer
cs702
This is an opinionated video that tries to rewrite history. For example, according to it, the "big breakthrough" with deep learning occurred in 2012 when Andrew Ng et al got an autoencoder to learn to categorize objects in unlabeled images. WHAT? Many other researchers were doing similar work years earlier. According to whom was this the "big breakthrough?"

The video at least mentions Yann LeCun's early work with convnets, but there's no mention of Hinton et al's work with RBMs and DBNs in 2006, or Bengio et al's work with autoencoders in 2006/2007, or Schmidhuber et al's invention of LSTM cells in 1997... I could keep going. The list of people whose work is insulted by omission is HUGE.

I stopped watching the video at that point.

withfries2
cs702, this deck started as a short talk so I didn't have time to acknowledge all the great work leading up to the Google YouTube experiment. Your list is good, and I'd add Rosenblatt for the first perceptron, Rumelhart & McClelland for applying techniques to perception, Werbos for backpropagation, Fukushima for convolutional networks, and so many more.

I found these helpful while researching the history: http://www.andreykurenkov.com/writing/a-brief-history-of-neu... http://www.scholarpedia.org/article/Deep_Learning

What else have you found particularly useful?

cs702
withfries2: Bengio's 2009 survey and Schmidhuber's "conspiracy" blog post contain useful, accurate historical background from the perspective of two leading researchers, with lots of links to additional sources:

http://www.iro.umontreal.ca/~bengioy/papers/ftml.pdf

http://people.idsia.ch/~juergen/deep-learning-conspiracy.htm...

Were I in your shoes, I would NOT have highlighted the Google YouTube experiment as "the" big breakthrough. It was just an interesting worthwhile experiment by one of many groups of talented AI researchers who have made slow progress over decades of hard work. Why single it out?

--

PS. The YouTube experiment did not produce new theory, and from a practical standpoint, it would be unfair to say that it reignited interest in deep learning. Consider that the paper currently has only ~800 citations, according to Google Scholar.[1] For comparison, Krizhevsky et al's paper describing the deep net that won Imagenet (trained on one computer with one GPU) has over 5000 citations.[2] And neither of these experiments deserves to be called "the" big breakthrough.

[1] https://scholar.google.com/citations?view_op=view_citation&h...

[2] https://scholar.google.com/citations?view_op=view_citation&h...

withfries2
Mostly, I wanted to highlight the importance of scale (data + compute) for the accuracy of deep networks.
cs702
I understand, but singling out that one experiment and its lead researcher as "the" big breakthrough was -- and is -- insulting to the hard work of a long list of others who go unmentioned. The worst part about this is that I can imagine, say, journalists with deadlines relying on your video as an authoritative source of historical information.
smhx
The video is almost cringe-worthy in its factual inaccuracy around 2012.

The reigniting of deep learning around 2012 was because of Krizhevsky, Sutskever & Hinton winning the Imagenet challenge (1000 object classes)

Contrary to how much Google tried to sell Andrew Ng's "breakthrough" 2012 experiment with tons of PR, the paper is very weak, and can't be reproduced unless you do a healthy amount of hand-waving. For example, to get an unsupervised cat, you have to initialize your image close to a cat and do gradient descent w.r.t. the input. Or else, you don't get a cat... It is not even considered a good paper, forget being a breakthrough. Also, those 16,000 CPU cores etc. can be reproduced with a few 2012-class GPUs in a much smaller time span than their training time.
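
For illustration, a rough sketch (in PyTorch) of the optimization trick described above: start from an input image and do gradient ascent on the pixels to maximize one neuron's activation. The `model`, starting `image`, and target `unit` here are placeholders, not the paper's actual setup.

    import torch

    def maximize_activation(model, image, unit, steps=200, lr=0.1):
        # Optimize the input pixels, not the network weights.
        x = image.detach().clone().requires_grad_(True)
        opt = torch.optim.SGD([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            activation = model(x)[0, unit]  # response of the target neuron
            (-activation).backward()        # ascend by minimizing the negative
            opt.step()
        return x.detach()

    # smhx's point: if `image` already starts near a cat, the optimized input
    # tends to stay cat-like; from a random start it generally does not.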

The next slide after the 2012 breakthrough, the one that shows the JavaScript neural network -- contrary to what it looks like -- is not TensorFlow either. It has cleverly and conveniently been given TensorFlow branding, so most people confuse the two, but it's a separate JavaScript library akin to convnet.js.

Since your page gets a ton of hits, it's at least worth it to publish a comment about these GLARING inaccuracies.

p1esk
Your criticism is unreasonable. The "cat face" 2012 paper is an excellent paper, and is a breakthrough.

1. They demonstrated a way to detect high-level features with unsupervised learning, for the first time. That was the main stated goal of the paper, and they achieved it magnificently.

2. They devised a new type of autoencoder, which achieved significantly higher accuracy than other methods.

3. They improved the state of the art for 22k-class ImageNet classification by 70% (compared to the 15% improvement for 1k-class ImageNet in Krizhevsky's paper).

4. They managed to scale their model 100x compared to the largest model of the time - not a trivial task.

You say "it can't be reproduced" and then "can be reproduced", in the same paragraph! :-)

Regarding initializing an input image close to a cat to "get a cat", I think you missed the point of that step - it was just an additional way to verify that the neuron is really detecting a cat. That step was completely optional. The main way to verify their achievement was the histogram showing how the neuron reacts to images with cats in them, and how it reacts to all other images. That histogram is the heart of the paper, not the artificially constructed image of a cat face.

smhx
The only objective response I'll give to this comment is the number of citations (as pointed out by another user) https://news.ycombinator.com/item?id=11885161 .

It's not perfect, but I can't give a reasonable answer to the other extreme of an opinion. FWIW, as a researcher I spent quite some time on this paper, but that subjective point doesn't mean anything to you.

p1esk
I've read hundreds of ML papers, and this one is better than 90% of them, maybe even 99%. It's one of the best papers published in 2012. The authors are some of the best minds in the field. Statements like "It is not even considered a good paper" or "very weak" need some explanation, to say the least.

The only negative thing I can say about that paper is they have not open-sourced their code.

raverbashing
Yeah. Between LeCun and the Google experiment a lot of things happened.

Not to mention the use of ReLUs and the understanding of the vanishing gradient issue.

pinouchon
Omitting the other members of the LBH conspiracy and Schmidhuber is unfortunate, but I agree with the idea that the number one reason deep learning is working now is scale. Hinton also says it himself, for example (at 5m45s in the video below): "What was wrong in the '80s was that we didn't have enough data and we didn't have enough compute power. [...] Those were the main things that were wrong".

He gives a "brief history of backpropagation" here: https://youtu.be/l2dVjADTEDU?t=4m35s

lars
I agree that scale is an important factor in deep learning's success, but that Google experiment ended up being a good example of how not to do it. They used 16,000 CPU cores to get that cat detector. A short while later, a group at Baidu was able to replicate the same network with only 3 computers with 4 GPUs each. (The latter group was also led by Andrew Ng.)
espadrine
Incidentally, seeing the speaker set up an overkill neural network for a trivial classification problem seemed off to me. Unsurprisingly, at least 75% of the neurons were unused.

Throwing a phenomenal number of neurons at a problem is not a goal; using a minimal number to solve it within a given time budget is.

The statement at the end of the video, “all the serious applications from here on out need to have deep learning and AI inside”, seems awfully misguided. Even DeepMind doesn't use deep learning for everything.

KasianFranks
AI is very fragmented. Biomimicry has always been the way forward in every industry, and Steven Pinker has made good headway from my vantage point.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

Saira Mian, Michael I. Jordan (Andrew Ng was a pupil of his), and David Blei were not mentioned in this video, so they are off the mark a bit. Vector space is the place.

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

AI has become the most competitive academic and industry sector I've seen. Firms like Andreessen are trying to understand the impact during this AI summer and they should be applauded for this.

One of the keys to AI is found here: https://www.google.com/webhp?sourceid=chrome-instant&ion=1&e...

Deep learning has very little to do with how the brain and mind work together. The ensemble (combinatorial) techniques highlighted in the video are a big part of the solution.

gavanwoolery
More direct link to Split-brain as it applies to computing: http://en.wikipedia.org/wiki/Split-brain_(computing)

Also, thanks for that term! Was not aware of it, but very useful to describe what I think is one of the big problems.

KasianFranks
Yes, split-brain approaches in general computing are very interesting and, I think, overlap with some approaches in AI-based computation combined with neuroscientific efforts.
_yosefk
"Biomimicry has always been the way forward in every industry"

Care to elaborate? Certainly classical computers don't look a whole lot like the information processing of any living being (I guess one could argue that they mimic what humans do with pencil and paper, but it seems a bit of a stretch). To me, other things like, say, human-made engines also don't look that similar to any life form, but then I think I know very little outside computing relative to, say, the average person here, and hence I'm genuinely curious about your remark and "wouldn't be surprised to hear something surprising."

KasianFranks
Non-linear DNA computing. In addition, getting systems to compute based on words - see: https://www.kaggle.com/c/word2vec-nlp-tutorial/forums/t/1234...

e.g. Austria - Capital = Vienna OR 12 x 12 = 144
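
For illustration, a hedged sketch of that kind of word-vector arithmetic using gensim's word2vec interface; the pretrained vector file is a placeholder assumption, not something from the thread.

    from gensim.models import KeyedVectors

    # Load pretrained word vectors (the file name is an assumption for illustration).
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    # Capital-city analogy: vector("Paris") - vector("France") + vector("Austria")
    # should land near vector("Vienna").
    print(vectors.most_similar(positive=["Paris", "Austria"],
                               negative=["France"], topn=1))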

We continue to mimic nature in our scientific endeavors and the brain, as the best pattern matcher we know of, is no exception.

gavanwoolery
Exactly. When anybody dares to tell me this is an unsolvable problem, I tell them that nature has already solved it. :) We just need to figure out what nature is doing and how to reproduce that with digital, analog, or biomechanical machinery (which is of course, no small task).
withfries2
Thanks for the links. I'm a huge fan of Steven Pinker.

Most deep learning people I talk to acknowledge the algorithms & data structures are only lightly inspired by brain anatomy. Michael Jordan makes this point well in this IEEE article: http://spectrum.ieee.org/robotics/artificial-intelligence/ma...

Donna Dubinsky and Jeff Hawkins and the team at Numenta are doing the most explicit biomimicry work I'm aware of. What else is happening that you know of?

KasianFranks
There's a ton. Jeff and his team are on a very particular course and have been since 2004. There are some other things happening in the space, and they have to do with collecting data and sensory inputs similar to a 5-year-old child's. Send me an email at [email protected] to continue, as I don't check back on these comments too often.

Steven Pinker gave a somewhat unbiased analysis of CTM (Computational Theory of the Mind) a while ago. Very relevant.

gavanwoolery
Interesting to see the number of "winters" AI has gone through (analogous, to a lesser extent, to VR).

I see increasing compute power, an increased learning set (the internet, etc), and increasingly refined algorithms all pouring into making the stuff we had decades ago more accurate and faster. But we still have nothing at all like human intelligence. We can solve little sub-problems pretty well though.

I theorize that we are solving problems slightly the wrong way. For example, we often focus on totally abstract input like a set of pixels, but in reality our brains have a more gestalt / semantic approach that handles higher-level concepts rather than series of very small inputs (although we do preprocess those inputs, i.e. rays of light, to produce higher level concepts). In other words, we try to map input to output at too granular of a level.

I wonder though if there will be a radical rethinking of AI algorithms at some point? I tend to always be of the view that "X is a solved problem / no room for improvement in X" is BS, no matter how many people have refined a field over any period of time. That might be "naive" with regards to AI, but history has often shown that impossible is not a fact, just a challenge. :)

RockyMcNuts
1) sounds like exactly what deep learning is...map more complex abstractions in each succeeding layer

2) are computers that can understand speech, recognize faces, drive cars, beat humans at Jeopardy really 'nothing at all like human intelligence?'

throwawaysocks
> 1) sounds like exactly what deep learning is...map more complex abstractions in each succeeding layer

Only at such a high level of abstraction as to be meaningless.

> 2) are computers that can understand speech, recognize faces, drive cars, beat humans at Jeopardy really 'nothing at all like human intelligence?'

They are not. Hundreds of man years worth of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.

eva1984
It beats humans on accuracy. Which makes it more practical.

A computer doesn't need to be strong AI to replace a human.

bobdole1234
We spend a lifetime building the skills that we use in our day to day lives.

And most of them don't transfer.

armitron
This is a very superficial point-of-view.

What matters here is the concept itself (deep learning as a generic technique) but also scalability. Not the specifics that we have today, but the specifics that we will have 20 years from now.

The concept is proven, all that matters now is time...

daveguy
> The concept is proven, all that matters now is time...

This is a very naive point of view. You could deep-learn with a billion times more processing power and a billion times more data for 20 years and it would not produce a general artificial intelligence. Deep learning is a set of neural network tweaks that is insufficient to produce AGI. Within 20 years we may have enough additional tweaks to make an AGI, but I doubt that algorithm will look anything like the deep learning we have today.

throwawaysocks
This is basically exactly what I was trying to say with my original comment; thanks for stating it in a clearer way.
jimfleming
> Only at such a high level of abstraction as to be meaningless.

I'm not sure what this means or how the abstractions are meaningless? From Gabor filters to concepts like "dog", the abstractions are quite meaningful (in that they function well), even if not to us.

> They are not. Hundreds of man years worth of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.

This isn't strictly true if we look at the ability to generalize as a sliding scale. The level of generalization has actually increased significantly from expert systems to machine learning to deep learning. We have not reached human levels of generalization but we are approaching.

Consider that DL can identify objects, people, and animals in unique photos never seen before, and that more generally the success of modern machine learning is its ability to generalize from training to test time rather than hand-engineering for each new case. Newer work is even able to learn from just a few examples[0] and then generalize beyond that. Or the Atari work from DeepMind that can generalize to dozens/hundreds of games. None of those networks are created specifically for Breakout or Pong.

It's also not entirely fair to count the hundreds of man-years of engineering against these systems, considering most of them are trained from scratch (random initialization). Humans, however, benefit from the preceding evolution, which has a time scale that far exceeds any human engineering effort. :)

[0] https://arxiv.org/abs/1605.06065

gavanwoolery
1) I am aware of deep learning, and the improvements made circa 2012 or so, but it is still ultimately missing advanced correlations between differing training sets, a strong memory, and a meaningful distinction between high-level abstractions and low-level inputs (although all these things are being addressed in one way or another with incremental improvements). It also lacks a way to effectively share learned data among separate entities or reteach it.

2) These things are all very human-like, but they are still sub-problems IMHO :)

Joof
This reminded me that there are other NN variants that might help provide some clues in that direction.

Hopfield nets for example provide associative memory.

It may not all be groundbreakingly efficient, but very worthwhile.

gavanwoolery
Yep - also we have a large tendency to use feedforward NNs right now, but I have a sneaking suspicion that the future lies in something closer to recurrent NNs. Or probably something more complex, like automata-ish properties (IIRC there is also some NN that uses Turing-like devices).
eli_gottlieb
Nah. If we want properly humanlike AI software, we're going to have to find a way to make inference in probabilistic programs a lot faster.
Cybiote
Realize that the computer's advantages can also lead to weaknesses. By this, I mean that a computer's powerful and precise memory means that it is better able to work off raw correlations, without as much of a need to abstract or seek out causal models. While this might turn out okay for detailed large but ultimately simple (stationary) patterns, it will not be so advantageous in more dynamic settings or scenarios with multiple levels of organization with differing dynamics.
jimfleming
For #2 you've touched on the "AI Effect"[0] or moving goal posts.

[0] https://en.wikipedia.org/wiki/AI_effect

daveguy
At some point a consensus of AI researchers will decide that we have a generally intelligent system (able to adapt and learn many tasks, pass a legitimate Turing test, etc). Currently there are zero AI researchers who would claim such a thing, even with our current breakthroughs in specialized tasks.

The moving goalpost has been less an issue of "what is AI" and more an issue of "what are the difficult tasks at the edge of AI research". People with a passing interest (even most programmers) don't distinguish between the two. Of course "difficult tasks in AI research" is a moving goalpost, and it will keep moving until we achieve general intelligence and beyond. This is a requirement for progress in AI research. If those goalposts stop moving before we have a general intelligence, then something is wrong in the field.

When researchers (not the general public) start arguing whether the goalposts should be general intelligence or super intelligence that is when we know we have traditional AI. When we try to figure out how to get adult human level intelligence to take hours or days to train on the top supercomputers rather than months or years -- that is when we have AI. Even then, if the training part requires that much computational intensity, how many top supercomputers are going to be dedicated to having a single human level intelligence?

You could train current algorithms used in AI research for decades and have nothing resembling general human intelligence.

jimfleming
I agree, but I guess my point (in this comment and others) would be that we should stop thinking of intelligence, consciousness, free will, and other attributes as hard lines, and instead as gradients or quantities.
selectron
Human intelligence is basically the ability to solve a large collection of sub-problems. As models learn how to solve more and more sub-problems, they become closer and closer to human intelligence. Right now the focus is on solving important specific sub-problems better than humans, rather than the ability to solve a much wider variety of sub-problems.

Machine learning and human learning happen in much the same way. We have a dataset of memories, and we have a training dataset of results. We then classify things based on pattern matching. The current human advantage is an ability to store, acquire and access certain kinds of data more efficiently, which helps in solving a wider variety of problems. For problems in which machines have found out how to store, acquire and access data more efficiently (such as chess) machines are far superior to humans.

gavanwoolery
This is basically strong AI vs weak AI. I don't know what the ultimate solution is - it could be exactly as you describe. :) My theory is just that it will need to be generally applicable, on domains it is not trained on, if it is to reach human-level intelligence.
selectron
Human-level intelligence is not generally applicable on domains it is not trained on, so holding AI to this standard is ridiculous. Humans need to be taught just like machines do.
gavanwoolery
Yes, but there is cross-over of domains. For example, say you learned how to ride a bicycle. This might aid you in how fast you learn to ride a motorcycle, or vice versa. (Might be a bad example but I hope it illustrates the point)
pinouchon
You are referring to transfer learning
ThomPete
AI can do the same thing. What you are talking about is, e.g., balance.

On top of that, once balance is learned it's instantly transferable to other "machines", whereas each human has to learn it.

resu_nimda
Human intelligence is so much more than that. I feel like we vastly underestimate the problem when we make it sound so simple. "Oh well the machines are basically the same as us, so we just have to get them to be able to solve more sub-problems and then we've got it!"

> Right now the focus is on solving important specific sub-problems better than humans, rather than the ability to solve a much wider variety of sub-problems.

The focus is there because there are business applications and money there. Do researchers really think that some version of a chess-bot or go-bot or cat-image-bot or jeopardy-bot will just "wake up" one day when it reaches some threshold? That this approach is truly the best path to AGI?

A machine can play chess better than a human because the human used its knowledge to build a chess-playing machine. That's all it can do. It takes chess inputs and produces chess outputs. It doesn't know why it's playing chess. It didn't choose to learn chess because it seemed interesting or valuable. No machine has ever displayed any form of "agency." A chatbot that learns from a corpus of text and rehashes it to produce "realistic" text doesn't count either.

You could argue many of the same things about humans themselves. Consciousness is an illusion, we don't have true agency either, we also just rehash words we've heard - I believe these things. But it seems clear to me that what is going on inside a human brain is so far beyond what we have gotten machines to do. And a lot of that has to do with the fact that we underwent a developmental process of billions of years, being molded and built up for the specific purpose of surviving in our environment. Computers have none of that. We built a toy that can do some tricks. Compared to the absolute insanity of biological life, it's a joke. I think it is such hubris to say that we're anywhere close to figuring out how to make something that rivals our own intelligence, which itself is well beyond our comprehension.

justsaysmthng
From the presentation: "There's a lot of talk about how AI is going to totally replace humans... (But) I like to think that AI is going to actually make humans better at what they do ..."

Then immediately he continues:

"So it turns out that using deep learning techniques we've already gotten to better than human performance [....] at these highly complex tasks that used to take highly, highly trained individuals... These are perfect examples of how deep learning can get to better than human performance... 'cause they're just taking data and they're making categories.."

I think that brushing off the dramatic social changes that this technology will catalyze is irresponsible.

One application developed by one startup in California (or wherever) could make tens of millions of people redundant all over the world overnight.

How will deep learning apps affect the healthcare systems all over the world? What about IT, design, music, financial, transportation, postal services... nearly every field will be affected by it.

Who should the affected people turn to? Their respective states? The politicians? Take up arms and make a revolution?

My point is that technologists should be ready to answer these questions.

We can't just outsource these problems to other layers of society - after all, they're one step behind the innovation and the consequence of technology is only visible after it's already deeply rooted in our daily habits.

We should become more involved in the political process all over the world (!) - at least with some practical advice on how lawmakers should adapt the laws of their countries to avoid economic or social disturbances due to the introduction of a certain global AI system.

choosername
At that point it's a problem for AI to solve. When supply outweighs demand, selection will happen, and that will be fueled by analytics. The incentive to catch the rest in a soft social net is then uncertain. In those terms, surely, politics aren't always benevolent. The trend of the gap between the rich and the poor growing is significant.
eva1984
I resonate with this... but the progress will not stop. If the US is not going to do this, China will, Japan will, and then the US will need to follow and import it. There is no way back. How it will shift society, we will need to see. I don't think technologists can control it.
vonklaus
I just listened to Altman discussing this in a recent interview and it is obviously something he has thought a lot about. If you can't watch the video: he says that AI is an amazing resource, and YC is funding a basic income study in Oakland as well as OpenAI. He also points out that doing a task that a machine or computer can do is pointless. I find myself agreeing with what I believe his conclusion was:

we (humans) want AI and will all be better off for it. we are heading into a shift on the order of the industrial revolution and the best course is unknowable. we should harness this technology, try to distribute it by collaborative building & information dissemination, and study the idea of personal fulfillment & a basic quality of life.

In short, the answer is definitely not to stop working on it, because it is just basic game theory that someone/some entity will not. We need to leverage it to fix the problems it creates & learn how to allocate our resources by studying this immediately.

http://m.youtube.com/watch?v=FuijDaj8DvA

justsaysmthng
> He also points out that doing a task that a machine or computer can do is pointless.

This is true for some tasks - but not true for others, even if AI could in theory do them better.

For example, a medical doctor. It is easy to see this as just a job, but it's so much more than that - it's the culmination of 20 years of studies, a childhood dream, the feeling of being important and useful to society, the "thank you" from the patient, the social circle, the social status...

There are many people who actually enjoy their jobs, because it gives them meaning and satisfaction.

Another example - musicians. I see so many "AI composer" projects out there - algorithms which compose music... I think these people are kind of missing the point ..

It's easy to see music as notes and tempo, but it is much more than that. It is a medium, a tool, through which the listener can connect to the artist and experience his emotional state: https://www.youtube.com/watch?v=1kPXw6YaCEY

Having an algorithm on the other side feels so fake.. artificial..

armitron
Only because your mental map is old, archaic and will not allow you to see things any other way.

So what is the right course according to you? Do we set an artificial threshold at some point in the future, where these jobs will be swapped to AI? Do we simply never replace them with AI?

Do we slow progress in order to cater to people's obsolete expectations (we don't want to hurt their feelings, after all)?

No. Problems will appear and we will solve them. It is our duty to the universe, to ride the wave and see it all the way through.

EDIT: Virtual Reality will solve a lot of the "meaning of life/emotional satisfaction" issues that creep up. And VR is getting a jump .... right about now. It's quite amusing, just thinking about the timing.

justsaysmthng
> Only because your mental map is old, archaic and will not allow you to see things any other way.

I've done my fair share of acid and mushroom trips, explored the DMT hyperspace dozens of times, travelled through time and space into alternate realities, met and communicated with biological and machine beings and have thoroughly explored our planet.

My mind is wide open :)

Since you mentioned the Universe, it would help to remember that in "galactic" terms we climbed down from the trees only a couple of hundred thousand years ago; our technology is still extremely primitive, and it is not at all a proven fact that we can survive it.

More down to Earth, what I suggested in the parent post is that people who are bringing this technology forward should also be the people who come up with the political and social proposals for changes necessary to accommodate it.

We can't just unleash these "technological atomic bombs", which fundamentally change the social game and then expect the politicians to handle the fallout. Tech people and scientists need not stay on the sidelines any more - they must be the designers not only of technology, but of the social system too, since the two are merging anyway...

Of course this can't be "enforced", rather, it's an ethical thing to do on the part of tech companies, justified by the fact that deep down, our motivation is to make the world "better".

armitron
Yes and this will without a doubt take place too. In fact, some would say it's already happening.

My point is that the process, if we might call it that, will be osmotic and not discrete. Of course there is also the notion of people believing that there is a layer of central planning somewhere and that "we" (groups, organizations, nation states) actually have high-level organizational control over what is happening. Watching Taleb talk about tail events will quickly put these notions to rest.

We shouldn't waste time inventing such control when there is none or even inventing the illusion that such control will be effective. All we can do, I feel, is guide.

autokad
We will be closer to cracking neural nets, and closer to the singularity, when we can train a net on two completely different tasks and training on each task subsequently makes the other's predictions better. I.e.: train/test it on spam vs. non-spam emails, then train the same net on Twitter data for male vs. female.
gavanwoolery
I agree - this would be general intelligence. You can adjust weights for one problem very specifically to some degree of success, but having that affect a different slightly related problem (and even being able to distinguish between two differing problems) is going to require some rethinking, or enormous compute power exponentially beyond what we have now.
nl
You can do this now(?!)

In fact something as simple as naive Bayes will work reasonably well for that.

I'm not sure if you are aware, but in (say) image classification it's pretty common to take a pre-trained net, lock the values except for the last layer, and then retrain that for a new classification task (sketched below). You can even drop the last layer entirely and use an SVM and get entirely adequate results in many domains.

Here's an example using VGG and Keras: http://blog.keras.io/building-powerful-image-classification-...

And here a similar thing for Inception and TensorFlow: https://www.tensorflow.org/versions/r0.8/how_tos/image_retra...
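
For illustration, a minimal sketch of the "lock everything except the last layer" approach described above, using Keras; the 10-class target task and the training-data names are assumptions, not taken from the linked tutorials.

    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    # Pre-trained VGG16 feature extractor, with its original classifier removed.
    base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
    base.trainable = False  # lock the pre-trained values

    # New task-specific head: a single trainable softmax layer.
    model = models.Sequential([
        base,
        layers.Dense(10, activation="softmax"),  # 10 classes is an assumption
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(new_task_images, new_task_labels, epochs=5)

Dropping the new head entirely and feeding the pooled features from `base` into an SVM works along the same lines.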

dharma1
Transfer learning at the moment works within one domain (say, images), because the low-level shapes are still similar, but not between different domains of data.
nl
Sure, that's what I was pointing out.

However: Zero-Shot Learning Through Cross-Modal Transfer http://arxiv.org/abs/1301.3666

31reasons
One of the main challenges in Deep Learning is that it requires massive amounts of data, orders of magnitude more data than a human toddler needs to detect a cat. How to reduce the amount of data it takes to train a network could be a great area of research.

One main thing it lacks is imagination. Humans can learn things and can imagine different combinations of those things. For example, if I ask you to imagine a Guitar playing Dolphin, you could imagine it and even recognize it from a cartoon even though you have never seen it in your life before. Not so for Deep Learning, unless you provide a massive amount of images of Dolphins playing guitars.

lars
> One main thing it lacks is imagination. [...] For example, if I ask you to imagine a Guitar playing Dolphin, you could imagine it and even recognize it from a cartoon even though you have never seen it in your life before. Not so for Deep Learning

These days one should be careful when claiming things deep learning can't do. There are in fact systems that can imagine things they have never seen before.

Here's an example. It takes any text, and imagines an image for it: https://github.com/emansim/text2image

Here's a more recent example, with even higher quality images: http://arxiv.org/pdf/1605.05396v2.pdf

lkozma
To make that comparison fair, maybe you should also count the amount of data used during the human toddler's evolution when the structure of the brain became what it is.
pawelwentpawel
Interesting! Made me wonder - how would we compare the data that a neural net receives with the data that a toddler's visual cortex is getting?

On a purely visual level - let's say we have 10k static 32x32 images defining the class "cat". Or even more of them, plus some negative examples. Each image is a different cat, in a different position (they're incredibly flexible creatures). Having so many cases, we should be trying to make some kind of generalization of what 1024 pixels of a cat should look like.

A family with a toddler has only one cat. From that one example, he learns the concept of what a cat is and is able to generalize it in different situations. But a toddler has 2 eyes; his visual input is stereo. Even if he sees just one cat, it's not a static-image interaction. The input is temporal: he can see the cat moving and interacting with the environment.

camikazeg
So instead of 10k static 32x32 random images of cats for training data, why not do 10k frames of cat videos that add up to seeing the cat moving and interacting with the environment?
dharma1
I think stereo vision isn't super important for this - people blind in one eye don't do noticeably worse. Not to say depth doesn't help for segmentation or learning concepts but I don't think it's the key
mattnewton
Wouldn't we just need dolphins and guitars separately with something like https://github.com/awentzonline/image-analogies ?
31reasons
That's amazing. Yes, something like that but for concepts. Merging Dolphin + Guitar Playing. Both have visual elements to them but also conceptual elements.
mattnewton
To me though, this proves that it is merely a difference of degree and not of kind. You could do this with short stories about guitar dolphins, as many images as you'd like, maybe a song, and even videos someday with enough compute.
rryan
> It could be a great area of research on how to reduce the amount of data it takes to train the network.

This is typically referred to as "one-shot learning" in the literature -- and people certainly work on it!

rando18423
In a way, I think you just described the fundamental difference between "intelligence" and "very advanced applied statistical analysis." :)
visarga
> orders of magnitude more data than a human toddler to detect a cat

Perception works at 10-20 frames per second, all day long. That means roughly 0.5 million perceptions per day. Why would a small neural net that has 1/1000 the experience of a child and 1/1000 the size of a brain (assuming it has 100 million neurons, which is huge!) be able to be more accurate?
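
For reference, a back-of-the-envelope check of that figure, assuming roughly 10 frames per second over about 14 waking hours (both numbers are assumptions):

    frames_per_second = 10
    waking_hours = 14
    perceptions_per_day = frames_per_second * 3600 * waking_hours
    print(perceptions_per_day)  # 504000, i.e. on the order of 0.5 million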

What you are referring to is called "one shot learning" and is the ability to learn from a single example, which is being studied in the literature.

epberry
Pretty basic stuff - the history portion was more interesting than any of the content that followed. Anyone who's been paying the slightest attention in the last few years will be familiar with all of the examples used in the podcast.

On a side note, I always admire the polish of the content that comes out of a16z - it's typically very well put together.

vj_2016
http://www.videojots.com/davos/state_of_ai.html#2181

Apparently, robots still struggle to pick up things?

aficionado
Basically this video ignores the history of machine learning in general. Jumping from Expert Systems to Neural Networks and Deep Learning actually ignores 36 years (and billions of dollars) of research http://machinelearning.org/icml.html (Breiman, Quinlan, Mitchell, Dietterich, Domingos, etc). Calling 2012 the seminal moment of Deep Learning is quite hard to digest. Maybe it means that 2012 is the point in time when the VC community discovered machine learning? Even harder to digest is calling Deep Learning the most productive and accurate machine learning system. What about more business-oriented domains (without unstructured inputs), the extreme difficulty and expertise required to fine-tune a network for a specific problem, or some drawbacks like the ones explained in http://arxiv.org/pdf/1412.1897v2.pdf or http://cs.nyu.edu/~zaremba/docs/understanding.pdf?

Those who ignore history are doomed to repeat it. As Roger Schank pointed out recently http://www.rogerschank.com/fraudulent-claims-made-by-IBM-abo..., another AI winter is coming soon! Funny that the video details the first three AI winters but the author doesn't realize that this excessive enthusiasm for one particular technique is contributing to a new one!

gavanwoolery
First, I would have to agree that there is quite a bit of history being ignored, but I suspect this deck is meant more to excite and interest investors than anything else.

The value of AI depends largely on perceived value IMHO, and the frequency of "winters" will correlate with that. I think we are still a bit too early for VR to really take off, but that did not stop a $2b acquisition and loads of investor interest. This will probably artificially constrain what should be an AI winter right now, just because so much money is continuing to go into it.

I personally applaud that so much enthusiasm is going into AI right now, and though we are repeating history to some extent I still think we are making incremental advancements (however small) - even if this just means applying old AI techniques to new advancements in hardware.

jensv
It's ridiculous but xkcd's "Tasks" goes over how in CS, it can be hard to explain the difference between the easy and the virtually impossible. https://www.quora.com/As-a-programmer-what-are-some-of-the-t...
deprave
I think you're spot on with the observation that 2012 refers to when VCs discovered machine learning. Anyone who has recently interacted with VCs will tell you that they look for anything to do with machine learning (and VR/AR/MR), even when it makes no sense. There are going to be some companies who will be able to leverage machine learning to advance their business, namely, Google/Facebook who will probably claim they can offer better targeted advertising and such. Most other players who merely try to force machine learning on other fields are likely to realize that while the technology is cool, it's still too early for it to be generally applicable to "any" problem.

Especially dangerous is going to be the mix of machine learning with healthcare. I believe Theranos tried it and found out it's not that easy... I'd watch this space with skepticism.

gumby
> Especially dangerous is going to be the mix of machine learning with healthcare.

Medical diagnosis has been one of the primary application areas of AI since the 70s (maybe earlier, I can't remember off the top of my head). The widespread non-availability of automatic doctors should tell you how well that has worked :-(.

Coincidentally enough I worked both in AI (1980s) and drug development (2000s) and now really understand how hard it is!

I do believe we will soon see automated radiology analysis, as it is likely to appear to be most amenable to automated analysis. Presumably in Asia first, as the US FDA will justifiably require a lot of validation. The opportunity for silent defects is quite high -- you are right to say "especially dangerous".

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.