HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Jeff Dean’s Lecture for YC AI

blog.ycombinator.com · 439 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention blog.ycombinator.com's video "Jeff Dean’s Lecture for YC AI".
Watch on blog.ycombinator.com
blog.ycombinator.com Summary
Jeff Dean is a Google Senior Fellow in the Research Group, where he leads the Google Brain project. He spoke to the YC AI group this summer. Watch the talk and read his slides here.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Aug 07, 2017 · 439 points, 56 comments · submitted by danicgross
litzer
As somebody who's recently started learning more about ML, a lot of the work of an ML engineer does seem to be automatable (not doing research or pushing boundaries, but just applying ML to some product need). For example, choosing hyperparameters, deciding which features to collect, etc. seem like things that can be automated with very little human input.

His slide on "learning to learn" has the goal of removing the ML expert from the equation. Can somebody who's more of an expert in the field comment on how plausible that is? Specifically, in the near future, will we only need ML people who do research, because applying ML becomes trivial once automated?
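
For a concrete sense of what "automating hyperparameter choice" can look like today, here is a minimal sketch using scikit-learn's RandomizedSearchCV; the model, dataset, and search ranges are placeholder assumptions, not anything from the talk:

    # Randomized hyperparameter search: try n_iter random configurations,
    # score each with cross-validation, keep the best. No human in the loop.
    from scipy.stats import randint
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={
            "n_estimators": randint(50, 500),   # arbitrary search ranges
            "max_depth": randint(2, 20),
        },
        n_iter=20,   # 20 random configurations
        cv=5,        # 5-fold cross-validation per configuration
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)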

halflings
You will still need data engineers to build the whole data ingestion and processing pipeline (standardized tools like Spark can make that easier, but it's still a challenge in many cases).
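
For a rough sense of what that ingestion/processing work looks like, a minimal PySpark sketch; the paths, column names, and aggregations here are all hypothetical:

    # Read raw event logs, clean them, and write out per-user features
    # that a downstream model can consume.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingest").getOrCreate()

    events = spark.read.json("s3://bucket/raw/events/")  # raw event logs
    features = (
        events
        .filter(F.col("user_id").isNotNull())
        .groupBy("user_id")
        .agg(
            F.count("*").alias("n_events"),
            F.avg("session_seconds").alias("avg_session_seconds"),
        )
    )
    features.write.mode("overwrite").parquet("s3://bucket/features/users/")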
davedx
I am working on solving this problem at the moment -- I'm building a product that lets anyone build the ETL pipelines that produce inputs for an ML model. If anyone's interested in beta access (coming in a month or two), let me know: [email protected]
litzer
Right, but I'd consider that closer to the realm of general software engineering -- similar to collecting user analytics or building infrastructure to get data from point A to point B.

Maybe that's currently part of the job of an ML engineer. But if it's the only part, I don't think the role should be called ML engineer anymore.

wenc
There is one job that is still difficult for a machine to do well (although machines are improving): feature engineering.

ML works very well in bounded/closed domains like image and sound recognition. Open domains are much more challenging.

Building predictive models from data in specialized domains often requires insight, which machines cannot provide. For instance, let's say you collect a bunch of data and are trying to predict sales. You need domain knowledge, experience, and intuition to know which variables are causal and which are merely correlated. If you just throw all the variables into the mix and build a model from that, you will end up with a model that overfits badly.

There are automated "variable selection" techniques that can help prevent overfitting, but they are imperfect because machines can only detect correlation, not causation. Also, many regression/classification techniques are easily fooled by noise and highly nonlinear relationships. We did some work a few years ago comparing a predictive model built from a ton of sensor data (with automated variable selection) against a parsimonious one built on select data that we knew accounted for 80% of the effect. The latter model was far superior. Noise and non-causal variables often don't just "wash out", even with very good variable selection algorithms.

It takes domain knowledge to figure out which variables matter and which don't.
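
A toy illustration of the comparison described above, on synthetic data (real sensor data is far messier): an L1-penalized model handed every variable, versus a plain regression on the few variables known to drive the response.

    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    X_causal = rng.normal(size=(n, 3))     # 3 variables that matter
    X_noise = rng.normal(size=(n, 200))    # 200 irrelevant "sensors"
    y = X_causal @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=n)

    # Kitchen-sink model with automated L1 variable selection vs. a
    # parsimonious model on the known-causal columns only.
    X_all = np.hstack([X_causal, X_noise])
    kitchen_sink = cross_val_score(LassoCV(cv=5), X_all, y, cv=5).mean()
    parsimonious = cross_val_score(LinearRegression(), X_causal, y, cv=5).mean()
    print(f"lasso on all 203 vars: R^2 = {kitchen_sink:.3f}")
    print(f"OLS on 3 known vars:   R^2 = {parsimonious:.3f}")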

pakl
What was the architecture of your predictive model? Was it designed to learn the underlying physical dynamics from the tons of sensor data?
wenc
It was a hybrid of several algorithms. Yes, it was an adaptive model trained with a large historical dataset and updated daily.
freddealmeida
I think not. Any publications you could point to?
wenc
I'm not sure what the "I think not" was in response to. This was an industrial application, so there were no publications.
sputknick
If TensorFlow becomes the default library for deep learning, is that a good thing or a bad thing? Does it help, in that all researchers can focus on what's important (the data and results), or does it hurt, in that Google now controls an important paradigm for the next generation of computing?
halflings
I fail to see what's wrong with this: it's open-source, and anybody can write different libraries, or libraries that build on all the low-level work (the $$$ Google pours into paying experienced, skilled engineers) that goes into TensorFlow.

Alternatives exist in any case (PyTorch by Facebook, Theano, Caffe, etc.), and the best solution wins. (In the same way, scikit-learn once eclipsed all other ML libraries, but some would still use XGBoost, which had superior gradient-boosted trees.)

sillysaurus3
> the best solution wins

Minor quibble, but: not really. The earliest solution wins, e.g. rails. It has to be good enough, but after a certain point "best" stops mattering.

omot
> The earliest solution wins

Minor minor quibble, but: not really. The earliest solution wins, unless there is a 10x better solution, e.g. Google. There were a lot of good-enough options, but after a certain benchmark "earliest" stops mattering.

ehsankia
Absolutely. To me, it's very similar to programming languages. At the end of the day, it doesn't really matter if people use Google's Go, C++, or some other language. What matters is what's produced by it: the actual research and results.

When you see a useful program that helps you be much more productive, you don't really care whether it was coded in C++ or Python.

The one part Google does care about is being able to sell Cloud TPU, but I don't think that by using this tool you're instantly under their control.

davedx
Most of the time I agree with this, but in some areas (e.g. where Google makes Dart but also makes Chrome), I think there are risks.
agibsonccc
I personally think this would lead to stagnation in the space. Because of competition, TensorFlow is adding better ETL tools, dynamic computation graphs, and more.

I would be careful about having only one company's use cases served. Competition forces parties to innovate.

For example, MXNet and CNTK, or more recently Caffe2, serve different use cases and companies.

To give a recent example from our own deep learning framework (I compete with all the frameworks above): we compete in the big-data niche and recently found a few bottlenecks we had missed, precisely because people run comparisons.

li4ick
Researchers who use TensorFlow tend to use it as a backend only, with Keras for implementation. However, PyTorch and Caffe are much more popular among researchers. I personally enjoy PyTorch a lot.
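
A minimal sketch of that "TensorFlow as backend, Keras as front end" workflow: with the TensorFlow backend installed, Keras compiles this model down to TF ops. The architecture and shapes below are arbitrary placeholders.

    from keras.models import Sequential
    from keras.layers import Dense

    # Two-layer binary classifier; Keras handles graph construction,
    # TensorFlow executes it underneath.
    model = Sequential([
        Dense(64, activation="relu", input_shape=(20,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=5)  # X_train/y_train are hypothetical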
niyazpk
> I personally enjoy PyTorch a lot

Why?

Hyperbolic
A popular framework for iterative control sequences in deep learning (e.g. the Stack LSTM) is DyNet. It's being used more and more in the NLP community.
harigov
I believe their game plan is differentiation based on hardware rather than software alone. From that perspective, TensorFlow will probably play the same role for Google Cloud that DirectX played for Microsoft Windows. Since other cloud vendors aren't THAT far behind (notably Microsoft), I don't believe Google will win by that high a margin, if it wins at all.
TuringNYC
I think it is an incredibly positive thing that TensorFlow, Caffe2, Torch, and other open-source systems are competing actively in the space. These are all active projects with open-access support facilities. This is of enormous societal value, and we all reap the benefits in numerous, usually invisible, ways.
tanilama
It won't; however, it won't go away either.
blueyes
PyTorch is becoming much more popular among researchers, even if TensorFlow use is widespread among data scientists.

Would it be bad if TensorFlow became the default library for deep learning? For companies that aren't Google, yes. Many organizations don't like how Google has controlled Android development.

But I think an important part of Dean's lecture is the set of slides about making ML/DL expertise obsolete. That's the future, whether you use TensorFlow or not.

londons_explore
Android is a very closed open-source project. For example, if you write a pull request to add a feature as an outsider, the chances of it getting into the main project are very low.

Chromium is very open. In many cases, Google has to ask permission from outside contributors to commit code, since the outside contributors are the "boss" of part of the codebase.

TensorFlow is closer to Chromium's model. There are a lot of external contributors.

svara
> PyTorch is becoming much more popular among researchers, even if TensorFlow use is widespread among data scientists.

Could you maybe comment on why this is the case? Are there technical reasons for that?

agibsonccc
A big reason is dynamic computation graphs. TensorFlow Fold is supposed to be the response to that, but overall people are finding PyTorch to be more flexible.

(Warning: I'm biased. I compete VERY heavily with anything google including cloud and their deep learning framework.)
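
To illustrate what a dynamic graph buys you, a minimal PyTorch sketch: ordinary Python control flow inside the forward pass, with the graph rebuilt per input. The module and shapes are arbitrary placeholders.

    import torch
    import torch.nn as nn

    class DynamicNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(10, 10)

        def forward(self, x):
            # Data-dependent depth: the loop count varies per input,
            # which a static graph could not express directly.
            for _ in range(int(x.abs().sum().item()) % 3 + 1):
                x = torch.relu(self.layer(x))
            return x

    net = DynamicNet()
    out = net(torch.randn(1, 10))
    out.sum().backward()  # autograd traces whatever path was actually taken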

deepnotderp
It's cute, but it isn't going to completely obsolete ML/DL expertise. (And believe me, as a prospective supplier of DL compute, I would love for the answer to be "more compute".)

To give you an idea of how much compute it took: they spent two million dollars for one run on a relatively toy dataset, CIFAR-10. Imagine how much it would take on an ImageNet-sized dataset! Can your company afford ~$20 million+ per dataset?

I do think the hyperparameter-twiddling aspect might get automated, but believe me, that is much welcomed by the DL research community! I would much rather spend my time on new ideas than on trying ten different initializations :)

blueyes
Sure, but that's $2m of today's compute on today's chips. The constraints on chips and cost are moving, and Google is pushing them.
deepnotderp
Sure, so it becomes ~$2 million for an ImageNet-scale dataset with TPUs and ~$200K with our chips. Still pretty expensive :)
hallman76
As a ML enthusiast, this is incredible to watch!

I'm completely blown away that Google was working on full-scale physical architectures optimized for these problems. Talk about being two steps ahead of the game!

londons_explore
Smart people, combined with piles of cash, get you ahead at almost anything...

Sadly, there are still a lot of smart people without access to cash (academics), and piles of cash without the smart people to get value out of them.

bluetwo
If a doctor misdiagnoses an eye ailment, they might end up with a malpractice lawsuit. If an ML program misdiagnoses an eye ailment, what is going to happen?
TuringNYC
I'm working on this full-time. We sell our model as a pre-diagnosis tool. We tune the parameters to ensure low false negatives (fewer than a human doctor), though the false positives are higher. Even so, it whittles down the queue substantially. For non-clinical uses, depending on your use case, you can actually skip the negatives designated by the model. For clinical use cases, you might use it to prioritize your read queue and/or highlight areas of interest.

Feel free to reach out to me if you are curious. I can talk in great detail.

BTW, you can purchase insurance for activities like these, though I can't say I've seen a legal case yet, so I don't know exactly how it would work out.
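
A hedged sketch of that threshold-tuning idea: pick the decision threshold that achieves a target sensitivity (low false negatives), accepting more false positives. The scores, labels, and helper function below are all hypothetical, not the product's actual method.

    import numpy as np

    def threshold_for_sensitivity(y_true, scores, target_sensitivity=0.99):
        """Lowest score threshold whose recall on positives meets the target."""
        pos_scores = np.sort(scores[y_true == 1])
        # Keep the top `target_sensitivity` fraction of positives above it.
        idx = int(np.floor((1 - target_sensitivity) * len(pos_scores)))
        return pos_scores[idx]

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)                          # synthetic labels
    scores = np.clip(y * 0.3 + rng.normal(0.4, 0.2, 1000), 0, 1)

    t = threshold_for_sensitivity(y, scores, 0.99)
    flagged = scores >= t
    print(f"threshold={t:.3f}, queue whittled to {flagged.mean():.0%} of cases")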

bluetwo
Thanks for the info. I don't want anyone to get hurt, but it's only a matter of time before there's a suit, and I'm curious how it will play out.

I don't buy the argument that it's the same as any other equipment, although I can see companies trying to push the liability onto the healthcare provider as the one making the final call.

dguaraglia
This is why these tools are generally described as "decision support" systems and not "diagnostic" systems. There are many reasons why you don't want to be classified as a diagnostic device (FDA approval is a big one), but ultimately letting the doctor make the final decision takes the ML controversy out of the picture.

EDIT, for context: https://en.wikipedia.org/wiki/Clinical_decision_support_syst...

Jedi72
How long until doctors spend 10+ years studying only to sit in a chair and copy-paste diagnoses from software as some kind of legal blame-game piece?
killjoywashere
Any doctor that can be replaced by a computer should be.
faceplanted
You should see how long radiologists train to push a button.

I got that joke from a radiologist.

otoburb
The commercial entity backing the ML program should be responsible. If one doesn't exist, then the comparison fails, as one would presumably not be willing to trust the competency of a public-domain ML program without a true, commercially and medically backed "second" opinion (algorithmic or otherwise).

Perhaps one day, ML programs running eye diagnostics will be as cheap and disposable to use as pregnancy stick tests: "Point and diagnose."

solomatov
Most likely all these models have TOS or EULAs which shift liability in such situations.
TuringNYC
Yes, the commercial entity backing the ML program, and possibly multiple associated entities, would be held responsible, or at least dragged into litigation. You can get insurance companies to write custom policies for such things, but I don't think there has yet been a standards-setting case -- does anyone know of existing or in-progress case law in this arena?
xxSparkleSxx
If a doctor misdiagnoses someone based on an errant outside lab test, what happens?

I think the situations would be viewed identically. I don't know the exact processes, but lots of documentation would go into it, and if the lab that produced the errant test has a history of poor results, it may get investigated and shut down.

If this was a one-off error, that will get chalked up to a one-off error. Even if the patient dies.

londons_explore
The AI program would say "I think there is a 99% chance you have this eye ailment."

If that turns out to be wrong, the programmer will just point to the other 99 people successfully treated and shrug.

sohilv
AI will be a tool for doctors and patients as a second opinion on diagnosis. And at the very least, it will be the initial diagnostic tool where doctors are not present.
melling
If a car has to decide between killing ...

The trolley problem: https://www.youtube.com/watch?v=21EiKfQYZXc&feature=youtu.be...

robertelder
In this case I think most jurisdictions would treat the AI as just another tool or piece of equipment.

If you asked a similar question about who would take the blame if a piece of X-ray equipment was not working properly and caused some form of harm, the liability might fall on the X-ray machine manufacturer who guaranteed a certain level of accuracy, or on the hospital that procured the (potentially unsafe or uncertified) equipment. It might also fall on the physician who wasn't following the proper protocol for using the equipment.

spynxic
The coverage of the chain of command in this answer is so on point. One could literally follow each sentence to determine the culprit of any issue. Bravo -- A+ in troubleshooting.
bluetwo
Sure, but a jury doesn't have to follow the logical chain of events. Look at the recent talcum-powder lawsuits: no solid evidence it causes cancer, yet multiple million-dollar lawsuits against J&J saying that it did.
deboflo
Once, in early 2002, when the index servers went down, Jeff Dean answered user queries manually for two hours. Evals showed a quality improvement of 5 points.
1_2__4
I never thought Jeff Dean jokes would be a shibboleth but here we are.
nickelbox
I could imagine there being external Jeff Dean fans, but yeah, that jargon
ma2rten
https://www.quora.com/What-are-all-the-Jeff-Dean-facts
iandanforth
The notion of running one giant model with many sub-talents is epic. I can imagine all the disparate models they run today fusing into a giant network that melds predictions and guides computation as required by the task. That seems like a very Jeff Dean-scale endeavor.
erikpukinskis
It's a great idea, and human beings are proof that conjoined colonies of intelligent agents are going to be smarter than individuals. But it also runs up against tradeoffs, as evidenced by the fact that many humans exist and we didn't evolve into one giant ur-human, even after we all got our internet implant. Sometimes a few agents with different strategies working independently is better than a single agent choosing among the strategies of several others.
soVeryTired
Sounds like a joy to debug
cosminro
He is referring to this paper https://research.googleblog.com/2017/06/multimodel-multi-tas...

There they used one model to do image recognition, speech recognition, and translation in the same network.
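
Not the actual MultiModel architecture, but a toy sketch of the core idea it builds on: task-specific heads sharing one trunk, so training signal from each task shapes a common representation. All dimensions and task names below are arbitrary.

    import torch
    import torch.nn as nn

    class SharedTrunkModel(nn.Module):
        def __init__(self, in_dim=128, hidden=256):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.heads = nn.ModuleDict({
                "image": nn.Linear(hidden, 1000),      # e.g. image classes
                "speech": nn.Linear(hidden, 40),       # e.g. phoneme labels
                "translate": nn.Linear(hidden, 32000), # e.g. target vocabulary
            })

        def forward(self, x, task):
            # One shared representation, routed to the task's own head.
            return self.heads[task](self.trunk(x))

    model = SharedTrunkModel()
    logits = model(torch.randn(4, 128), task="speech")
    print(logits.shape)  # torch.Size([4, 40])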

shpx
He also mentions sparsely activating only the neurons that matter, which they explore in https://arxiv.org/pdf/1701.06538.pdf

Personally I didn't find it very satisfying; I imagine something more fundamental and self-referential.
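
A toy sketch of the sparse-activation idea from that paper (a top-k gated mixture of experts: only k of the experts run for each input). This omits the noise and load-balancing terms the authors use, and all dimensions are arbitrary.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        def __init__(self, dim=64, n_experts=8, k=2):
            super().__init__()
            self.experts = nn.ModuleList(nn.Linear(dim, dim)
                                         for _ in range(n_experts))
            self.gate = nn.Linear(dim, n_experts)
            self.k = k

        def forward(self, x):                     # x: (batch, dim)
            weights, idx = self.gate(x).topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)  # renormalize over chosen k
            out = torch.zeros_like(x)
            for j in range(self.k):               # run only selected experts
                for b in range(x.size(0)):
                    e = idx[b, j].item()
                    out[b] += weights[b, j] * self.experts[e](x[b])
            return out

    moe = TopKMoE()
    print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])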

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.