HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Lecture 1 | Machine Learning (Stanford)

Stanford · YouTube · 87 HN points · 17 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Stanford's video "Lecture 1 | Machine Learning (Stanford)".
YouTube Summary
Lecture by Professor Andrew Ng for Machine Learning (CS 229) in the Stanford Computer Science department. Professor Ng provides an overview of the course in this introductory meeting.

This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include supervised learning, unsupervised learning, learning theory, reinforcement learning and adaptive control. Recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing are also discussed.

Complete Playlist for the Course:
http://www.youtube.com/view_play_list?p=A89DCFA6ADACE599

CS 229 Course Website:
http://www.stanford.edu/class/cs229/

Stanford University:
http://www.stanford.edu/

Stanford University Channel on YouTube:
http://www.youtube.com/stanford

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
I hope everyone was referring to Andrew's blackboard course:

https://www.youtube.com/watch?v=UzxYlbK2c7E&list=PLA89DCFA6A...

instead of the one that started Coursera:

https://www.youtube.com/watch?v=PPLop4L2eGk&list=PLLssT5z_Ds...

Machine Learning:

* https://www.youtube.com/watch?v=UzxYlbK2c7E: Andrew Ng's Machine Learning course, the entry point recommended by most people

* https://mlcourse.ai/ : More Kaggle-focused, but also more modern, with interesting projects

Do both courses simultaneously, take good notes, write useful flashcards, and above all do all the exercises and projects.

Deep Learning

* https://www.fast.ai/ - Very hands-on; begin with "Practical Deep Learning for Coders" and then "Advanced Deep Learning for Coders"

* https://www.coursera.org/specializations/deep-learning : A more bottom-up approach that helps you understand the theory better

Do those two courses in parallel (you can try two weeks of Coursera followed by one of fast.ai in the beginning, and then just alternate between them), take notes, write good flashcards, and above all do the exercises and projects.

After that you will be done with the beginning; your next step will depend on what area interested you the most, and piling up way too many resources right now can be extremely confusing, so I would recommend doing a follow-up post after you've worked through the above resources. Also, for non-ML material, I recommend Scott Young's Ultralearning and Azeria's self-improvement posts (https://azeria-labs.com/the-importance-of-deep-work-the-30-h...)

It is also all free on YouTube: https://www.youtube.com/watch?v=UzxYlbK2c7E
marrowgari
This is great. But I find the advantage of Coursera is that it incorporates quizzes and programming homework into the lectures, reinforcing the learning. Also, the material is updated and more relevant to today's ML problems than his 2008 lectures on YouTube.
wuliwong
They also have it all on Stanford's site with some other information and course materials.

https://see.stanford.edu/Course/CS229

There are some lectures about machine learning on YouTube that I think are good to watch: https://www.youtube.com/watch?v=mbyG85GZ0PI&list=PLBkvosL9bM... and https://www.youtube.com/watch?v=UzxYlbK2c7E&index=2&list=PLB... My favorite channels are Stanford and MIT OpenCourseWare: https://www.youtube.com/channel/UC-EnprmCZ3OXyAoG7vjVNCA and https://www.youtube.com/user/MIT
Mar 11, 2016 · har777 on Hard Tech is Back
Don't do the Coursera one. Try this one instead: https://www.youtube.com/watch?v=UzxYlbK2c7E&list=PLA89DCFA6A...

Follow up with CS224/CS231 if interested.

Watch Ng at 57 minutes discuss the "cocktail party" problem and present an amazing solution: http://www.youtube.com/watch?v=UzxYlbK2c7E&t=57m0s
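The trick in that clip is independent component analysis, which unmixes two overlaid recordings. A minimal sketch of the same idea using scikit-learn's FastICA on synthetic signals; the tool and the toy data are my choice, not the lecture's Octave one-liner:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two "speakers"
    S += 0.02 * rng.standard_normal(S.shape)          # ambient noise
    A = np.array([[1.0, 0.5], [0.5, 1.0]])            # unknown mixing matrix
    X = S @ A.T                                       # two "microphone" recordings

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)  # recovered sources, up to sign/scale/order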
I'd suggest going back to the original YouTube lectures[1] and course materials[2]. The Coursera version is nothing but a hand-wavy, watered-down "feel good" version of the original class. I also really like Caltech's take, "Learning from Data"[3].

[1] https://www.youtube.com/watch?v=UzxYlbK2c7E

[2] http://cs229.stanford.edu/

[3] http://work.caltech.edu/telecourse.html

etherealG
Sorry, but I disagree. The in-video questions, the randomised problem sets, all the Coursera stuff really helped me to learn the ideas.
Nope, it's still around on YouTube and on the Stanford Engineering Everywhere site. The Coursera version of the class is much more introductory and skips significant parts of the full Stanford version.

http://www.youtube.com/watch?v=UzxYlbK2c7E http://see.stanford.edu/see/courseInfo.aspx?coll=348ca38a-3a...

One caveat though: the Coursera ML course is CS229A(pplied), which focuses on applying machine learning, rather than the mathematics behind it. The "real" CS229 lectures are on YouTube [1], and go much more in depth.

1. http://www.youtube.com/watch?v=UzxYlbK2c7E

I bet ~5% will complete the course (a quick check against the numbers follows the list below).

Here are the view counts for Andrew Ng's intro to machine learning course on YouTube (http://www.youtube.com/watch?v=UzxYlbK2c7E):

lec 1: 206,612
lec 2: 91,234
lec 3: 49,000
lec 4: 36,823
lec 5: 27,782
lec 6: 26,347
lec 7: 22,075
lec 8: 20,713
lec 9: 15,665
lec 10: 14,142
lec 11: 16,573
lec 12: 14,296
lec 13: 12,401
lec 14: 15,022
lec 15: 12,290
lec 16: 10,760
lec 17: 8,986
lec 18: 13,639
lec 19: 10,219
lec 20: 11,373
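A quick sanity check of the ~5% guess, using only the counts above (plain Python arithmetic):

    views = {1: 206612, 2: 91234, 20: 11373}
    print(views[20] / views[1])  # ~0.055: about 5.5% of lecture-1 viewers reach lecture 20
    print(views[2] / views[1])   # ~0.44: more than half drop off after the first lecture

Views aren't completions, of course, but the ratio lands right around the guess.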

czzarr
That's a ton more people than the usual number of enrolled Stanford students; it's a win for education in my book.
nplusone
These lectures are also available on iTunes U. It would be interesting to compare the number of YouTube views to the number of iTunes downloads. I watched the first two classes on YouTube, and downloaded the rest on iTunes, though admittedly I'm still only on lecture 6.
jmilloy
This sort of thinking is also supported by the fact that the number of views is not strictly decreasing.
Thanks for posting the other two courses. I can't wait till October. The Machine Learning course has a great playlist on YouTube as well.

http://www.youtube.com/watch?v=UzxYlbK2c7E&playnext=1...

This reminds me of CS229, of which there's an online version: http://www.youtube.com/watch?v=UzxYlbK2c7E

It's considerably more focused in scope, presenting the mathematics behind some of the more popular algorithms extensively used in machine learning, which is a subset of artificial intelligence. The course starts off pretty slow, but quickly gains speed and momentum. By the end, you should be fairly comfortable with clustering and classification/regression, among other topics. The lecture notes are also fantastic.
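For a flavor of that mathematics: the regression material builds up ordinary least squares, whose closed-form solution is the normal equation theta = (X^T X)^{-1} X^T y. A minimal sketch on synthetic data (my own toy example, not course code):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.c_[np.ones(50), rng.uniform(0, 10, 50)]  # intercept column + one feature
    y = 3.0 + 2.0 * X[:, 1] + rng.normal(0, 1, 50)  # noisy line y = 3 + 2x

    theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # numerically stable least-squares solve
    print(theta)                                    # approximately [3, 2]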

diminish
Yes, this one is quite comprehensible too. Thanks for the link. Any other links you might suggest?
ExxonValdeez
Stanford CS majors who take the AI track are required to take both CS 229 and this course, CS 221, so it definitely presents useful materials and concepts.
First, finish the lectures by Professor Gilbert Strang. http://web.mit.edu/18.06/www/

From memory, the session notes for CS229 are good enough for understanding SVMs and Gaussian distributions. Also watch the YouTube videos. http://www.stanford.edu/class/cs229/materials.html http://www.youtube.com/watch?v=UzxYlbK2c7E

If you just want to use the libraries, you can stop here.

If you want to know more, read chapters 1-3 of Nonlinear Programming by Professor Dimitri Bertsekas before convex optimization. http://www.athenasc.com/nonlinbook.html

Then, you can try to finish EE364 and watch the videos. http://www.stanford.edu/class/ee364a/ http://www.youtube.com/watch?v=McLq1hEq3UY

If you want to roll your own algorithms, you have to know some optimization tools (see the sketch below). http://cvxr.com/cvx/

And there is some statistics knowledge you have to fill in. I used these: http://www.stat.umn.edu/geyer/5101/ http://www.stat.umn.edu/geyer/5102/ R is used in the courses.
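The CVX link above is a MATLAB toolbox; here is a rough Python analog of the same declarative modeling style, using CVXPY (a swapped-in tool, not the one the comment recommends), for a lasso-style problem of the kind EE364 covers:

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)

    x = cp.Variable(10)
    # least squares with an L1 penalty, stated as a problem and handed to a solver
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + cp.norm1(x)))
    prob.solve()
    print(x.value)  # sparse-ish coefficient vector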

phektus
Thanks! For an achievable short-term goal I just wish to use the libraries first so I can roll my own simple apps. This way I get to learn the basics while making use of what I already know (building web apps). An integration of sorts, which should keep me motivated throughout. Eventually I'll go deeper, and will definitely work on the advanced topics you posted.
Feb 23, 2011 · 8 points, 0 comments · submitted by natsel
Build a better chess rating system and enter your system into the following competition: http://kaggle.com/chess.

You may want to use machine learning techniques, which you can learn using the Andrew Ng's Stanford lectures (http://www.youtube.com/watch?v=UzxYlbK2c7E&feature=chann...).

Jun 04, 2010 · 79 points, 17 comments · submitted by helwr
silkodyssey
Additional content (transcripts/handouts/assignments) is available on the Stanford website.

http://see.stanford.edu/see/lecturelist.aspx?coll=348ca38a-3...

kogir
Andrew Ng is great. He's worked on some really cool and practical stuff. Check out his projects:

http://www.cs.stanford.edu/people/ang/research.html

I helped build parts of the hardware and control software for Retiarius and the Snake robot:

http://www.cs.stanford.edu/people/ang/rl-videos/

imp
Some people have been working through this class together on Curious Reef. It's loosely organized, with people posting questions and ideas in the class forum: http://curiousreef.com/class/stanford-cs229-machine-learning... Might be a useful resource to people learning this material. (Disclosure: it's my website)
brown9-2
Thanks for posting this. The course is on iTunes U also, if anyone wants to download the files to sync to their iPod or iPhone.
mmaunder
I was hoping for a video thick with data I could apply or use to Google and learn more.
jules
This is good but not excellent. Too much theory motivated by theory motivating more theory, whereas in the real world the theory is motivated by and usually invented after practice.
jey
Huh? You use the theory to figure out what to implement in practice.

The idea of just randomly hacking some shit together then backfitting a theory onto it is absurd. That's the same strategy that led to many of the past failures of "AI" -- approaches based too much on intuition that wasn't theoretically well grounded.

Probability theory and statistical models are foundational material for anyone interested in machine learning.

moultano
I don't agree with the parent, but I also don't think backfitting the theory is absurd.

What I've discovered from using machine learning in practice is that it's far more important to degrade gracefully when you have little data than to do the theoretically best thing when you have a lot of data. What this ends up meaning is that a hacky thing that is somewhat reasonable but based on the realities of the data will usually perform better than something more sophisticated that made too many simplifying assumptions along the way.

(That said, stats is amazing, and is the most important thing to learn for anyone getting into machine learning.)
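A toy illustration of degrading gracefully with little data (a hypothetical example of mine, not the parent's): with two observations, a raw maximum-likelihood rate estimate swings to an extreme, while a pseudo-count-smoothed estimate stays reasonable and converges to the MLE as data accumulates.

    def rate_mle(hits, trials):
        return hits / trials if trials else 0.0

    def rate_smoothed(hits, trials, prior=0.05, strength=20):
        # shrink toward a prior rate; pseudo-counts dominate when data is scarce
        return (hits + prior * strength) / (trials + strength)

    print(rate_mle(2, 2), rate_smoothed(2, 2))          # 1.0 vs ~0.14
    print(rate_mle(50, 1000), rate_smoothed(50, 1000))  # 0.05 vs ~0.05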

madmanslitany
There's a great applicable quote from Nikola Tesla here: “If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. I was a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor.”
gjm11
As I just pointed out to someone else who quoted the exact same thing elsewhere on HN, that approach seemed to work out pretty well for Edison.
physcab
I disagree. What you just described is the definition of a crackpot. If you want to achieve real results you need to have a firm understanding of the fundamentals; then you learn how to extrapolate. Besides, when learning theory you often undertake case studies that allow you to see what real-world problems you can actually apply these theories to.

If you watch the lectures, Professor Ng does a good job of showcasing projects that benefit from various algorithms.

_debug_
Can you please suggest alternative university courses / reading material / progression?
herdrick
Do you know of any better lectures?
apurva
Well, there are loads of lectures on videolectures.net definitely worth a view, in particular those from the Machine Learning Summer School (primarily because audio quality tends to be pretty sucky in some of the other ones)...
dlo
Are you watching the same thing I'm watching?

"So I have a friend who teaches math at a different university, not at Stanford, and when you talk to him about his work and what he's really out to do, this friend of mine will — he's a math professor, right? — this friend of mine will sort of get the look of wonder in his eyes, and he'll tell you about how in his mathematical work, he feels like he's discovering truth and beauty in the universe. And he says it in sort of a really touching, sincere way, and then he has this — you can see it in his eyes — he has this deep appreciation of the truth and beauty in the universe as revealed to him by the math he does.

"In this class, I'm not gonna do any truth and beauty. In this class, I'm gonna talk about learning theory to try to convey to you an understanding of how and why learning algorithms work so that we can apply these learning algorithms as effectively as possible."

mindcrime
There's a pile of great AI/ML content up on YouTube as well. IIT has posted two cool AI series that HN'ers might like. See:

http://www.youtube.com/watch?v=eLbMPyrw4rw&feature=PlayL...

and/or

http://www.youtube.com/watch?v=fV2k2ivttL0&feature=PlayL...

patrickmclaren
These lectures follow Norvig's AI: A Modern Approach. If self-studying, I find that watching these lectures helps a great deal to reinforce what you've read.
This looks like a summary of the first lecture of the Stanford machine learning course on YouTube:

http://www.youtube.com/watch?v=UzxYlbK2c7E

I second the nomination of Bishop. It is the standard text. It is only two years old, and Bishop will teach you machine learning the way that the field practices it nowadays. In my lab of fourteen people, we must have six or so copies of Bishop.

I don't understand what is impractical about Bishop. If you are looking blindly to use an off-the-shelf machine learning implementation, that's one thing. Machine Learning has been described as the study of bias. If you want to understand when to pick certain techniques, and develop appropriate biases, then read Bishop.

"The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman gives more of a statistician's approach. The treatment is simply less broad, and also more dated.

You can also look at Andrew Ng's video lectures: http://www.youtube.com/watch?v=UzxYlbK2c7E He is very well-respected in the field. For certain students, watching a lecture may be preferable to reading a book.

pmbouman
In terms of "datedness," I would point out that the second edition is now out:

This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.

I'd be interested to hear what you mean by "simply less broad" as compared to Bishop's book, which (from having flipped through it) looks pretty comparable.

pmbouman
A few other things (sorry, not to snipe too much! :) )

-I'm skeptical of the idea of a single "standard text" in such a fast-moving field. New machine learning techniques appear constantly and are often documented online years before they appear in books. Some computer scientists say they prefer conference proceedings over academic journals because the latter take so long.

-Further, I'm not sure that the goal of any text should be to cover topics X, Y and Z in any case, which doesn't seem possible for a book to do. What does seem feasible is to set up a framework for analyzing the performance of different techniques. So I'd like to hear a comparison of how Bishop does that vs. HTF.

-You're of course correct that HTF takes a statistician's POV on the field - the authors are all professors of statistics at Stanford. They are also accomplished - Friedman was a co-author on CART, for example. I would instead ask the question: what can you get out of the book and the framework it offers?

-I think that part of the framework in machine learning is to think about bias AND variance, and how to trade them off successfully. This is an important part of model selection, for example.
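A tiny sketch of that bias/variance tradeoff (an illustrative example, not drawn from either book): fitting polynomials of increasing degree to noisy data, test error first falls as bias shrinks, then rises as variance takes over.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 30)
    x_test = rng.uniform(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 200)

    for degree in (1, 4, 12):
        coefs = np.polyfit(x, y, degree)  # underfit -> sweet spot -> overfit
        mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
        print(degree, round(mse, 3))      # test error typically bottoms out at a moderate degree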

Start learning from the guy right now: http://www.youtube.com/watch?v=UzxYlbK2c7E

Stanford class on machine learning by Andrew Ng, 20 lectures.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.