HN Academy

The best online courses of Hacker News.

Hacker News Comments on
Supervised Machine Learning: Regression and Classification

Coursera · DeepLearning.AI · 34 HN points · 177 HN comments

HN Academy has aggregated all Hacker News stories and comments that mention Coursera's "Supervised Machine Learning: Regression and Classification" from DeepLearning.AI.
Course Description

In the first course of the Machine Learning Specialization, you will:

• Build machine learning models in Python using popular machine learning libraries NumPy and scikit-learn.

• Build and train supervised machine learning models for prediction and binary classification tasks, including linear regression and logistic regression.

The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. In this beginner-friendly program, you will learn the fundamentals of machine learning and how to use these techniques to build real-world AI applications.

This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.

This 3-course Specialization is an updated and expanded version of Andrew’s pioneering Machine Learning course, rated 4.9 out of 5 and taken by over 4.8 million learners since it launched in 2012.

It provides a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, recommender systems), and some of the best practices used in Silicon Valley for artificial intelligence and machine learning innovation (evaluating and tuning models, taking a data-centric approach to improving performance, and more).

By the end of this Specialization, you will have mastered key concepts and gained the practical know-how to quickly and powerfully apply machine learning to challenging real-world problems. If you’re looking to break into AI or build a career in machine learning, the new Machine Learning Specialization is the best place to start.

HN Academy Rankings
  • Ranked #4 this year (2024)
  • Ranked #2 all time
Provider Info
This course is offered by DeepLearning.AI on the Coursera platform.
HN Academy may receive a referral commission when you make purchases on sites after clicking through links on this page. Most courses are available for free with the option to purchase a completion certificate.
See also: all Reddit discussions that mention this course.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this url.
Andrew Ng's machine learning course on Coursera is a good introduction to the theory.
I was about to share the same link, really good solid foundation and understanding of what is behind ML
Not sure if it's the same material, but there is a course by him on YouTube as well.

I've watched every video in that playlist. He is a fantastic teacher.

I am CTO of a startup (and used to be a freelance dev). I once did the Introduction to Machine Learning course on Coursera.

Although I would agree with most opinions here that that does not make me a data scientist by any means, I do really like that I have a good "helicopter view" of ML. This is still super beneficial in my role today, as I know which kinds of statistical models apply to which kinds of problems. This enables me to find the right people for the right solution with much more ease.

I don't think OP meant course completion certificates.
I'm 25 and I studied a computer science BSc. After I graduated I worked as a data scientist for 2 years and now I'm a machine learning engineer at a small start-up. I have also been feeling held back from the more senior roles by my lack of mathematics and toyed with the idea of an MSc or PhD. I applied to the Imperial and UCL Machine Learning MScs on 3 separate occasions and was rejected from all of them because of the lack of maths. In the end it would have cost £16k (course fees for 1 yr) + £26k (loss of earnings for the year after tax), so I haven't applied since because, although studying those masters programmes would give me a great mathematical foundation, I can also 100% self-study and save £££.

The habit of studying is the most important habit you can learn at this early stage. If you haven't read it, I strongly advise you read Atomic Habits. It's really helped me. Here are some great mathematics for machine learning courses, some I have taken and some I am currently studying.

Study tips:

- Always show up even if you only study for 5 minutes; it's more important to show up than to be perfect
- Make studying a daily habit even if you only do it for a short amount of time
- Stack studying onto the end of another daily habit, e.g. study after you've eaten breakfast before you start work (this is what I do)
- Make studying satisfying (I bought a small calendar and cross off each day I succeeded with a sharpie, and it's super satisfying crossing them off)
- Remove distractions by putting your phone on charge in another room
- Make notes on new concepts in the form of questions; these can be used later on flash cards to help refresh the material and avoid the effects of the forgetting curve
- Write code to see the maths in action. Running code is immediately gratifying and makes the value of the maths real and tangible.

- (free, approachable)
- (like £10 on sale, very approachable)
- (like £10 on sale)
-
- [textbook, not very approachable until you have some background] (free)

I finished machine-learning[1] a long time ago and it's so good. Looking forward to this [2].

[1] [2]

The only downside of [2] is that it is taught in Keras + TensorFlow rather than PyTorch.
I am not an expert or industry practitioner of either recommendation engines or anomaly detection; I just meant simple and useful things to add to a website.

Suppose your website has posts and you want to flag posts when they have abnormally high likes, because they might be great reading or complimenting your new release. You could collect a dataset of likes after a day, X, for each post. Then calculate the mean and variance and fit a normal distribution[1]. Then calculate z such that P(X >= z) = 0.01 (1%); z represents the cut-off point above which typically only 1% of posts fall. Then, when a post is above z, say 1000 likes, you can see what all the fuss is about.

I am just talking about applying 16-18 school maths in a simple way, to point out unlikely events. Of course the distribution of likes may not look like a normal curve if you plot (number of posts with x likes against x) so a different distribution may make more sense. It may not be a perfect model but just a quick and dirty thing to try, :).
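A rough sketch of that cut-off calculation in Python (the like counts here are made up, and I'm assuming SciPy's norm.ppf for the inverse CDF):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical likes-after-one-day counts for past posts (illustrative data).
likes = np.array([120, 95, 130, 110, 80, 150, 105, 90, 140, 100,
                  115, 125, 85, 135, 95, 105, 110, 120, 100, 90])

mu, sigma = likes.mean(), likes.std(ddof=1)   # fit a normal distribution

# Cut-off z such that P(X >= z) = 0.01, i.e. the 99th percentile.
z = norm.ppf(0.99, loc=mu, scale=sigma)

def is_unusual(new_likes):
    """Flag a post whose like count lands in the top ~1% of the fitted model."""
    return new_likes >= z
```

Swapping in a different distribution just means replacing `norm` with, say, a log-normal if the plotted data looks skewed.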

Personally I enjoyed completing the free Andrew Ng Machine Learning course[2] on Coursera which covers this and quickly training a simple recommendation engine for movies. It also covers multi-variate Gaussian distributions if you want to flag based on more than one criteria. For this course, the maths is relatively accessible and they go over what you may have forgotten so you can pick up maths as you go along.

Of course you can go far more complex if you like but I don't know much about that.

[1] Normal distribution


If you actually want to understand and implement neural nets from scratch, look into 3Blue1Brown's videos as well as Andrew Ng's course.

I actually tried to implement a neural network from scratch by following 3Blue1Brown's videos, using the same handwritten-digit data set. But I got stumped when I realized I didn't have a clue how to choose the step size in gradient descent, and it's not covered in the videos. Despite that problem I'd say the 3B1B videos are excellent for learning the fundamentals of neural networks.
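For what it's worth, the step size (learning rate) is usually just a small constant you tune by hand, and the failure mode is easy to see on a toy problem. A minimal sketch (not from the videos; the function and rates are made up):

```python
def gradient_descent(grad, x0, lr, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
grad = lambda x: 2 * (x - 3)

good = gradient_descent(grad, x0=0.0, lr=0.1)   # converges close to 3
bad = gradient_descent(grad, x0=0.0, lr=1.1)    # step too large: overshoots and diverges
```

Ng's course covers this directly: if the loss goes up instead of down, shrink the learning rate (often by factors of ~3 or 10) until it converges.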
I found Andrew Ng’s Deep Learning Specialization much better for understanding neural networks than the machine learning course.
I completed Andrew Ng's Coursera course and one thing it did not do is make me understand neural nets from scratch. Probably, you and I have different interpretation of "from scratch".
I agree, Andrew Ng's course is overrated.
I can think of many - I have taken several starting in 2013. The tricky thing is that Coursera classes seem to get merged, re-mashed or otherwise re-branded, and as such only one is currently listed in the "Completed" courses section of my profile.

Having said that, and with the caveat that these have probably changed since I took them, I recommend the following:

- Cryptography - - great introduction to the fundamentals and math behind cryptography. A lot of theory but also some practical exercises. This is my top recommendation.

- Machine Learning - - a good introduction to the basics of machine learning; focuses on Octave/MATLAB and does not dive into frameworks like scikit-learn or TensorFlow

- Introduction to Interactive Programming with Python - - I took a course from Rice University on Python programming through making games that was fun. As far as I can tell, this is the modern incarnation in two parts.

- Software Security - - goes into stack / overflow exploits, tools for testing, and web-based attacks

- Functional Programming Principles in Scala - - this was a good introduction to scala and functional programming - it got me thinking in a different way

- C++ for C Programmers - - I think this was the first coursera class I took. This course dove into the C++ STL and a lot of modern features introduced in C++11.

> as such only one currently is listed in my "Completed" courses section of my profile.

That's surprising to me: wouldn't Coursera want learners to be reassured that whatever signalling benefit there is to completing a course will remain forever?

I took a few courses in 2013 just to see what MOOCs are really like and completed two (Programming Languages, as taken by many here, and Introduction to Mathematical Thinking, which IIRC was mostly about logic) which indeed are not listed under "completed" in my profile. I found them at though.

> I found them at though.

Thanks for pointing that out! I have 11 courses in the accomplishments and just one in "completed" courses.

You could learn python/machine learning.

There are a bunch of guides like the coursera course I mentioned and on youtube there is the official tensorflow channel also!



I also highly recommend ML, not for the experience (though it may certainly come in useful) but because it's surprisingly entertaining.

Last year I had a lot of fun playing with GPT-2 on Google Colab (you can run it yourself on a decent GPU tho). My friend and I were trying to generate the funniest possible output to make each other laugh -- as an aside, this might be the only category where GPT-2 beats GPT-3 hands down :)

Python is also a joy in its own right. xkcd said about Python "programming is fun again!" and that has certainly been my experience.

I second this. With Webdev + Machine Learning skills, you could build interesting products.

I'll give you an example from my life. I built PredictSalary, a browser extension to predict the salary range for job opportunities which don't disclose a salary range.

A browser extension is basically a bunch of JS files injected into web pages. If you know webdev (frontend), you are set to build a browser extension. All you need to learn is machine learning and deep learning. You don't need to dive deep (unless you want to). In my case, I basically just use basic regression to predict the salary range.
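As a toy sketch of the idea (made-up numbers, not the extension's real model), basic regression for a salary range could be as simple as fitting a line and reporting a band around the prediction:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: years of experience parsed from a posting -> salary midpoint.
X = np.array([[0], [2], [4], [6], [8]])             # years of experience
y = np.array([40000, 55000, 70000, 85000, 100000])  # salary midpoints

model = LinearRegression().fit(X, y)

mid = model.predict(np.array([[5]]))[0]
salary_range = (mid - 10000, mid + 10000)           # report a band, not a point
```

The interesting work in practice is the feature extraction from the job posting, not the regression itself.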

I have self-taught this material and have been working professionally in the field for some years now. It was primarily driven by the need to solve problems for autonomous systems I was creating. When I am asked how to do it, I give the progression I followed. First, have (preferably) a CS background, but at least Calc 1 & 2, Linear Algebra, and university statistics; then:

1. Read "Artificial Intelligence A Modern Approach" and complete all the exercises [1]

2. Complete a ML course I recommend the Andrew Ng Coursera one [2]

3. Complete a DL course I recommend the Andrew Ng Coursera one [3]

4. Complete specialization courses in whatever you are interested in such as computer vision, reinforcement learning, natural language processing, etc. These will cover older traditional methods in the introduction also which will be very useful for determining when DL is not the correct solution.

Additionally I suggest a data science course which usually covers other important things like data visualization and how to handle bad/missing data. Also, I learned a lot by simply being surrounded by brilliant people who know all this and being able to ask them questions and see how they approach problems. So not really self taught as much as untraditionally taught.

Unfortunately not a single person has actually followed the advice. Everyone has only watched random YouTube bloggers and read blogs. Some have gotten into trouble after landing a job by talking buzzwords, and have asked for help, but my advice does not change.

It does make it rather hard to find a job without a degree though; I would not recommend it. All of mine have come only from strong references, after luckily getting my foot in the door initially.




Edit: formatting, typo

Teaching isn't just about presenting the information to the student.

That's basically all you've done. Here, student, read these complex topics and at the end of it all you will have learned machine learning!

The art of being a teacher is much more nuanced. You (apparently) fail to present the material in a way that is accessible, relatable, and not overwhelming.

For example, the first thing you say to do is go read a college textbook front to back, and do all the exercises. And you're surprised that nobody has followed your steps?

Yes, you are right. I'm not trying to teach the dozen or so people who have asked, only to lay out a progression with prerequisites similar to what I did for self-learning. I certainly do not have time to create courses and problem sets, or to do anything more than answer specific questions.

The AIMA book has a lot of open resources around it that I always mention, including a full open course I believe; it should all be linked on the site. I also mention that, while it is probably not a good idea, they can possibly skip it and go right on to the ML course. Both of the Coursera courses are complete with lecture videos and work, presented in a very accessible manner, including interesting projects.

Coursera is bullshit.
Thanks for sharing your learning path and congrats to your success. There are so many great resources out there that it's possible for anyone to become an expert. Unfortunately, people like you who are willing to put in the hard work seem to be the minority. All of your success is well-deserved and props to you. Just like you said, most gravitate towards the easy-to-understand videos and blogs instead of confronting their gaps in knowledge. I've had the same experience with giving advice - anything that looks like it requires focused work (solving textbook problems!) or is unsexy is readily ignored in favor of the latest hype demo or high-level framework.

> It was primarily driven by the need to solve problems for autonomous systems I was creating.

I wonder if your success also had something to do with the fact that you had a specific problem you were trying to solve. Did you feel that your specific problem put everything you were learning into context and made it easier to stay motivated?

> Did you feel that your specific problem put everything you were learning into context and made it easier to stay motivated?

Absolutely, having focus on finding better solutions to a single problem for multiple years certainly helped putting everything into context and staying motivated. That is the biggest problem I think, really learning this stuff like is needed to in order to solve new problems takes years of investment and just can't be done in a week or a month. Pretty hard to stay focused on something for that long without a goal and support of some sort.

The way that having the specific long term problem to solve really helped was always having that thread spinning in the back of my mind thinking about how something new could be applied to solve part of it and possibly trying it out. Also thinking about if certain approaches could even be practical at all.

I suppose that is probably fairly similar to grad school though.

> I wonder if your success also had something to do with the fact that you had a specific problem you were trying to solve.

As a side note, and I've found that this is one of the best ways to learn. Find a hard but obtainable problem and work towards it gathering all the knowledge you need along the way. What works for me is breaking down a project into a bunch of mini projects and so it becomes a lot easier to track progress and specify what I need to learn. That way even if you don't finish the project there's still a clear distinction of what you learned and can do.

I'm highly in favor of project oriented learning.

I completely agree with this. In undergrad, I majored in non-profit management. Every single course in major had a field work requirement. Grant writing class required us to work with an area charity and write them a grant application, which our professor graded as one of our assignments. Same thing for program evaluation class and the rest. In addition to learning the topics, I learned so much about how to work with real world teams.
I have completed both of the following:



I highly recommend both of them, if they cover a subject you're interested in, though if you only have time for one, they are listed in descending priority order.

Given how low its time + energy requirements are, and how large the pay-off has been, I recommend Learning How to Learn by Dr. Barbara Oakley to everyone regardless.

I did Learning How To Learn twice and I'm still not sure how helpful it is. The only thing I found useful/applicable is spaced repetition.

What part of the course did you find valuable? I don't mind going back to do it again.

The focused vs diffuse thinking part was very useful for me.
How do you apply this? Let's say we want to learn a new programming language. I should spend time focused on a topic, but then how do I apply diffuse thinking here?
I learned to value physical activity for diffuse thinking, and related to it when I unconsciously felt the urge to get up and walk around the room while thinking about some subject I'd learned.
A meaningful part of what I found useful was that I knew ~<10% of the content. It was 8 hours of effort (Tuesday + Thursday 1-hour lunchtimes for four weeks, including all videos, quizzes, note taking (on the phone I was watching it on), and tests), and I'm now sure I know how to study effectively.

I think one working day is worth it to make sure you're up to date.

I also found the working-memory and memory information in general very informative and helpful.
Not that good of a course, from someone who already did it.
Care to explain why? Anyone else who did it care to chime in?
It was good like 10 years ago and it did age well but it's a little bit outdated on the video/audio quality and the tools and algorithms you learn about. I think it's surprisingly up to date for a fundamentals course that old, but still a bit outdated.
I'm doing it along with Ng's newer courses at the moment and I really like that he focuses on all the basics mathematically as well, and not only conceptually, which gives you a deeper understanding of machine learning imo. However, as others have said, the audio quality is subpar, and personally I find it hard to motivate myself for the programming challenges in Octave. So my suggestion would be to just view the videos and take notes, and then do the newer courses and their challenges.
Due to the low quality of the video and audio I honestly struggled to want to go through the material.
I second that - great content, but terrible audio / video quality, unfortunately, which makes it less enjoyable and a bit of a struggle tbh.
It requires Matlab for instance.
back then, matlab was the thing
actually, Matlab is still the thing depending on the domain you are working in. I don't get the hate towards Matlab generally from CS people. Maybe because it's paid?
Rather, it is because it is a poorly suited language, in that it isn't aware of modern programming approaches.
I completed the course with Octave, but yeah this language is a hurdle that people don't need.
I took it a few years back. It was an introduction to ML course.
I took it several years back and enjoyed it. I liked that the course had you implement the whole training pipeline yourself rather than using a framework (not sure if the newer class does the same). While you would likely not do this in practice, I felt it helped my intuition when using the frameworks since I had a sense of how the internals were working.
It's the course that launched Coursera, and it is still one of the most highly rated on the platform, so I dare say that you are in the minority with that opinion.
I would recommend Andrew Ng's updated course on Deep Learning with Python instead.
As someone that's new to ML but interested in it, do you recommend people skip the original course? Does it cover the same things?
I took it about 6-7 years ago, so I totally believe you
Yeah - the updated version is much better (I've completed both of them) just because you don't need to struggle with Matlab.

Overall, this course is extremely good mostly because Ng covers the essential theoretical topics and gives some practical advice. Also, the topics are explained really well and you do not need to look up additional material. Also, I really appreciate that he took the time to derive those equations while others just drop the results.

I started the Deep Learning course last night and I too think it's really good. After you finished the series of courses, what did you move on to?
ML engineer here. I didn’t take any ML classes in college and picked up most of what I know on the job.

I think this advice is directionally correct - reading through a theory-dense textbook like Bishop, which many consider to be a foundational ML textbook, is likely to be a bad use of your time. However, I think it does help to start with some theory, if only to give you the vocabulary with which to think about and get help with issues that you run into. At the risk of sounding like a broken record, Andrew Ng's class on Coursera is quite good - it's accessible with a bit of basic calculus knowledge (simple single-variable derivatives and partial derivatives are all you need) and basic linear algebra (like, matrix multiplication). The whole class took me around 30 hours to get through, so if you're determined, you could probably finish it in 2-3 weeks even if you're pretty busy.

Also, if you like having text notes to refer to, I made these notes for myself a few years back when taking the class. There are some spots where, for my own understanding (I'm a bit of a stickler for mathematical rigor), I added more of the reasoning/equation pushing that Ng glosses over in his lectures. I would say that for a practical understanding of how to apply the concepts covered in the class, there's no need to read those parts carefully (there's a reason why Ng glossed over them).

But yeah, to all the people saying you should start by reading entire textbooks on multivariable calculus, statistics, and linear algebra...that’s not necessary. Most ML engineers I’ve met (and even most industry researchers, although my sample size there is much smaller) don’t understand all of those things that deeply.

Also, one last semi-related note - if you’re reading a paper and get intimidated by some really complex math, oftentimes that math is just included to make the paper look more impressive, and sometimes it’s not even correct.

I started with with the machine learning course[0] on Coursera followed by the deep learning specialization[1]. The former is a bit more theoretical while the latter is more applied. I would recommend both although you could jump straight to the deep learning specialization if you're mostly interested in neural networks.



From a professional perspective, does it make sense to get on a train that is so crowded already? Step 0 is probably to take Andrew Ng's course on Coursera, but as of right now, you'd be among "2,647,287 already enrolled!" [0]


> but as of right now, you'd be among "2,647,287 already enrolled!"

* Enrolling yourself in a free online course does not, at all, make you an ML expert. A very sizeable portion of those enrolled may not have gone further than the first chapter.

* The ML train (as in people actually knowing ML) is not, at all, crowded

* Even if the train was crowded, learning ML does not make you forget what you already know. Saying "I don't want to learn this skill because so many people already have it", just means there are that many people that now have one more skill than you (unless you decide to allocate this time to learning something else)

> learning ML does not make you forget what you already know

I often feel that learning something new takes such energy and mental rearrangement that it does crowd out what I already "know". The brain is a neural network whose weights are constantly being adjusted, just because you learned something at one point does not mean it is permanently there.

For example, I spent many years working in networking, but now that I've been out of that field for several years and working in embedded firmware and data, I would have to relearn much of what I "knew" in my old field in order to be professionally productive. And that's apart from the field itself advancing in ways I haven't kept up with.

It's like riding a reverse-steering bicycle. By learning something that's in direct competition with your existing neural structures, your neural structures change and you no longer "know" what you used to. Jump to 5:10 to see this person, who took 8 months to learn to ride a reverse bicycle, try to ride a straight bicycle:

> but now that I've been out of that field for several years and working in embedded firmware and data, I would have to relearn much of what I "knew" in my old

This seems to me to be mostly caused by being out of the field for so long, not by doing something else while you were out.

That seems like a distinction without a difference. The reason time erases skills is because you're always doing stuff, and over a longer period of time, you've done more stuff, which has reprogrammed your brain.
You have to be joking. Demand for ML skills, especially scaling in production, is off the charts.
> especially scaling in production,

The overwhelming majority of people studying ML at the moment, across all the various course types, are developing little or no skills in this area. Which is one of the problems - there is an oversupply of talented junior people with some modeling experience and a rough idea of how ML training can work in practice and little else. Which covers what, 10% of the work needing to be done? There is a huge under supply of people with significant production experience, or real skills across the broad range of things that make up a useful ML pipeline.

Scaling ML in production doesn't take much ML skills though.
The bulk of industry ML work isn't actually suited to ML phds without engineering skills. I work in FAANG and this is a huge problem where the ML phds have poor communication with skilled engineers and a lack of engineering experience. They often even look down upon people who don't have fancy credentials. Unfortunately they just end up creating a money wasting disaster of a system.
I'd guess they have a very high drop-out rate.

Regardless, "machine learning" is a very broad field and honestly I have no idea what an "ML engineer" is doing if they are one. It can cover any of the following:

1. Cutting-edge academic research (do better on this test set)

2. Doing data analysis to identify prediction ability

3. Creatively thinking of useful features to evaluate.

4. Implementing data pipelines/logging to obtain the features needed for #3.

5. Production systems to evaluate/train ML systems. (multiple places in the stack).

Because the spectrum is so wide, if you are already an engineer, you can readily get into "ML" categories 3-5 and even 2. Andrew Ng's course is a valuable introduction and not that heavy of an investment -- I found that just with it (alongside my product and infra background), I could readily contribute to ML groups at my company.

>> 1. Cutting-edge academic research (do better on this test set)

It's interesting you put it this way. I think most machine learning researchers who aspire to do "cutting-edge" research would prefer to be the first one to do well on a new dataset, rather than push the needle forward by 0.005 on an old dataset that everyone else has already had a ball with. Or at the very least, they'd prefer to do significantly better than everyone else on that old dataset.

I bet you remember the names of the guys who pushed the accuracy on ImageNet up by ~11%, but not the names of the few thousand people who have since improved results by tiny little amounts.

That's good you got exposure to Python (and, I assume, numpy/scipy/pandas etc.), and you're already familiar with R. Are you majoring in data science, and just looking for something extra?

If you're interested in machine learning, the Andrew Ng Coursera course is almost a rite of passage at this point - it's very accessible:

Kutner is the bible on regression models (tho, not a super fun read):

This was one of my favorites as an undergrad:

I’ll check these out, thanks for the recommendations. I’m not majoring in Data science, but I do find it really interesting and want to learn more.
Not the parent, but I did an undergrad maths/physics degree some time back and found to be good as an introduction, unfortunately a new job [unrelated] has prevented my finishing the course but I hope to pick it up again later in the Winter.

I would be interested in thoughts from anyone with ML experience who has reviewed said course's materials?

I learned a ton from the linear algebra section of the Stanford Machine Learning course on Coursera:

I'm biased, because I took the course when it was called "ML Class" in 2011:

Everything is done in Octave (i.e. an open-source MATLAB-like language); its primitives are vectors and matrices - so you'll have to wrap your head around that.

But that course gave me the first explanation as to how neural networks actually worked that I could understand; I had been reading about neural networks for years from various sources - books, online, videos, etc - and nothing ever "clicked" for me (mainly around how backprop worked). For some reason, this did it for me.
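For anyone at the same stage, the usual "click" is realizing backprop is just the chain rule. A hypothetical one-neuron sketch (not from the course materials):

```python
import math

# A one-neuron "network": prediction = sigmoid(w*x + b), squared-error loss.
# Backprop here is nothing but the chain rule applied step by step.
def forward_backward(w, b, x, target):
    z = w * x + b
    a = 1 / (1 + math.exp(-z))        # sigmoid activation
    loss = (a - target) ** 2
    dL_da = 2 * (a - target)          # d(loss)/d(activation)
    da_dz = a * (1 - a)               # sigmoid derivative
    dL_dw = dL_da * da_dz * x         # chain rule: dL/dw
    dL_db = dL_da * da_dz             # chain rule: dL/db
    return loss, dL_dw, dL_db
```

A full network is the same idea repeated layer by layer, with the per-layer gradients passed backwards.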

Since then, I have taken other MOOCs centered around ML and Deep Learning, mainly with a focus on self-driving vehicles.

Oh - ML Class also led one individual to implement this during the course, as the ALVINN vehicle was mentioned in more than a few ways:

While Singleton does mention its "vintage-ness", I still think it's a sound project for inspiration and learning how to apply a neural network to a simple self-driving vehicle system, not to mention the fact that it replicated a system from the 1980s using today's commodity hardware; I recall reading about ALVINN when I was a kid, with wonderment about how it "worked" - it was one of several 1980s projects in the space that got me hooked on wanting to learn how to make computers learn.

If you want a great introduction, check out Andrew Ng's Coursera course:

This is the same course that was originally called "ML Class" in 2011 when it was offered via Stanford (it, plus the other course called "AI Class", helped to launch Coursera and Udacity, respectively). It uses Octave as its development language to teach the concepts.

Octave has as its "primitives" the concepts of the vector and the matrix; that is, it is a language designed around the concepts of linear algebra. So, you can easily set a variable to the values that make up a vector, set another to, say, the identity matrix, multiply them together, and it will return a matrix.

But the course is initially taught showing how to do all of this "by hand" (much like this tutorial) - using for loops and such; essentially rolling your own matrix calcs. Then, once you know that, you are introduced to the built-in operations provided by Octave, etc.
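In NumPy terms (the course itself uses Octave, so treat this as an illustrative analogue rather than course code), the "by hand" versus built-in contrast looks something like:

```python
import numpy as np

# "By hand": a matrix-vector product with explicit loops,
# as in the early course exercises.
def matvec_loops(A, x):
    m, n = A.shape
    result = np.zeros(m)
    for i in range(m):
        for j in range(n):
            result[i] += A[i, j] * x[j]
    return result

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([5.0, 6.0])

by_hand = matvec_loops(A, x)
built_in = A @ x   # the vectorized built-in does the same work in one call

assert np.allclose(by_hand, built_in)
```

Once the loop version has made the mechanics concrete, the one-line built-in stops feeling like magic.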

It is really a great course, and I encourage you to try it. It opened my eyes and mind to a lot of concepts I had been trying to grasp for a while that just didn't click until I took that course in 2011.

Of course, once it did, I was like "that's it? why couldn't I see that?" - such is the nature of learning I guess!

His method of building up from the basics all the way to a neural network using Octave will be a good foundation for further learning or playing with other tools like TensorFlow; I ended up taking a couple of other later MOOCs offered by Udacity (focused mainly on self-driving vehicle concepts), and ML Class (and what I took of AI Class) really helped to prepare me.

It also showed me things I was deficient in (something I need to correct some day). But that's a different story.

Suffice to say, that course will "unveil the magic"; it was really an amazing course that I am glad I took. Oh - and to show you what you can do with what you learn in that course, a fellow student during that time ended up making this (and the course was not even finished when he did it):

Throughout the course, the ALVINN autonomous vehicle was referenced quite often, and it inspired him to recreate it successfully in miniature. It also serves to show just how far computer technology has come - what used to take a van-load of equipment can now be played around with using a device that fits in your pocket!

This is a great course, I took it once a few years ago and sometime think if I should take it again. Wonder how it changed over time. It was eye-opening training a small NN by hand on MNIST dataset and getting it to recognize digits surprisingly well.

However, even understanding all that doesn't by itself enable you to use modern frameworks; they seem to carry too many additional assumptions and too much terminology, which aren't explained with a similar level of patience.

This. I can tell anybody thinking about taking this course, do it! The way Jeremy Howard explains things is so down to earth and clear, you'll be surprised at how easy all this ML/DL nonsense is :)

But seriously, there's a lot to learn here, and if you are looking to make your entry into the hype field of the decade I would strongly suggest starting here first

(Alternatively, for the more math-inclined, perhaps take Andrew Ng's Coursera course first:

I second the Andrew Ng Coursera course; I took it in 2011 when it was called "ML Class" (pre-Coursera) - yep, I was one of the beta testers!
It's worth noting that the popular Coursera course "Machine Learning" taught by Andrew Ng uses Octave as its tool. This is mentioned in the Octave book's preface; see if you're interested in the course itself. It is possible to use MATLAB instead, but there's often no real reason to do so. I think a lot of people have already decided to use Octave instead, because it solves their problems just fine.
I've heard good things about Andrew Ng's Coursera machine learning class, and it's free:

There is a free intro to data science class on Udacity as well:

Lastly, there's always Kaggle, which has plenty of resources to learn from, and competitions as well:

I can vouch for Andrew Ng's class. It's really top notch.
I enjoyed Andrew Ng's class on Coursera, which introduces basic concepts. For "real world," maybe find a dataset on and try to figure out how to answer a question that is interesting to you?

I'm going through this one right now, and really enjoying it. Being able to learn at my own pace is great. The course is math heavy, but that's to be expected. I'm enjoying using Octave for the programming.
Understanding the notation is only the first little step in understanding the underlying principles. There are a lot of machine learning books/classes that will cover the underlying principles you need to understand what's covered in the book. Exactly which one to recommend depends on your background, but Andrew Ng's Coursera Course [1] has been a staple for people without a lot of math background.


It's the first little step, but you can't discount it. I struggled through my CS degree being "bad at symbols" until I found the book in the top comment, and things got significantly easier. Even the mathematical courses that don't expect much maths background are taught by people who do have that background, and they often fall back on those symbols and the vocabulary they have experience with.

Honestly, it's weird getting this far without knowing the symbols well. I went to a significantly poorer and less well-staffed school than most of my fellow students, and you're kind of between a rock and a hard place sometimes: no one sees it as their job to catch you up, and the people who do know it learned it long ago and don't exactly have an easy time teaching you if you ask.

+1 for Andrew Ng's ML course.

I did the course earlier this year, and I can confirm that you don't need much more than high-school level maths knowledge. If you understand concepts like functions and summation then you're most of the way there, and if you've got some calculus then it should be easy. I came out of it with better mathematical comprehension than when I went in.

I found that I spent a lot of time on the course converting mathematical notation into code (Octave/MATLAB), which is a great way to validate your understanding of the maths. If your understanding is wrong, it either throws an error or runs slowly because you failed to map summations etc. onto the appropriate parallelised operations.
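As a sketch of what that notation-to-code translation looks like (in NumPy here, though the course uses Octave/MATLAB), the linear-regression cost J(theta) = 1/(2m) * sum_i (h(x_i) - y_i)^2 can be written with an explicit summation or as one vectorized expression:

```python
import numpy as np

# Explicit summation, following the mathematical notation term by term.
def cost_loop(X, y, theta):
    m = len(y)
    total = 0.0
    for i in range(m):
        h = X[i] @ theta            # prediction for example i
        total += (h - y[i]) ** 2
    return total / (2 * m)

# The same summation mapped onto matrix operations.
def cost_vectorized(X, y, theta):
    m = len(y)
    errors = X @ theta - y          # all residuals at once
    return (errors @ errors) / (2 * m)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column + one feature
y = np.array([1.0, 2.0, 3.0])
theta = np.array([0.5, 0.5])

assert np.isclose(cost_loop(X, y, theta), cost_vectorized(X, y, theta))
```

If the two versions disagree (or the vectorized one errors out on shapes), that mismatch is exactly the kind of feedback on your understanding described above.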

ML has moved on a bit since the course was designed, but it's still a good way to get familiar with the basics.

I tried to overcome the notation issue by taking an ML class as a non-degree-seeking student from the CS dept of a good university. I took it for credit, so I invested a lot of time in homework and paper reading. I'm more comfortable with the notation now and am better at learning independently. I also did calc, linear algebra, and differential equations classes at a community college prior to this.
Ng's Coursera course will teach you what happens in Deep Learning and Machine Learning, but at least the Deep Learning course is very very light on the math side and avoids scary mathematics rather than making it accessible.

Without having read it, I'd recommend the "preliminary maths" part of Bengio et al's Deep Learning book - it teaches both the letters and the language, so to speak. If the language isn't for you, you'd better throw away the papers and concentrate completely on reading and understanding the implementations that are out there, using the implementation first and foremost and the paper only as a backup to provide explanations when the implementation does mysterious or unexplained things.

You can do deep learning productively without having a PhD, but you won't be able (nor obliged) to read and understand PhD-level academic papers unless you have a solid (i.e., maths or physics or math-rich CS BSc) maths background.

Some attribution seems to be there in the beginning of the README

> "In most cases the explanations are based on [this great]( machine learning course."

EDIT: It was added 6hrs ago

Yes, links to the Andrew Ng course are on every README of the repo: at the top of the main one and in the "References" section of each internal one
Here are some others that are also commonly recommended on HN:

Andrew Ng's Machine Learning MOOC:

Andrew Ng's Deep Learning Courses:

David Silver's Reinforcement Learning Youtube Series: Courses on ML and Deep Learning:

Thanks for sharing!
Andrew Ng shares his impressive knowledge on Coursera (which he is a co-founder of). For those interested:

- (Machine Learning Course)

- (Deep Learning Specialization)

I didn't find the machine-learning course to be that great, but I don't know anything about machine learning.
It's only the best resource out there
Additional perspective: I consider it to be the best online course that I've taken. (I've taken a dozen or so.)
I thought both of these two courses on coursera were quite good:

First one is a bit older school, but takes you through all the fundamentals and actually explains a lot of the math involved. It also gets you thinking a lot more about how to solve problems from a Linear Algebra standpoint and the types of problems machine learning is good for tackling.

Second one is a much more modern day set of courses specifically focused on Deep Learning techniques and problem solving.

I thought both were great. First one is free as well...

Introduction to Mathematical Thinking ( Aims to teach you what it is like to think like a mathematician. Covers the elements of topics that you probably encounter in the first semester of an undergraduate maths degree: logic, induction, proof construction, real analysis, etc.

Machine Learning ( I'm still working through this course but am finding it extremely interesting. I find that having to implement things in matlab/octave gives you a deeper understanding than using a framework like tensorflow or keras.

Both of the above courses have good instructors, which I think is the main factor that makes a good mooc.

The "Bitcoin and Cryptocurrency Technologies" course on Coursera helped me gain an understanding of cryptocurrencies. Until I took that course I knew very little about the subject.

It's possibly a little dated now, but it's a good primer.

I'd love to hear what other cryptocurrency courses others recommend.

As many others mentioned, Andrew Ng's course on Machine Learning on Coursera was also very good.

Not a cryptocurrency course per se, but Dan Boneh's course on Cryptography[1] is an excellent introduction to most of the building blocks of cryptosystems, including the technology underlying most cryptocurrencies.

In terms of level, it is more than a little technical (programming exercises in both cryptography and cryptanalysis await you!), while still remaining far from rigorous (compared to, say, a graduate-level cryptography text).


Second this. Doing the homework assignments (especially the first one) helped me understand how transactions work and are validated.
You might appreciate the Digital Currency MOOC from University of Nicosia, with Andreas Antonopoulos and Antonis Polemitis.

Latest live stream (with Andreas Antonopoulos) goes into Ethereum and alternative uses of the blockchain:

Edit - Actually, the latest live stream is this one:

Figured I'd leave both, since they cover similar topics

very interesting
Andrew Ng's Machine Learning Course on Coursera:

I'd suggest starting with this excellent course:

And then dive into competitions on

Then following up with the specialization here:

and the courses.

You'll be up to speed in no time :)

This looks like a well put-together course, and a good way to learn TensorFlow. Keras and TensorFlow are top of my list of technologies to explore in the very near future.

Is anyone here doing Andrew Ng's Machine Learning course [1]? I'm about half-way through and really enjoying it. I'm particularly appreciating that the programming exercises are done in MatLab/Octave, so I feel that I'm really understanding the fundamentals without an API getting in the way, and developing some good intuition. Obviously frameworks are the way to go for production ML work, but I wonder whether ML people here think this bottom-up approach is advisable or could it be misleading when I move on to Keras/TensorFlow/whatever?


Edit: brevity

Andrew Ng’s older machine learning MOOC class was excellent. I took it once, and took it again a few years later. In the last 8 months I also took his new deep learning set of courses. All really good stuff! (I have been working in the field since the 1980s, but constantly refreshing helps me. And Andrew’s lectures are great fun; he is a teaching artist.)
I used to download all ML videos for offline ad hoc watching. But, this crash course videos cannot be downloaded using 'youtube-dl'. Any recommendations?
Spend the time you'd use trying to figure out how to download the videos actually watching them, doing the readings, and practising the exercises.
Where are the videos anyway? I don't see any play button or any console to play the videos.
I completed the Andrew Ng course recently, and felt that the difficulty dropped a lot in the second half of the course (for example, he stops giving homework). I'm hoping for more in his new DL courses
I've taken the CNN and RNN (parts 4 and 5) classes of his new DL specialization, and they are both about as rigorous as you could want. I do have to give a warning, though, that the last class starts to see some confusing mistakes in the HW. For example, the expected output given is from an outdated HW version.
sigh. well i hope the support is good.
I teach ML and am currently writing my 2nd book on it.

I always advocate learning the fundamentals. Machine learning is math, and neural networks in particular rely on linear algebra and vector calculus. (You can build a NN without using linear algebra directly, but it will likely be slower, and besides, the concept still relies on linear algebra.)

Frameworks abstract away a lot of the mathiness, which is a net good for society (ie, exposing lots of developers to neural networks), but I consider that a net-negative for the individual developer.

When working on anything but trivial toy problems, you should make sure you understand your problem domain and implementation thoroughly. Is the activation function you've chosen ideal for your problem domain? If not, choose a better one. If no better one exists, you can invent it; but you'll also need to know how to design the backpropagation algorithm for that new activation function (which requires some vector calculus).
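As a hypothetical illustration of that last point (softplus is just a stand-in here, not an activation the comment prescribes), here is an activation function together with the derivative backpropagation needs, plus a numerical check that the calculus is right:

```python
import numpy as np

# A candidate activation: softplus(x) = log(1 + e^x).
def softplus(x):
    return np.log1p(np.exp(x))

# The derivative backprop requires: d/dx log(1 + e^x) = sigmoid(x).
def softplus_grad(x):
    return 1.0 / (1.0 + np.exp(-x))

# Central-difference check that the hand-derived gradient is correct,
# the same sanity check you'd run before wiring it into a network.
x = np.array([-2.0, -0.5, 0.0, 1.5])
eps = 1e-6
numeric = (softplus(x + eps) - softplus(x - eps)) / (2 * eps)
assert np.allclose(numeric, softplus_grad(x), atol=1e-6)
```

This is exactly the "design the backpropagation for your new activation" step: derive the gradient with calculus, then verify it numerically before trusting it.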

Learning the math, as you have, helps you tune your algorithm based on actual knowledge rather than guesswork. I don't think it will be misleading when you move on to a framework. The frameworks are built on the same math.

That said -- if all you're looking to do is play around, then you don't need the math as much.

Thanks for taking the time to write such a comprehensive reply - much appreciated. "ML is maths" is something that I'm getting used to now. I do have some real uses in mind for what I'm learning, both in my job and in some side projects, particularly image feature recognition, and I'm looking forward to seeing how it all works out. Thanks again!
Image feature recognition is not quite solved but I feel it's very close. It's easier, obviously, if the problem domain is very specific.

In the past, like when I started on ML, the best tip was to make sure to do some edge detection with a few convolutions before feeding an image to a neural network. Now, we have convolutional neural networks that kinda do that for you automatically.

Sometime between those two points, someone figured out how to get the convolutions trained via backpropagation -- and they did that by deriving the gradient of an arbitrary convolution (or, more likely, looking it up). And that let us put convolutions right in the neural net and have the convolutions automatically train themselves along with the rest of the network. And we observe that the convolutions do things that we would do by hand, like remove unnecessary detail, highlight edges, or exaggerate colors.
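A minimal sketch of the edge-detection idea (hand-rolled NumPy, not any particular framework; note that what deep-learning libraries call "convolution" is usually cross-correlation, as below):

```python
import numpy as np

# "Valid" 2-D cross-correlation, the operation a conv layer applies.
def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny image: dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A hand-picked horizontal-difference kernel; conv layers learn
# filters like this one automatically via backprop.
edge_kernel = np.array([[-1.0, 1.0]])

response = conv2d_valid(image, edge_kernel)
# The response is nonzero only at the vertical edge between columns 1 and 2.
```

The hand-picked kernel plays the role of the old manual preprocessing step; the point of trainable convolutions is that the kernel values become parameters updated by the gradient instead of being chosen by hand.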

Anyways; I believe the current state-of-the-art for generic image feature recognition is an ensemble of convolutional neural networks. I believe Google leads the pack on the commercial side so maybe look into how they do it.

not quite solved is the right way to put it.

If you look at capsules papers, you will realize that convnets are not very good at recognizing transformations (e.g. 3d rotations) of the same object. That's probably why so many training examples are required to make them work well.

Also, if you look at errors made by state-of-the-art models, some of them are obvious (to a human) objects, classified as something entirely different and unrelated. Which leads me to believe that object recognition is not completely solved until a model has some kind of common sense, either built in or acquired during training.

Good post. I want to add several great resources.

- Grokking Deep Learning

This is a fantastic book that assumes no prerequisites other than knowing python, and takes you through the fundamentals of DL. It has very intuitive and easy to follow explanations, and doesn't use any libraries other than NumPy, so you're building the whole thing yourself, from scratch.

- Deep Learning With Python

This is kind of the opposite of the previous one: it doesn't go into math and theory; instead it guides you through building several practical projects with a very simple-to-use DL library (Keras). It's a great way to gain practical experience in addition to theory from the previous book. Also has no prerequisites other than Python, and makes it very easy to get started.

- 3blue1brown videos on neural networks:

Extremely brilliant, concise high-level overview of how ANNs work. I highly recommend you get started here. You should also check out his videos on calculus and linear algebra; they're a fantastic way to learn the math you need.

- Khan Academy videos - one of the easiest ways to learn the math prerequisites.


Linear Algebra:

Probability and Statistics:

- Hands-On Machine Learning with Scikit-Learn and TensorFlow

I haven't read this one yet, but it looks very promising, and a lot of people seem to find it very useful.

- Andrew Ng's Coursera course

Everyone knows about this one, I just think every article on AI resources should mention it, one of the most popular ways to get started with ML.

- New MIT courses on Self-Driving cars and AGI

- The Master Algorithm

Excellent high-level overview of ML field and algorithms.


Other great stuff:

- Artificial Intelligence: A Modern Approach

The leading textbook in Artificial Intelligence. It's not the fastest way to get started, but it's considered one of the best AI textbooks ever written.

- Berkeley AI course (CS 188)

Brilliant course based on AIMA. Not DL, but solid fundamentals of AI and ML.

- Couple of great playlists on DL, just to complete the collection:

Machine Learning with Python

Neural Networks Demystified

Yes, see:

It starts out reaaaally basic but gives a thorough grounding in the maths and the intuition behind it.

This is kind of a master's-degree course I created for myself to learn Machine Learning from the bottom up

First, you need a strong mathematical base. Otherwise, you can copy-paste an algorithm or use an API, but you will not get any idea of what is happening inside. The following concepts are essential:

1) Linear Algebra (MIT )

2) Probability (Harvard )

Get some basic grasp of machine learning. Get a good intuition of basic concepts

1) Andrew Ng coursera course (

2) Tom Mitchell book (

Both the above course and book are super easy to follow. You will get a good idea of basic concepts but they lack in depth. Now you should move to more intense books and courses

You can get more in-depth knowledge of Machine learning from following sources

1) Nando de Freitas's machine learning course (

2) Bishop's book (

Bishop's book especially is really deep and covers almost all the basic concepts.

Now, for recent advances in deep learning, I will suggest two brilliant courses from Stanford:

1) Vision ( )

2) NLP (

The Vision course by Karpathy can be a very good introduction to deep learning. Also, the mother book for deep learning ( ) is good

hey neel8986, I know linear algebra is very important for large-scale calculations. But how much calculus and statistics do you need for ML? Also, if you can touch on what applications of calculus and statistics are used in ML, that would be awesome :]. Thanks!
Regarding calculus, I think basic multivariable calculus can be enough for starting. If you need a refresher you can look for (

Also, the basic idea of the chain rule is important for deep learning.
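For a concrete taste of why: backpropagation is the chain rule applied repeatedly. For L = sigmoid(w*x), the chain rule gives dL/dw = sigmoid'(w*x) * x, which a numerical derivative confirms (a sketch, not from any of the courses above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, w = 2.0, 0.3
z = w * x                    # inner function g(w) = w*x
a = sigmoid(z)               # outer function f(z) = sigmoid(z)

# Chain rule: f'(z) = a*(1-a) for sigmoid, g'(w) = x.
analytic = a * (1 - a) * x

# Central-difference check of the same derivative.
eps = 1e-6
numeric = (sigmoid((w + eps) * x) - sigmoid((w - eps) * x)) / (2 * eps)
assert abs(analytic - numeric) < 1e-8
```

Stacking more layers just chains more factors onto that product, which is all backprop does.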

Regarding statistics, I already mentioned the probability course, which covers most of the important statistics concepts you need. Also, some idea of hypothesis testing can be helpful.

right on thanks neel8986.
getting your feet wet

andrew ng's machine learning course:

to get up to date on convnet architecture

Fei-Fei Li and Karpathy's cs231n:

if you want to go deep

geoff hinton's neural networks for machine learning coursera:

Brushing up on your math will definitely help! But it's not strictly necessary. It really depends on how deep you want to go.

For the Neural Network type stuff we used in this work, I would recommend Michael Nielsen's awesome website as a starting point and keras is the easiest NN library to pick up and get something going with. Andrew Ng's mooc is also a good starting point for some slightly more general machine learning background.

If you want to avoid the math (not my personal recommendation), I would start there and just mess around. Build small things and start reading more as you get comfortable. It's definitely an area I would encourage learning in an iterative way: try to learn a small amount, try to apply it, repeat.

Here are the resources I found useful:

==========================================
Advice from OpenAI and Facebook AI leaders

Courses You MUST Take:

Machine Learning by Andrew Ng ( /// Class notes: (

Yaser Abu-Mostafa’s Machine Learning course, which focuses much more on theory than the Coursera class but is still relevant for beginners.


Neural Networks and Deep Learning (Recommended by Google Brain Team) (

Probabilistic Graphical Models (

Computational Neuroscience (

Statistical Machine Learning (

From OpenAI CTO Greg Brockman on Quora

Deep Learning Book ( ( Also Recommended by Google Brain Team )

It contains essentially all the concepts and intuition needed for deep learning engineering (except reinforcement learning), according to Greg.

2. If you’d like to take courses: Linear Algebra — Stephen Boyd’s EE263 (Stanford) ( or Linear Algebra (MIT)


Neural Networks for Machine Learning — Geoff Hinton (Coursera)

Neural Nets — Andrej Karpathy’s CS231N (Stanford)

Advanced Robotics (the MDP / optimal control lectures) — Pieter Abbeel’s CS287 (Berkeley)

Deep RL — John Schulman’s CS294–112 (Berkeley)

This list is solid, and could keep you busy for a few years.
I am sitting in the same boat. Being a web developer for a couple of years, I wanted to try out a different domain. So I started to take Andrew Ng's course on Coursera [0]. Highly recommended. I supplement the course with audio and text by listening to the Machine Learning Guide Podcast [1] and by reading The Master Algorithm [2].

In addition, I started to apply my learnings in JavaScript [3]. Even though it's not the best language for ML, it makes it simpler to learn only one new thing and stick to known technologies for the rest. I have lined up ~7 articles about ML in JavaScript, so if you are interested, you can keep an eye on it :)

- [0]

- [1]

- [2]

- [3]

Intro to AI

Machine Learning

The Pacman programming exercises in python

And the Kaggle Titanic Survivability dataset

But if you desire an even gentler intro, try Daniel Shiffman's Nature of Code in P5.

best of luck ;)

I highly recommend Andrew Ng's Coursera courses for both Machine Learning and Deep Learning. Good for beginners, Math is taught along with the course, and gets you a solid foundation:

Thank you! Should I start with the Machine Learning one?
At your level yes, I would recommend starting with the ML course. It is really beneficial to understanding how the mathematics work.

The two most important things to remember, since the courses are challenging: 1) don't be in a hurry, and 2) don't give up! Take the time to learn every detail presented, do the optional exercises, and dig deep.

It's definitely challenging. The math and just seeing the complicated formulas really push me, but the reward is good too. I'm tired of pushing pixels, and doing some meaty stuff like ML is a nice change of pace.
I would recommend starting with deep learning first since that's what you are interested in and it covers all the ML principles you need to be familiar with. If you want to go deeper and get familiar with other ML techniques too you can easily follow the old course afterwards.
From my experience Andrew Ng wiped the floor with every other lecturer I've had. Both the ML and his new Deep Learning course.

If the lecturers aren't very interesting Coursera can be as hard as any other lectures. I gave up on the Scala functional programming and disappointingly have stalled with Geoffrey Hinton's Neural Networks courses.

But I really can't overstate how good Andrew Ng is; he has a very relaxed manner and manages to make some very complex topics seem almost trivial.

The worst of the mathematics is derivatives and matrix multiplication. You can even avoid matrix multiplication mostly in the ML course, but in his Deep Learning course he takes you through the 300x performance benefit you get from using NumPy and matrix multiplication vs loops.
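The flavour of that loops-versus-vectorization comparison can be sketched in NumPy (exact speedups depend on the machine, so no particular factor is claimed here):

```python
import time
import numpy as np

# Same dot product two ways: an explicit Python loop vs. NumPy's
# vectorized np.dot, the comparison the Deep Learning course walks through.
a = np.random.rand(100_000)
b = np.random.rand(100_000)

t0 = time.perf_counter()
loop_result = 0.0
for i in range(len(a)):
    loop_result += a[i] * b[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_result = np.dot(a, b)
t_vec = time.perf_counter() - t0

assert np.isclose(loop_result, vec_result)
```

Both compute the identical number; the vectorized call hands the work to optimized compiled code, which is where the large speedup comes from.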

Andrew Ng's courses are excellent. Another pretty good Coursera course is Machine Learning Foundations from the University of Washington. It is very high level and novice friendly. While it covers none of the math and very little programming, it does give a nice quick introduction to the most popular ML techniques out there and when to use them. It all depends on what level you are interested in starting at. They also have follow-up courses that go deeper into the different techniques.
+1 for this recommendation.

I would specifically recommend Machine Learning Foundations: A Case Study Approach - It is fantastic and helped me greatly start my ML journey last year.

Turi is awesome, I hope Apple is doing something great with it.

What? No. Why in the world do people even ask this kind of question? To a first approximation, the answer to an "is it too late to get started with ..." question is always "no".

If no, what are the great resources for starters?

The videos / slides / assignments from here:

This class:

This class:

This book:

This book:

This book:

These books:

This book:

This book:

These subreddits:

These journals:

This site:

Any tips before I get this journey going?

Depending on your maths background, you may need to refresh some math skills, or learn some new ones. The basic maths you need includes calculus (including multi-variable calc / partial derivatives), probability / statistics, and linear algebra. For a much deeper discussion of this topic, see this recent HN thread:

Luckily there are tons of free resources available online for learning various maths topics. Khan Academy isn't a bad place to start if you need that. There are also tons of good videos on Youtube from Gilbert Strang, Professor Leonard, 3blue1brown, etc.

Also, check out Kaggle. Doing Kaggle contests can be a good way to get your feet wet.

And the various Wikipedia pages on AI/ML topics can be pretty useful as well.

Not a book but excellent primer course

Everybody points towards this course. I started it a few times but was never able to complete it due to commitments at work. Is there a guide/tutorial that gives a gentler introduction to these topics and could be finished in a day or two?
You can also dive in first and then cover the math behind ML, by taking Andrew Ng's courses.
I consider Andrew Ng's ML course the best place to start, in addition to his courses.
Nowadays, there are a couple of really excellent online lectures to get you started.

The list is too long to include them all. Every one of the major MOOC sites offers not only one but several good Machine Learning classes, so please check [coursera](, [edX](, [Udacity]( yourself to see which ones are interesting to you.

However, there are a few that stand out, either because they're very popular or are done by people who are famous for their work in ML. Roughly in order from easiest to hardest, those are:

* Andrew Ng's [ML-Class at coursera]( Focused on application of techniques. Easy to understand, but mathematically very shallow. Good for beginners!

* Hastie/Tibshirani's [Elements of Statistical Learning]( Also aimed at beginners and focused more on applications.

* Yaser Abu-Mostafa's [Learning From Data]( Focuses a lot more on theory, but also doable for beginners

* Geoff Hinton's [Neural Nets for Machine Learning]( As the title says, this is almost exclusively about Neural Networks.

* Hugo Larochelle's [Neural Net lectures]( Again mostly on Neural Nets, with a focus on Deep Learning

* Daphne Koller's [Probabilistic Graphical Models]( Is a very challenging class, but has a lot of good material that few of the other classes cover.

Andrew Ng's tutorials[1] on Coursera are very good.

If you're into python programming then tutorials by sentdex[2] are also pretty good and cover things like scikit, tensorflow, etc (more practical less theory)

[1] [2]

For those who are unfamiliar with coursera or interested in just the videos (and NO certificate) you can enroll in "AUDIT" mode:


The deep-learning course consists of 5 subcourses:

The deep-learning course is a different course than the prerequisite machine-learning course:

forgotmysn also has a quality ML course
I love's course, but I also appreciate having a solid and thorough free machine learning course. While is good for getting onto the cutting edge of deep learning quickly, it doesn't go through a lot of material that most people in the machine learning industry are assumed to know.
Can the certificate be received by completing the course in audit mode and then paying at the end of the course?
Thanks for the links! It seems like you can already browse the lecture videos of the first three courses!
BTW, if you want to audit, you need to search for each course individually (or click seycombi's links above) and click the Enroll button there, there's no Audit option if you click Enroll on the full Specialization.
Thank you for pointing this out! Was looking for the way to just audit it and could not figure it out.
This one appears to be (but it's also not available for enrollment just yet). Then, for example, his ML course ( is available for purchase:
The courses are not yet live. Meanwhile, here is a list of 23 Deep Learning online courses (aggregated by my company):

Also, link to Andrew Ng's original ML class:

Anybody have a clue when these will actually be up? I really enjoyed his ML course.
For the first three courses the first week is already accessible.
Andrew Ng has an excellent ML course on coursera [1] that is very light on math.


May 29, 2017 · 9 points, 0 comments · submitted by aarohmankad
Yes, I did my research, but there is no interactive tutorial online like Treehouse or Codecademy. There are so many tutorials, but none of them tells you the whole path.

Here are the resources I found useful:

==========================================
Advice from OpenAI and Facebook AI leaders

Courses You MUST Take:

Machine Learning by Andrew Ng ( /// Class notes: (

Yaser Abu-Mostafa’s Machine Learning course, which focuses much more on theory than the Coursera class but is still relevant for beginners. (

Neural Networks and Deep Learning (Recommended by Google Brain Team) (

Probabilistic Graphical Models (

Computational Neuroscience (

Statistical Machine Learning (

From OpenAI CTO Greg Brockman on Quora

Deep Learning Book ( ( Also Recommended by Google Brain Team )

It contains essentially all the concepts and intuition needed for deep learning engineering (except reinforcement learning). by Greg

2. If you’d like to take courses: Linear Algebra — Stephen Boyd’s EE263 (Stanford) ( or Linear Algebra (MIT)(

Neural Networks for Machine Learning — Geoff Hinton (Coursera)

Neural Nets — Andrej Karpathy’s CS231N (Stanford)

Advanced Robotics (the MDP / optimal control lectures) — Pieter Abbeel’s CS287 (Berkeley)

Deep RL — John Schulman’s CS294–112 (Berkeley)

From Director of AI Research at Facebook and Professor at NYU Yann LeCun on Quora

In any case, take Calc I, Calc II, Calc III, Linear Algebra, Probability and Statistics, and as many physics courses as you can. But make sure you learn to program.

Thank you!
What does physics have to do with ML/AI?
"The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe"
One word:
How much does it cost, exactly?
That isn't a word, per se bruh.
per se: in and of itself as such: exactly.

you meant as such.

Mar 18, 2017 · flor1s on Ask HN: Best books on AI?
The course is also quite easy to follow without buying the book. I love the exercises in which you are programming an intelligent agent to move through a maze. It reminded me of how we learned programming in university using Karel The Robot.

This alongside Andrew Ng's Machine Learning course was my first exposure to the field.

I can also recommend Sebastian Thrun's Artificial Intelligence for Robotics course:

One of our guys started a tutorial series on writing an AI to play the game DOOM. It assumes very little knowledge and is a fun exercise to do when taking a break from some of the great courses posted here. It's no replacement for the excellent coursework, but it is fun:

My favourite course for grasping the foundations of the concepts was Andrew Ng's course (although it seems like you're beyond this now):

I think the best way to learn is to build things, though. Have you checked out the Kaggle challenges? Those will give you great practical skills.

Thanks for the insight!
I noticed that Andrew's course is still available.

Is it still a good introduction to machine learning? It's several years old now. Are there better courses available?

It's a good course.

The only(?) real criticism of it is the reliance on Matlab/Octave.

And even then, I can see why they use it - the syntax is pretty light and largely gets out of the way of the problem-space.

The only problem is that modern applied ML is dominated by Python...

I'm curious why there hasn't been a push towards more efficient compiled languages like golang?

I don't work with machine learning and AI, but I used to do a lot of server-side programming with dynamic languages. Switching to golang has been great. I'm far more productive with it, and the CPU and memory savings have been great. Isn't ML the kind of domain where CPU cycles and memory matter?

Because of GPUs.
That course is a very good basic machine learning course. It has many of the stepping stones to complex machine learning problems, no matter which specific field.

So yes, it is a good intro to machine learning. There are new advances coming up all the time. But you will definitely need to know most of the topics in that course. That is, if you want to properly understand most of the latest techniques.

It also works great psychologically - concretely, Andrew is capable of inspiring enthusiasm and confidence.
I see what you did there. :)

Someone even made a drinking game out of it:

Andrew Ng's Coursera course simply titled "Machine Learning" is good - it addresses the mathematics of fundamental algorithms and concepts while giving practical examples and applications:

Regarding books, there are many very high quality textbooks available (legitimately) for free online:

Introduction to Statistical Learning (James et al., 2014)

the above book shares some authors with the denser and more in-depth/advanced

The Elements of Statistical Learning (Hastie et al., 2009)

Information Theory: Inference & Learning Algorithms (MacKay, 2003)

Bayesian Reasoning & Machine Learning (Barber, 2012)

Deep Learning (Goodfellow et al., 2016)

Reinforcement Learning: An Introduction (Sutton & Barto, 1998)

^^ the above books are used in many graduate courses in machine learning and are varied in their approach and readability, but go deep into the fundamentals and theory of machine learning. Most contain primers on the relevant maths, too, so you can either use these to brush up on what you already know or as a starting point for finding more relevant maths materials.

If you want more practical books/courses, more machine-learning focussed data science books can be helpful. For trying out what you've learned, Kaggle is great for providing data sets and problems.

Do this course and you'll get a very good idea of what's happening, and it's free:
I am taking Machine Learning on Coursera by Andrew Ng. Initially I was intimidated by the idea of ML, as I had no prior programming experience. I started learning data science stuff less than 6 months ago. I started to feel motivated and confident about ML by taking this course. I highly recommend it.

Thanks. I'll look that up. However, how much time does that take you (weekly)?
It's self-paced, so it's kinda up to you. There are "deadlines" to help keep you on track, but they're optional. To pass, you just have to pass all the graded assignments by the end date. But if you're getting close to the end and are behind, you can always shift your enrollment to the following session - but you keep all your progress and everything. It's pretty cool in that regard. Very low pressure.
That's cool. Can you tell me how much time you spend on it per week in average? Just so I get a general idea...
I spend a minimum of about an hour a day. The general timeline per week is about 4 hrs of lectures, 1 hr of reading (mostly optional), and 3 hrs per assignment. I take longer on assignments since I am new. Sometimes I have to watch lectures over and over again to understand. In general I spend about 10 hrs/week.

I don't think there is any value in paying for the certification, since it's an introductory course.

It's been a while since I took it, but I think I spent an hour or two a week watching the videos and reading notes and whatever, and then maybe another 3-5 hours on the programming assignment. It was probably less than that on the earlier programming assignments, and more on the later ones as things got more complicated later on. And I might have spent more time on the videos on certain sections, because of re-watching sections that weren't intuitively clear to me right away. In particular, some of the math'ier stuff where he explained the stuff about using partial derivatives to calculate the error gradients for neural networks... that stuff I had to put more work into since my math background isn't real strong (I never took multi-variable calculus).

All of that said, you can get through the class and learn and understand the material at the level he teaches it, even without completely understanding partial derivatives (a point he makes in the lecture). But having a strong calculus background certainly wouldn't hurt.

Machine Learning by Andrew Ng

The maths is fairly straightforward and the concepts are explained well.

I started that one and feel like the format doesn't take much advantage of the medium: the videos are very much like traditional lectures. Of course being free is an immense plus, but then so are many books.

Specifically, I take issue with Ng's foreign accent. That's just me looking for a reason, but it's the second time, after Agarwal on MITx (and that one wanted me to purchase his book). Also, I can already record lectures and play them back at will, only I have to leave the house for that. Besides that, these courses seem to be rather classical university courses.

My gripe is, the videos are too long and my attention span too short. The first 2 weeks I could even pass just from what I learned reading HN, so I suppose I really prefer the Socratic method to frontal education. Many would claim I was simply lazy, and they'd be right, alas also derogatory.

note to myself: otoh, paying a fee for the effort to run a course is most reasonable if the value added comes in the form of a book to keep. Actually, that's what I am missing in those courses where instructors don't even provide lecture notes.
TL;DR - read my post's "tag" and take those courses!


As you can see in my "tag" on my post - most of what I have learned came from these courses:

1. AI Class / ML Class (Stanford-sponsored, Fall 2011)

2. Udacity CS373 (2012) -

3. Udacity Self-Driving Car Engineer Nanodegree (currently taking) -

For the first two (AI and ML Class) - these two MOOCs kicked off the founding of Udacity and Coursera (respectively). The classes are also available from each:

Udacity: Intro to AI (What was "AI Class"):

Coursera: Machine Learning (What was "ML Class"):

Now - a few notes: For any of these, you'll want a good understanding of linear algebra (mainly matrices/vectors and the math to manipulate them), stats and probabilities, and to a lesser extent, calculus (basic info on derivatives). Khan Academy or other sources can get you there (I think Coursera and Udacity have courses for these, too - plus there are a ton of other MOOCs, plus MIT's OpenCourseWare).

Also - and this is something I haven't noted before - but the terms "Artificial Intelligence" and "Machine Learning" don't necessarily mean the same thing. Based on what I have learned, it seems like artificial intelligence mainly revolves around modern understandings of artificial neural networks and deep learning - and is a subset of machine learning. Machine learning, though, also encompasses standard "algorithmic" learning techniques, like logistic and linear regression.

The reason why neural networks are a subset of ML is that a trained neural network ultimately implements a form of logistic (categorization, true/false, etc.) or linear regression (range) - depending on how the network is set up and trained. The power of a neural network comes from not having to find all of the dependencies (iow, the "function") yourself; instead the network learns them from the data. It ends up being a "black box" algorithm, but it allows the ability to work with datasets that are much larger and more complex than what the algorithmic approaches allow for (that said, the algorithmic approaches are useful, in that they use much less processing power and are easier to understand - no use attempting to drive a tack with a sledgehammer).
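That "a trained network is ultimately doing regression" point can be made concrete. Here is a minimal sketch of logistic regression trained by batch gradient descent, in plain Python with made-up toy data (the numbers are purely illustrative) - exactly the computation a single sigmoid neuron performs:

```python
import math

# Toy dataset (hypothetical): x = hours studied, y = passed (1) or not (0)
data = [(0.5, 0), (1.0, 0), (1.5, 0), (3.0, 1), (3.5, 1), (4.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0   # a single weight and bias: one "neuron"
lr = 0.5          # learning rate

for _ in range(2000):  # batch gradient descent
    dw = db = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass
        dw += (p - y) * x        # gradient of the log-loss w.r.t. w
        db += (p - y)            # gradient w.r.t. b
    w -= lr * dw / len(data)
    b -= lr * db / len(data)

print(sigmoid(w * 1.0 + b))  # low probability of passing
print(sigmoid(w * 4.0 + b))  # high probability of passing
```

Drop the sigmoid and use the squared-error gradient, and the same loop becomes linear regression; add a hidden layer of such neurons and you have a small neural network.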

With that in mind, the sequence to learn this stuff would probably be:

1. Make sure you understand your basics: Linear Algebra, stats and probabilities, and derivatives

2. Take a course or read a book on basic machine learning techniques (linear regression, logistic regression, gradient descent, etc).

3. Delve into simple artificial neural networks (which may be a part of the machine learning curriculum): understand what feed-forward and back-prop are, how a simple network can learn logic (XOR, AND, etc), how a simple network can answer "yes/no" and/or categorical questions (basic MNIST dataset). Understand how they "learn" the various regression algorithms.

4. Jump into artificial intelligence and deep learning - implement a simple neural network library, learn tensorflow and keras, convolutional networks, and so forth...
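For step 3, it helps to see why a hidden layer matters: a single neuron can compute AND or OR, but not XOR. A hand-wired sketch in plain Python (the weights are chosen by hand for illustration; training with back-prop would find equivalent ones):

```python
def step(z):
    # A simple threshold activation: fires (1) when the weighted sum is positive
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 fires for OR, h2 fires for AND
    h1 = step(x1 + x2 - 0.5)   # x1 OR x2
    h2 = step(x1 + x2 - 1.5)   # x1 AND x2
    # Output neuron: "OR but not AND" == XOR
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```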

Now - regarding self-driving vehicles - they necessarily use all of the above, and more - including more than a bit of "mechanical" techniques: Use OpenCV or another machine vision library to pick out details of the road and other objects - which might then be able to be processed by a deep learning CNN - ex: Have a system that picks out "road sign" object from a camera, then categorizes them to "read" them and use the information to make decisions on how to drive the car (come to a stop, or keep at a set speed). In essence, you've just made a portion of Tesla's vehicle assist system (first project we did in the course I am taking now was to "follow lane lines" - the main ingredient behind "lane assist" technology - used nothing but OpenCV and Python). You'll also likely learn stuff about Kalman filters, pathfinding algos, sensor fusion, SLAM, PID controllers, etc.

I can't really recommend any books to you, given my level of knowledge. I've read more than a few, but most of them would be considered "out of date". One that is still being used in university level courses is this:

Note that it is a textbook, with textbook pricing...

Another one that I have heard is good for learning neural networks with is:

There are tons of other resources online - the problem is separating the wheat from the chaff, because some of the stuff is outdated or even considered non-useful. There are many research papers out there that can be bewildering. I would say if you read them, until you know which is what, take them all with a grain of salt - research papers and web-sites alike. There's also the problem of finding diamonds in the rough (for instance, LeNet was created in the 1990s - but that was also in the middle of an AI winter, and some of the stuff written at the time isn't considered as useful today - but LeNet is a foundational work of today's ML/AI practices).

Now - history: You would do yourself good to understand the history of AI and ML, the debates, the arguments, etc. The foundational work comes from McCulloch and Pitts' concept of an artificial neuron, and where that led:

Also - Alan Turing anticipated neural networks of the kind that wasn't seen until much later:

...I don't know if he was aware of McCulloch and Pitts work which came prior, as they were coming at the problem from the physiological side of things; a classic case where inter-disciplinary work might have benefitted all (?).

You might want to also look into the philosophical side of things - theory of mind stuff, and some of the "greats" there (Minsky, Searle, etc); also look into the books written and edited by Douglas Hofstadter, such as Gödel, Escher, Bach.

There's also the "lesser known" or "controversial" historical people:

* Hugo De Garis (CAM-Brain Machine)

* Igor Aleksander

* Donald Michie (MENACE)

...among others. It's interesting - De Garis was a very controversial figure, and most of his work, for whatever it is worth - has kinda been swept under the rug. He built a few computers that were FPGA based hardware neural network machines that used cellular automata a-life to "evolve" neural networks. There were only a handful of these machines made; aesthetically, their designs were as "sexy" as the old Cray computers (seriously).

Donald Michie's MENACE - interestingly enough - was a "learning computer" made of matchboxes and beads. It implemented a simple reinforcement learner that learned how to play (and win at) noughts and crosses (TIC-TAC-TOE). All in a physically (by hand) manipulated "machine".

Then there is one guy, who is "reviled" in the old-school AI community on the internet (take a look at some of the old newsgroup archives, among others). His nom-de-plume is "Mentifex" and he wrote something called "MIND.Forth" (and translated it to a ton of other languages), that he claimed was a real learning system/program/whatever. His real name is "Arthur T. Murray" - and he is widely considered to be one of the earliest "cranks" on the internet:

Heck - just by posting this I might be summoning him here! Seriously - this guy gets around.

Even so - I'm of the opinion that it might be useful for people to know about him, so they don't go too far down his rabbit-hole; at the same time, I have a small feeling that there might be a gem or two hidden inside his system or elsewhere. Maybe not, but I like to keep a somewhat open mind about these kinds of things, and not just dismiss them out of hand (but I still keep in mind the opinions of those more learned and experienced than me).

EDIT: formatting

Oh man, thank you! Thank you!
Without doubt do the Andrew Ng course on Machine Learning.

It's excellent.

I've started it already, seems like this one and the AI udacity course are the two best ones for now :) Thanks!
Perhaps not exactly the right link, but Andrew Ng's Machine Learning course (also from Stanford) teaches exactly the required things in the first three weeks:
It's been a while since I did any serious maths; would I be lost in a course like this?
No. The basic linear algebra and calculus concepts used on the course are rather simple, and the videos hold your hand through the math quite well. You don't need deep understanding of the math to pass the course.
Probably not. I'm currently about halfway through and the majority of required mathematics revolves around linear algebra (matrix multiplication almost exclusively) and basic algebra. There is a linear algebra refresher as well.
I got 100% on the course and don't really do math myself. He skips over the derivations and gives the formula. The way you interact with the math is by turning a formula into code. It's actually fun and refreshing.
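"Turning a formula into code" is quite literal here. For example, the course's linear-regression cost function, J(θ₀, θ₁) = (1/2m) Σ (hθ(x) − y)², becomes a few lines of plain Python (the data points below are made up, just for illustration):

```python
def cost(theta0, theta1, xs, ys):
    # J = (1 / 2m) * sum((h(x) - y)^2), with hypothesis h(x) = theta0 + theta1 * x
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1, 2, 3]
ys = [2, 4, 6]
print(cost(0, 2, xs, ys))  # perfect fit (y = 2x), so the cost is 0.0
print(cost(0, 1, xs, ys))  # a worse fit has a higher cost
```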
See this course on linear algebra:

And this one on Machine Learning:

> As someone who's interested in taking the Udacity course, would your recommend it?

So far, yes - but that has a few caveats:

See - I have some background prior to this, and I think it biases me a bit. First, I was one of the cohort that took the Stanford-sponsored ML Class (Andrew Ng) and AI Class (Thrun/Norvig), in 2011. While I wasn't able to complete the AI Class (due to personal reasons), I did complete the ML Class.

Both of these courses are now offered by Udacity (AI Class) and Coursera (ML Class):

If you have never done any of this before, I encourage you to look into these courses first. IIRC, they are both free and self-paced online. I honestly found the ML Class to be easier than the AI class when I took them - but that was before the founding of these two MOOC-focused companies, so the content may have changed or been made more understandable since then.

In fact, now that I think about it, I might try taking those courses again myself as a refresher!

After that (and kicking myself for dropping out of the AI Class - but I didn't have a real choice there at the time), in 2012 Udacity started, and because of (reasons...) they couldn't offer the AI Class as a course (while for some reason, Coursera could offer the ML Class - there must have been licensing issues or something) - so instead, they offered their CS373 course in 2012 (at the time, titled "How to Build Your Own Self-Driving Vehicle" or something like that - quite a lofty title):

I jumped at it - and completed it as well; I found it to be a great course, and while difficult, it was very enlightening on several fronts (for the first time, it clearly explained to me exactly how a Kalman filter and PID worked!).

So - I have that background, plus everything else I have read before then or since (AI/ML has been a side interest of mine since I was a child - I'm 43 now).

My suggestion if you are just starting would be to take the courses in roughly this order - and only after you are fairly comfortable with both linear algebra concepts (mainly vectors/matrices math - dot product and the like) and stats/probabilities. To a certain extent (and I have found this out with this current Udacity course), having a knowledge of some basic calculus concepts (derivatives mainly) will be of help - but so far, despite that minor handicap, I've been ok without that greater knowledge - but I do intend to learn it:

1. Coursera ML Class

2. Udacity AI Class

3. Udacity CS373 course

4. Udacity Self-Driving Car Engineer Nanodegree

> Do you think the course prepares you enough to find a Self-Driving developer job?

I honestly think it will - but I also have over 25 years under my belt as a professional software developer/engineer. Ultimately, it - along with the other courses I took - will help (and has helped) me in having other tools and ideas to bring to bear on problems. Also - realize that this knowledge can apply to multiple domains - not just vehicles. Marketing, robotics, design - heck, you name it - all will need or currently do need people who understand machine learning techniques.

> Would you learn enough to compete/work along side people who got their Masters/PhD in Machine Learning?

I believe you could, depending on your prior background. That said, don't think that these courses could ever substitute for a graduate degree in ML - but I do think they could be a great stepping stone. I am actually planning on looking into getting my BA, then Masters (hopefully), in Comp Sci after completing this course. It's something I should have done long ago, but better late than never, I guess! All I currently have is an associates from a tech school (worth almost nothing), and my high school diploma - but that, plus my willingness to constantly learn and stay ahead in my skills, has never let me down career-wise! So I think having this ML experience will ultimately be a plus.

Worst-case scenario: I can use what I have learned in the development of a homebrew UGV (unmanned ground vehicle) I've been working at on and off for the past few years (mostly "off" - lol).

> Appreciate your input.

No problem, I hope my thoughts help - if you have other questions, PM me...

i was quite put off by it. i feel like the teaching technique is pretty poor and the focus is on all the wrong things. mainly the tech gets in the way of learning. i don't want to figure out how to learn numpy when i'm trying to learn how to understand deep learning; that in itself is hard enough. i quit after a week (i did the stanford course first and this was going to be my second).

i would recommend the coursera course by andrew ng. i had an amazing time. the code stays out of your way and he walks you through the algorithms and explains the theory very well.

i just started the course by jeremy howard, and have literally been blown away by it. it is AMAZING! by lesson 3 i'm able to build cnn models and score in the top 20% in kaggle competitions. not bad for a complete novice. HIGHLY RECOMMENDED.

once im done with the course i may look back around to google's deep learning course. i think it may be easier for more experienced users to digest its info.

Edit: added link

These 3 are the most well-known and well-regarded 0-to-hero type intro courses online, and high-school math is sufficient to follow along (but pick only one and go start to finish!):

* (by Peter Norvig - director of research @ Google, & Sebastian Thrun - lead dev of the google self-driving car and founder of google x, now at georgia tech) - great if you want a more "deep thinking" style intro to AI

* (Sebastian Thrun & Katie Malone - former physicist and data scientist, great at explaining stuff so that anyone can grok it) - great if you want a more "down to earth" engineering-style intro with simple, clear examples

* (Andrew Ng @ Stanford & chief scientist at Baidu, former Google researcher) - great if you want a "bottom up" approach: from math, through code/engineering, with less fuzzy big-picture stuff. This is a great intro, even if Andrew Ng is less of a rock-star presenter; if you want to start from the math details up, take this one!

Oh, and kaggle: . If you get stuck on anything, google the relevant math, pick up just enough to have an intuition and carry on.

You're still in college, so you have plenty of time to learn the required math well; it's better to get a broad picture of the field ASAP imho! Then when you take the math classes, you'll already have "aha, this fills my gap about X and Y" and "aha, now I get why Z" moments, and you'll really love that math after you already know what problems it solves!

(PS if you're less of a "highly logico-intuitive" person and more "analytical rigorous thinker" instead, just ignore my last paragraph and focus on the math, but try to get some deep intuition of probability along the way)

I'll try to do the coursera course on ML -- last time I tried I got swamped by schoolwork. Thanks for the suggestions.
The Coursera Machine Learning course just started (I assume you could still join). I just finished the second week (I'm trying to keep a week ahead due to the somewhat unpredictable nature of my schedule lately), and have been enjoying it so far.

The first couple weeks are all about univariate and multivariate linear regression (as well as an optional linear algebra refresher on matrix operations).

I second this course; I took it when it was put on in 2011 in association with Stanford, and called the "ML Class" - its success was the catalyst for the creation of Coursera.
I'll tell you how I started my journey:

I took the Stanford ML Class in 2011 taught by Andrew Ng; ultimately, Coursera was born from it, and you can still find that class in their offerings:

On a similar note, Udacity sprung up from the AI Class that ran at the same time (taught by Peter Norvig and Sebastian Thrun); Udacity has since added the class to their lineup (though at the time, they had trouble doing this - and so spawned the CS373 course):

I took the CS373 course later in 2012 (I had started the AI Class, but had to drop out due to personal issues at the time).

Today I am currently taking Udacity's "Self-Driving Car Engineer" nanodegree program.

But it all started with the ML Class. Prior to that, I had played around with things on my own, but nothing really made a whole lot of sense for me, because I lacked some of the basic insights, which the ML Class course gave to me.

Primarily - and these are key (and if you don't have an idea about them, then you should study them first):

1. Machine learning uses a lot of tools based on and around probabilities and statistics.

2. Machine learning uses a good amount of linear algebra

3. Neural networks use a lot of matrix math (which is why they can be fast and scale - especially with GPUs and other multi-core systems)

4. If you want to go beyond the "black box" aspect of machine learning - brush up on your calculus (mainly derivatives).

That last one is what I am currently struggling with and working through; while the course I am taking currently isn't stressing this part, I want to know more about what is going on "under the hood" so to speak. Right now, we are neck deep into learning TensorFlow (with Python); TensorFlow actually makes things pretty simple to create neural networks, but having the understanding of how forward and back-prop works (because in the ML Class we had to implement this using Octave - we didn't use a library) has been extremely helpful.

Did I find the ML Class difficult? Yeah - I did. I hadn't touched linear algebra in 20+ years when I took the course, and I certainly hadn't any skills in probabilities (so, Khan Academy and the like to the rescue). Even now, while things are a bit easier, I am still finding certain tasks and such challenging in this nanodegree course. But then, if you aren't challenged, you aren't learning.

Thank you for the comprehensive response! I have a lot to learn but it's exciting.
Learn Python stack: scipy, numpy, pandas, scikit-learn, jupyter, matplotlib/seaborn.

Learn machine learning tools: XGBoost, Scikit-learn, Keras, Vowpal Wabbit.

Do data science competitions: Kaggle, DrivenData, TopCoder, Numerai.

Take these courses:

Work on soft skills: Business, management, visualization, reporting.

Do at least one real-life data science project: Open Data, Journalism, Pet project.

Contribute to community: Create wrappers, open issues/pull reqs, write tutorials, write about projects.

Read: FastML, /r/MachineLearning, Kaggle Forums, Arxiv Sanity Preserver.

Implement: Recent papers, older algorithms, winning solutions.

As a software engineer you have a major advantage for applied ML: You know how to code. AI is just Advanced Informatics. If you want to become a machine learning researcher... skip all this and start from scratch: a PhD. Else: Learn by doing. Only those who got burned by overfitting will know how to avoid it next time.

For an ML intro, Coursera's machine learning course is great. I have not been through the entire course, but for someone who has no background in it, it's a good intro, as the videos themselves are solid.
Gain background knowledge first, it will make your life much easier. It will also make the difference between just running black box libraries and understanding what's happening. Make sure you're comfortable with linear algebra (matrix manipulation) and probability theory. You don't need advanced probability theory, but you should be comfortable with the notions of discrete and continuous random variables and probability distributions.

Khan Academy looks like a good beginning for linear algebra:

MIT 6.041SC seems like a good beginning for probability theory:

Then, for machine learning itself, pretty much everyone agrees that Andrew Ng's class on Coursera is a good introduction:

If you like books, "Pattern Recognition and Machine Learning" by Chris Bishop is an excellent reference of "traditional" machine learning (i.e., without deep learning).

"Machine Learning: a Probabilistic Perspective" book by Kevin Murphy is also an excellent (and heavy) book:

This online book is a very good resource to gain intuitive and practical knowledge about neural networks and deep learning:

Finally, I think it's very beneficial to spend time on probabilistic graphical models. Here is a good resource:

Have fun!

Courses You MUST Take:

1. Machine Learning by Andrew Ng ( /// Class notes: (

2. Yaser Abu-Mostafa’s Machine Learning course which focuses much more on theory than the Coursera class but it is still relevant for beginners.(

3. Neural Networks and Deep Learning (Recommended by Google Brain Team) (

4. Probabilistic Graphical Models (

5. Computational Neuroscience (

6. Statistical Machine Learning (

If you want to learn AI:

If you want to get started with machine learning you MUST take computational neuroscience? I don't think so.
You can start with the free Coursera course; it starts 17 Oct

and then continue with

If you want to jump right in with "hello world" type TensorFlow (a tool for machine learning), see (how to fit a straight line using TensorFlow)

If you like to study/read: the famous Coursera Andrew Ng machine learning course:

If you just want course materials from UC Berkeley, here's their 101 course:

If you want a web based intro to a "simpler" machine learning approach, "decision trees":

Here's a list of top "deep learning" projects on Github and great HN commentary on some tips on getting started:

If you just want a high level overview:

Oh btw, I think it's too annoying to 'follow the pace' on MOOCs, so I recommend downloading all the courses right here:
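The "fit a straight line" hello-world mentioned above doesn't strictly require TensorFlow; the same exercise can be sketched in a few lines of plain Python with gradient descent (toy, noise-free data and arbitrary constants, just to show the idea):

```python
# Hypothetical noise-free data drawn from y = 3x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x + 1.0 for x in xs]

w, b = 0.0, 0.0   # slope and intercept to learn
lr = 0.05         # learning rate

for _ in range(5000):
    # gradients of the mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges close to the true slope 3 and intercept 1
```

TensorFlow does the same thing, but computes the gradients for you and scales the idea up to models with millions of parameters.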

I would be pretty hesitant to start talking about TensorFlow and Deep Learning before confirming, for example, at least a rudimentary understanding of Linear Algebra.
What is a good "hello world" project for machine learning? That is, what problem can I solve or question can I answer with minimal ceremony, and ideally with multiple techniques / technologies so that I can compare them? Is it this house price estimation like in your last link, or is there something better than that?
You could watch the 'Hello World' Machine Learning Recipes videos by Josh Gordon at Google. Very approachable!

The first lesson is a classifier to separate apples and oranges.

Kaggle has a number of starter challenges. See for one related to predicting the survival of passengers on the Titanic.
Lol. Predicting the survival of passengers on the Titanic is meaningless and misleading - there is literally no connection to reality, despite the framing of the task, which suggests a certain connection. There is absolutely nothing that could be predicted. It is just a simulation of an oversimplified model which describes nothing but an oversimplified view of a historical event. It is as meaningless as the ant simulator written by Rich Hickey to demonstrate features of Clojure - it has that much connection to real ants.
You seem to be saying a whole lot without backing up your argument. If anything, your view is meaningless.

There is a very important notion from The Sciences of the Artificial book by Herbert A. Simon, that the visible (to an external observer) behavior of an ant (its tracks, if you wish) is not due to its supposed intelligence, but mostly due to the obstacles in the environment.

Most of the models mimic and simulate (very naively) that observable behavior, not its origin.

When people cite "the map is not the territory" they mean this. Simulation is not even an experiment. It is merely an animation of a model - a cartoon.

It is swarm intelligence: How does the system keep finding successful paths in a changing environment? Can we take inspiration from this behavior to create better optimization algorithms?

Simulation can be a very beneficial experiment. See for instance:

Why not. I remember a paper which compares the behavior of foraging ants (they send out more or fewer ants according to the rate of foragers returning with food) to the adjustment of the window size based on the data rate in TCP.

Simulations are not experiments. A simulation is an animation of a formalized imagination, if you wish.

Huh? Why would a connection to reality be required to get started with machine learning?
Because otherwise it should be called machine hallucinations?

The process of learning could be defined as a task of extracting relevant information (knowledge) about reality (a shared environment), not mere accumulation of fancy nonsense or false beliefs.

So knowledge like: Did the passenger have kids on board? Was the passenger nobility? Was the passenger travelling first class? Where was the passenger located on the ship after boarding? And how do these factors influence survivability?

And reality like: The actual sinking of the Titanic?

If your model concludes that nobility, traveling first class, close to the exits, without family, has a higher chance of surviving, then this is fancy nonsense or a false belief?

You make a really strange case for your view.

Correlation does not imply causation. There were many more relevant but "invisible" variables, probably related to some genetic factors, like the ability to sustain exposure to the cold water, the ability to calm oneself down to avoid panic and self-control in general, a strong survival instinct to literally fight the others, etc. The variables you have described, except the age of a passenger, are visible but irrelevant. And pure luck must have a way bigger weight, and it is obviously related to favorable genetic factors, age, health and fitness.
This challenge is not about causal inference. I do agree it is more of a toy dataset, to get started with the basics, and that there are a lot of other variables that go into survivability. But to say these variables, except for age, are irrelevant is mathematically unsound: You can show with cross-validation and test set performance that your model using these variables generalizes (around 0.80 ROC AUC). You can do statistical/information-theoretic tests that show that most of these variables carry significant signal for predicting the target.

In real life it is also very rare to have free pickings of the variables you want. Some variables have to be substituted with available ones.

The Titanic story is to make things interesting for beginners. One could leave out all the semantics of this challenge, anonymize the variables and the target, and still use this dataset to learn about going from a table with variables to a target. In fact, doing so teaches you to leave your human bias at the door. Domain experts get beaten on Kaggle, because they think they need other variables, or that some variables (and their interactions) can't possibly work.

Let the data and evaluation metric do the talking.
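The cross-validation workflow mentioned above can be sketched in a few lines. Since the actual Kaggle Titanic table is not reproduced here, this is a minimal sketch using a synthetic stand-in dataset, not the real competition data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a Titanic-style table: a few informative
# columns (think class, sex, age) predicting a binary target (survived).
X, y = make_classification(
    n_samples=500, n_features=6, n_informative=3, random_state=0
)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation scored with ROC AUC: each fold is held out in
# turn, the model is trained on the rest and scored on the held-out part.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(scores.mean())  # out-of-sample estimate of ranking quality
```

A mean AUC well above 0.5 on held-out folds is what "the model generalizes" means in this context.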

How does this not violate [1]? That is, this seems specifically anti-statistical. The best you can come up with on this is a predictive model that you then have to test on new events. In this case, that would likely mean new crashes.


Because we are not doing hypothesis testing, we are doing classification on a toy dataset. Sure, one could treat this as a forecasting challenge, but then one would need another Titanic sinking in roughly the same context, with the same features... That demand is as unreasonable as calling this modeling knowledge competition meaningless.

And if you see classification as a form of hypothesis testing, then cross-validation is a valid way of testing whether the hypothesis holds on unseen data.

I think that is the rub. With the goal just being to find some variables that correlate together, it is a neat project. But it is ultimately not indicative of predictive classification, if only because you do not have any independent samples to cross-validate with; all samples are from the same crash.

This would be like sampling all coins from my pockets and thinking you could build a predictive model of year printed to value of coin. Probably could for the change I carry. Not a wise predictor, though.

You are right, but only in a very strict, not-fun manner :). Even if we had more data on different boats sinking, the model would not be very useful: We don't sail the Titanic anymore and have charted all the icebergs. Still, if a cruise ship were to go down, I'd place a bet on ranking younger women of nobility traveling first class higher for survivability than old men with family traveling third class, wise predictor or no.

This dataset is more in line with what you are looking for:

Makes sense. And yes, I completely meant my points in a pedantic only manner. :)
> You can show with cross-validation and test set performance that your model using these variables generalizes (around 0.80 ROC AUC).

It shows only that the given set of variables (observable and inferred) could be used to build a model. The given data set is not descriptive, because it does not contain the more relevant hidden variables, so any predictions or inferences based on this data set are nothing but a story, a myth made from statistics and data.

>> Domain experts get beaten on Kaggle, because they think they need other variables, or that some variables (and their interactions) can't possibly work.

That sounds a bit iffy. A domain expert should really know what they're talking about, or they're not a domain expert. If the real deal gets beaten on Kaggle it must mean that Kaggle is wrong, not the domain expert.

Not that domain experts are infallible, but if it's a systematic occurrence then the problem is with the data used on Kaggle, not with the knowledge of the experts.

I mean, the whole point of scientific training and research is to have domain experts who know their shit, know what I mean?

The people who win Kaggle competitions are consistently machine learning experts, not domain experts.


> Since our goal was to demonstrate the power of our models, we did no feature engineering and only minimal preprocessing. The only preprocessing we did was occasionally, for some models, to log-transform each individual input feature/covariate. Whenever possible, we prefer to learn features rather than engineer them. This preference probably gives us a disadvantage relative to other Kaggle competitors who have more practice doing effective feature engineering. In this case, however, it worked out well.

> Q: Do you have any prior experience or domain knowledge that helped you succeed in this competition? A: In fact, no. It was a very good opportunity to learn about image processing.

> Do you have any prior experience or domain knowledge that helped you succeed in this competition? I didn't have any knowledge about this domain. The topic is quite new and I couldn't find any papers related to this problem, most probably because there are not public datasets.

> Do you have any prior experience or domain knowledge that helped you succeed in this competition? M: I have worked in companies that sold items that looked like tubes, but nothing really relevant for the competition. J: Well, I have a basic understanding of what a tube is. L: Not a clue. G: No.

> We had no domain knowledge, so we could only go on the information provided by the organizers (well honestly that and Wikipedia). It turned out to be enough though. Robert says it cannot happen again, so we’re currently in the process of hiring a marine biologist ;).

> Through Kaggle and my current job as a research scientist I’ve learnt lots of interesting things about various application domains, but simultaneously I’ve regularly been surprised by how domain expertise often takes a backseat. If enough data is available, it seems that you actually need to know very little about a problem domain to build effective models, nowadays. Of course it still helps to exploit any prior knowledge about the data that you may have (I’ve done some work on taking advantage of rotational symmetry in convnets myself), but it’s not as crucial to getting decent results as it once was.

> Oh yes. Every time a new competition comes out, the experts say: "We've built a whole industry around this. We know the answers." And after a couple of weeks, they get blown out of the water.

Competitions have been won without even looking at the data. Data scientists/machine learners are in the business of automating things -- so why should domain knowledge be any different?

Ok, sure it can help, but it is not necessary, and can even hamper your progress: You are searching for where you think the answer is -- thousands are searching everywhere and finding more than you, the expert, can.

It's very closely correlated to reality.

If you work through the data, you'll find things like women, children and first class passengers had a higher survival rate than men with lower class tickets[1].

This matches exactly the stories of what happened: Staff took first class passengers to the lifeboats first, then women and children. Then they ran out of lifeboats.

So the data shows correlation, and eye-witness accounts shows causation. That's close to the ideal combination: eyewitness accounts can be unreliable because we can't know how widespread they are, and correlation doesn't show causation.

But the combination of them both is pretty much the best case for studying something which can't be replicated.

[1] See examples like
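The kind of tally described above (survival rate by sex and by class) is a one-liner with pandas. The rows below are made-up illustrative values, not the real passenger records:

```python
import pandas as pd

# A handful of illustrative rows in the shape of the Titanic table
# (values are invented for the example, not the actual manifest).
df = pd.DataFrame({
    "sex":      ["female", "female", "male", "male", "female", "male"],
    "pclass":   [1, 3, 1, 3, 2, 2],
    "survived": [1, 1, 0, 0, 1, 0],
})

# Survival rate per group: the mean of a 0/1 column is the
# fraction of that group that survived.
by_sex = df.groupby("sex")["survived"].mean()
by_class = df.groupby("pclass")["survived"].mean()
print(by_sex)
print(by_class)
```

Running the same two lines against the real dataset is how the correlations discussed in this thread are usually demonstrated.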

This is only one of many aspects of that event. The data reflects that the efforts of organized evacuation in the beginning were efficient.

But any attempt to frame it as a "prediction", an accurate model of the event or adequate description of reality is just nonsense.

To call things by their proper names (precise use of language) is the foundation of the scientific method. This is a mere oversimplified, non-descriptive toy model of one aspect of a historical event, made from statistics of a partially observable environment. A few inferred correlations reflect that there was not total chaos, but some systematic activity. No doubt about it. But it is absolutely unscientific to say anything else about the toy model, let alone to claim that any predictions based on it have any connection to reality.

That is absolute nonsense.

There is clear correlation between gender and survival rates. Given the data, a decent prior would absolutely take that into account.

Yes, there are other factors. But the foundation of statistical models is simplification, and descriptive statistics are an important foundation of that.

In any case, it isn't exactly clear that there are magical hidden factors which predicted survival. It appears you may be unfamiliar with the event, because basically those who got into a lifeboat survived, and those who didn't, didn't survive.

To quote Wikipedia:

Almost all those who jumped or fell into the water drowned within minutes due to the effects of hypothermia.... The disaster caused widespread outrage over the lack of lifeboats, lax regulations, and the unequal treatment of the three passenger classes during the evacuation..... The thoroughness of the muster was heavily dependent on the class of the passengers; the first-class stewards were in charge of only a few cabins, while those responsible for the second- and third-class passengers had to manage large numbers of people. The first-class stewards provided hands-on assistance, helping their charges to get dressed and bringing them out onto the deck. With far more people to deal with, the second- and third-class stewards mostly confined their efforts to throwing open doors and telling passengers to put on lifebelts and come up top. In third class, passengers were largely left to their own devices after being informed of the need to come on deck.

Even more tellingly:

The two officers interpreted the "women and children" evacuation order differently; Murdoch took it to mean women and children first, while Lightoller took it to mean women and children only. Lightoller lowered lifeboats with empty seats if there were no women and children waiting to board, while Murdoch allowed a limited number of men to board if all the nearby women and children had embarked

All this behavior matches exactly what the model tells us about the event.

I'd be very interested if you can point to something specific that is wrong about it.

All models are wrong, but some are useful.

> All models are wrong, but some are useful.


All predictions are wrong and make no sense for partially observable, multiple causation, mostly stochastic phenomena. It will never be the same.

Except that this model was useful.

The Titanic's sister ship (the Britannic) was torpedoed during WW1 and sunk. However, the lesson of the Titanic (too few lifeboats) had been learnt, and only 26 people died.

I don't know what point you are trying to make - yes, I agree that history never repeats, but lessons can be learnt from it, and they can be quantified and they can be useful.

My point was in my first comment.

OK, tell me, please, what it is that you can predict? That some John Doe, having the first class ticket in a cabin next to the exit would survive the collision of the next Titanic with a new iceberg? That being a woman gives you better chances to secure a seat in a lifeboat? What is the meaning of the word "predict" here?

>> The Titanic's sister ship (the Britannic) was torpedoed during WW1 and sunk. However, the lesson of the Titanic (too few lifeboats) had been learnt, and only 26 people died.

This happened because they made a _statistical_ model of the Titanic disaster, and learned from it? Like, they actually crunched the numbers and plotted a few curves etc, and then said "aha, we need more boats"?

I kind of doubt it, and if it wasn't the case then you can't very well talk about a "model", in this context. It's more like they had a theory of what factor most heavily affected survival and acted on it. But I'd be really surprised to find statistics played any role in this.

This happened because they made a _statistical_ model of the Titanic disaster, and learned from it?

No - statistics as the discipline that we think of today wasn't really around until the work of Gosset[1] and Fisher[2] which was done a few years after this.

I'm sure you noted that I was very careful with what I claimed: "the lesson of the Titanic (too few lifeboats) had been learnt".

These days we'd quantify the lesson with statistics. Then, they didn't have that tool.

Instead, we have testimony[3] relaying the same story: Just one question. Have you any notion as to which class the majority of passengers in your boat belonged? - (A.) I think they belonged mostly to the third or second. I could not recognise them when I saw them in the first class, and I should have known them if there were any prominent people. (Q.) Most of them were in the boat when you came along? - (A.) No. (Q.) You put them in? - (A.) No. Mr. Ismay tried to walk round and get a lot of women to come to our boat. He took them across to the starboard side then - our boat was standing - I stood by my boat a good ten minutes or a quarter of an hour. (Q.) At that time did the women display a disinclination to enter the boat? - (A.) Yes."

So yes, I agree - it was a theory, which our modern modelling tools can show matched well with what the statistics showed happened.

My whole point is that this is very useful, unlike the OP who dismissed it as useless.




I think you are making the point for him. If you look at the predictive models people make on these, they make a big deal about your sex and status being the main indicators of who survived. The reality is that the main causal indicator for survival was access to a lifeboat.

Now, it so happens that that correlated heavily with class. But, not as much as with sex. Though, there were some places where being male hurt your chances (as you point out in the one officer not allowing men on boats), by and large these were secondary and correlated with success, not predictors of it.

Nice, that is a great pointer.
The Iris data set [1] is very famous and a popular way to test out classification techniques. It's not "big data", but can be used to familiarize yourself with some basic data mining techniques.

[1] –
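A minimal sketch of trying a classifier on the Iris dataset with scikit-learn (the choice of a decision tree here is arbitrary; any classifier with the same fit/score interface works):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# The Iris dataset ships with scikit-learn: 150 flowers,
# 4 measurements each, 3 species to classify.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(accuracy)  # typically well above 0.9 on this easy dataset
```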

I don't know anything about tensor flow except the very tip of the iceberg.

Can you know nothing about ML, AI, data analysis, and stats, then give TensorFlow some input and have it give you some output you can pretty much apply to your app?

Or do you have to know these subjects before even starting tensor flow?

Yes, you can use TensorFlow directly and not know much, here are more examples

It's OK to jump in and try it without having background information. See how far you get and start researching when you hit a wall or find sudden interest.


Thanks for the link

Regarding deep learning, what are some resources for learning strategies about improving network architectures?

I read all of these architectures in research papers, but I'd really love to learn how to start iterating on them for a particular domain.

I actually recommend jumping right into the excellent Scikit-learn tutorials,

Unlike some of the other complicated tools, sklearn is just a "pip install" away and includes all sorts of examples of different problems. Classification? Regression? Clustering? Representation learning? Perceptual embedding? Odds are, some part of sklearn covers all of that.

having done ML R&D for a few years, their docs are great for orienting newcomers to the field
The scikit-learn tutorials are great. Another nice thing about scikit-learn is that the api for a lot of different ML algorithms is very similar, almost identical.

This means that you can set up a train and test set and swap in and out random forest, SVM, naive Bayes, logistic regression, and various others.

Read about them one by one, try to understand the algorithms generally, test them out, see how they perform differently on different data sets.
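The swap-in/swap-out workflow described above can be sketched against one of scikit-learn's bundled datasets (the dataset and the particular model list here are illustrative choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Because every estimator exposes the same fit/score interface,
# trying a different algorithm is a one-line change.
models = {
    "random forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
    "naive bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=5000),
}
results = {name: m.fit(X_train, y_train).score(X_test, y_test)
           for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.3f}")
```

Comparing the printed accuracies across models, and across datasets, is exactly the "see how they perform differently" exercise suggested above.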

It all depends on how you like to approach a new subject, but I think this is more fun and motivating than going straight into the mathematics behind the algorithms right away (which is more along the lines of Andrew Ng's excellent course). I'd say once you're into it and using the algorithms, then dig deeper into the core mathematics; you'll have a better context for it.

It really depends on how you learn.

Traditionally the best answer is to do Andrew Ng's Machine Learning course[1]. It's a great course, and you won't regret doing it, but it is kind of annoying that it is in a language (Matlab/Octave) you'll (hopefully) never use again.

A lot of people now recommend working through CS229[2]. I haven't looked at it depth, but I've been impressed with a lot of the class projects.

If you like books, then Statistical Learning in R[3] is generally well regarded.

If you like doing stuff, then Kaggle and SciKit-Learn will throw you in the deep-end. Just be aware you can't just program, though - you really do need to understand some theory. It's good to run into a problem, and then really, really understand the reasons behind what you are seeing.




Thank you.
If you want to do Andrew Ng's ML course, but want to do it in python:
So I've been feverishly researching both topics, and how to learn them. I've decided to take these 2 courses from Coursera (starting soon, in Oct 2016). Both are free, and for the ML course, one can pay after completing it and get a certificate (there's no certificate for the Bitcoin course).

Hope you too will join these, and post updates on this thread right here.

> Is there any way for those of us with average intelligence to contribute to tools like this?

The people doing this aren't magical geniuses; they've just put the time and work into the subject and have been able to get themselves into a position they can do this all day surrounded by others they can collaborate with.

As with most human endeavors, the trick is to just get started, and not get frustrated and give up when it turns out you don't know anything at first. Some people don't mind starting with a ton of abstract learning about the subject, others prefer trying to accomplish specific tasks, learning the theory along the way.

For your specific question, as with all software, there's likely a lot to be done that has little to do with the main task of the tool and is just the everyday tasks of ease of use, interoperation with other tools, testing, etc. If on the other hand you want to get into the scary world of AI, like many others I'd recommend the Coursera machine learning course as a great place to start[1]


The course on coursera uses Octave for the exercises. This guy has rewritten all the exercises in Python:
I completed that course awhile back. Definitely a really great course if you can keep up. It is quite difficult, and I was quite terrible at it.

I don't think I have any affinity for the topic, but at least now I can read/discuss the topic without being completely blind. There are also a lot of smaller components in the course that I found useful even not working directly in AI/ML. Just some general data modelling and linear algebra stuff was nice to pick up on the ride.

Some AI professors recommend first jumping into a ML framework such as scikit-learn or Keras, which are more approachable, playing with them to the point where a little practical intuition will develop, and only then follow up with theory. Works better in practice than loading up on courses and math at the beginning, and helps the student form practical interest in the field and be more emotionally invested. In other words, first, hack your way in, and only then open the books.
Good advice for life in general.
I think this is the exact reason most people tend to struggle with any type of learning. Even topics like math and science which require building block approaches can benefit from taking a high level view and refining where appropriate.
Thank you for the link that course!
> The people doing this aren't magical geniuses

Says the person named 'magicalist'. :)

double upvote
Thank you, I actually just found him on Coursera( before your reply. I will definitely check it out.

I don't think that I want to make the switch, just know what is what and dabble a little bit.

it's very good, but personally I'd start with the Andrew Ng and Hinton courses

I think the Udacity course is best if you know principles of machine learning and want to apply them in a more professional toolchain and learn Tensorflow

One hypothesis is that there is a trend of non-ML-familiar developers currently working on getting a better grasp of ML. Such repositories provide something that e.g. web developers can take a look at, with reduced friction.

Disclaimer: I'm such a developer! (currently going through the last bits of - but I've noticed others around me recently.

Here is a list of top 16 machine learning books aimed at the data scientist (not what you asked but there is a short description with each book). These books are used by universities like Stanford, Caltech, MIT, Harvard, etc.

And just to make sure that you are aware of it, there are lots of opencourseware with lecture notes and videos.

Caltech/Yaser Abu-Mostafa book info + lecture video at

Stanford/Andrew Ng for course info see for video lectures see

The article mentions

does anybody know:

- How much work is required by week?

- Pre-requisite?

- Programming language used?

Other feedbacks?

I just finished the course last week. I really liked Andrew's teaching style and enjoyed it very much. The workload is light compared to on-campus courses at my school. I strongly recommend anyone interested in ML/AI to take the course.
There are no prerequisites; Andrew manages to make all the maths self-contained. He uses MATLAB/Octave, and the assignments can take around 8 hours or so of work.
I'm currently going through the course (at week 5 at the moment).

Work per week: can vary between 6 to 12 hours depending on the week and your background and willingness to dive into some demonstrations.

Pre-requisite: first, you definitely need to set aside some quality time, otherwise you'll drop out (I heard the completion rate is around 10%). I feel that having a bit of background in matrix computation and algebra helped. Some parts are easy, other parts are harder (but YMMV).

Programming language used: octave; while it's nice to work at that level of abstraction, I found the "feedback loop" very slow when you submit exercises, so in the end I used with to have a faster feedback loop (anyone can email me at [email protected] to get more details). This is especially useful when dealing with vectorization of computations.

Overall I found that the course is great for me (coming from an ETL, programming background with some solid maths exp at some point), and I'm learning quite a bit; a good introduction with a pragmatic viewpoint.

You can work through the Coursera variant of Andrew Ng's course without a deep math background:

More in-depth videos of the course are on YouTube:

If you are interested in learning the basics of Machine Learning I really recommend Andrew Ng's course on Coursera[0]. It starts off very basic and requires almost no prior knowledge other than some linear algebra. It evolves to more advance topics like Neural Networks, Support Vector Machines, Recommender Systems, etc. It is amazing how much you can do after just a few weeks of lessons. It's a very practical course with real world examples and exercises each week. The focus is mostly on using and implementing machine learning algorithms and less so on how and why the math behind them works. (To me this is a negative, but I'm sure some people will appreciate that.)


It's part of the course URL when you're enrolled at the course home, e.g: -> algs4partI-010 -> machine-learning
The ML course has quite some time ago been made available as an untimed "always-on" course at

Are the quizzes etc free still? My impression from the post was that the new site has stripped access to the exercises unless you pay up.
I don't know about this course, in general you now have courses where the quizzes are no longer accessible unless you pay and courses where they still are.
I just started the Coursera Machine Learning course. I know that it's probably a bit under your skill level, but the second half of the class might give you some broad education about what's possible in the field of machine learning.
After taking Andrew Ngs course I highly recommend you take cs231n afterwards.

Assuming you know a little bit of calculus you'll be able to create and train fully connected networks, convolution networks, recurrent networks, and long short term memory networks using only numpy. You will feel very comfortable with deep learning after taking this class.

The lecture videos for the course got taken down due to complaints, but you can recover them via google

+1 to that. I'm on week 4 and every week apart from the first one I've learned interesting stuff that wasn't covered in my undergrad CS course.
I started this course to learn the maths, technical terms, approaches and 'philosophy' of machine learning. I'd highly recommend it even with the stats knowledge you have, you'll still get a lot out of it. I wanted to be able to understand articles about deep learning and this course allowed me to do that.

I recommend watching the videos at a higher speed at least, and you can skip ahead if you are not doing the course to get a validated grade, although I suggest working through the whole course.

Oh yeah, Ng talks pretty slow. 1.25 is a minimum cruising speed and I often go faster.
I believe that the best way to learn ML is by first learning to program the algorithms and then learning the math. This is the opposite of what people usually do, but I think it's better. The reason is that programming ML is easy, but the math behind it is very complex. I would suggest starting with the scikit-learn tutorial and later with Ng's course. Then a good book is Pattern Recognition and Machine Learning, by Bishop.
Helpful FYI: If you're interested in learning about Machine Learning (so you can use TensorFlow and Rescale, etc), I've found this to be an incredible resource:
If you specifically want to learn about tensorflow then you can enrol for this course by Google
To actually learn machine learning algorithms i found the coursera (stanford) course to be excellent. Minimal programming and math knowledge needed.

I also recommend Welch Labs' Neural Networks Demystified series[0]. It is a combination of the Coursera course and the YouTube videos. It gets into some of the math, while still keeping it basic.


The possibility of deep NLP suggest that there are tremendous opportunities for building a personal assistant. For Deep NLP, I would suggest course CS224D at Stanford on YouTube (going on now but also delivered last year:

For Deep NLP, you will need to be solid on linear algebra and machine learning. For introduction to machine learning, check out Andrew Ng at Coursera: and my favorite talks on Linear Algebra are the ones done by Gilbert Strang:

For web crawling, there are plenty of open source libraries. If you are not familiar with it, check out the Common Crawl: This is a great source of data to crawl.

If you focus solely on NLP and data from the Common Crawl (or even Wikipedia), then you will see where you stand as the smoke clears and you feel comfortable with the state of the art techniques.

Ignore all the naysayers. The good news is that it has never been easier to get started in deep NLP. Once you have the experience of training a model and seeing how well it works, you can decide on the next steps. Perhaps, you will find a niche that is not well-served that you can go after as a first step of a personal assistant.

Good luck.

If I didn't get this course[1], I wouldn't understand what you are talking about.

[1] -

Andrew Ng's Coursera course on Machine Learning:

There's also this course on Neural Networks by Geoffrey Hinton:

If you do the Andrew Ng course, you'll have to learn/know Octave (or Matlab). Otherwise, Python is very heavily used in ML these days and you could do a lot worse than learning Python. R and Octave are both very useful as well. And, as you note, there is a lot of machine learning software available in the JVM world as well. Scala in particular seems to be gaining some ground in this world.

Mar 25, 2016 · 2 points, 0 comments · submitted by olalonde
If you haven't already, you should check out the first and second week of Andrew Ng's Coursera Course on Machine learning. He exclusively talks about gradient descent the first few weeks.
Second the motion. Ng really explains gradient descent very well in that course.

As far as the equations go, if you don't know multi-variable calculus, you might not be able to follow the actual derivations, but I don't think that's all that crucial, depending on what your goals are. Certainly you can apply this stuff without knowing the calculus behind it. And in the Ng course, he gives you all the derivations you need to implement gradient descent for various purposes.

Anyway, here's my quick and dirty, way too high level overview of the whole calc business:

All you're really trying to do is optimize (minimize) a function. Given a point on the graph of that function, you need to know which direction to move in in order to get a smaller (more minimal) output. To do that, you calculate the slope at that point. Calculating the slope at a point on a curve is exactly what calculus does for you. If you were working with only one variable, the derivations would be trivial, but once you get into higher dimensional spaces and the need for partial derivatives, that's where the calculus gets a little trickier. But in concept, you're always just doing the same thing... calculating the slope so you know where to move, and by how much (the steeper the slope, the bigger the hop you make in a given iteration).
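The loop described above is only a few lines of NumPy. This is a minimal sketch, not from the course: the quadratic function, learning rate, and iteration count are all made-up illustrative choices.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step downhill: the hop is proportional to the
    slope, so steeper slopes mean bigger hops."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose gradient is
# (2(x - 3), 2(y + 1)); the minimum is at (3, -1).
grad = lambda v: np.array([2 * (v[0] - 3), 2 * (v[1] + 1)])
print(gradient_descent(grad, [0.0, 0.0]))  # approaches [3, -1]
```

With two variables the "slope" is the vector of partial derivatives, but the update rule is the same one-variable idea applied per coordinate.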

Mar 06, 2016 · godzillabrennus on Q&A: Andrew Ng
Andrew Ng has a machine learning course online if you are interested:
Probably one of the best online courses available. He's very engaging and makes some pretty tough concepts seem straightforward.
I took the in-person course at Stanford. I recall the first problem set in CS229 was about as hard as the entirety of the Coursera course combined...
(Disclaimer: I have only seen part of the material, for both courses) This seems true in terms of difficulty, mostly because CS229 assumes a much stronger math background and provides more of the theory for a lot of the ML techniques being used. However, if you want to know the "technique space" of usual ML algorithms and when/how to apply them, the Coursera course seems to be 70-90% of the way there in terms of breadth of the material (with a bias towards the most commonly used methods, so perhaps 70% of the techniques, applicable to more than 90% of the use cases).

So, if you want to do ML in industry, applied to common problems, the Coursera course will get you there. If you want to become an ML researcher, the Coursera course will fall way short of that, while CS229 might be at least a first step (followed by CS229T/STATS231 [1]).

[1] Lecture notes online, too:

It's notable that there are actually no enforced prerequisites for 229, besides basically being OK at math. Which meant that I knew an education major who sat down, tried to take it and sort of crashed and burned in the third week.
Are there any enforced requirements at Stanford? I don't think I ever had any formal review of the classes I signed up for making sure I actually met the requirements -- there must've been at least a handful of classes I had signed up for that I didn't have the official prereqs for.
Well, everyone can judge by themselves, the actual CS229 lectures are also available online:

EDIT: And so are the problem sets:

There is nothing to indicate the Coursera class and CS 229 are the same. CS 229 is a grad-level machine learning class that assumes heavy math prerequisites; the syllabus is completely different.

The Coursera class is closest to CS 229a at Stanford.

Fair enough, I just wanted to note that people can also watch the CS229 lectures online.
I've been taking this course recently. Due to time constraints, I haven't kept up with the rest of the course, but the content is phenomenal.

If you want to learn machine learning, check out the course.

Part of self-teaching may include taking formal online coursework voluntarily. For example I'm currently halfway through
Feb 29, 2016 · 3 points, 0 comments · submitted by e19293001

I see Andrew Ng's class is being offered again for the first time in a few years, but other great classes aren't, like LAFF, Thiel's startup class etc.

Seems like a travesty that these incredible classes are created by our best minds, but then aren't continuously offered. (the materials are available for self-study, but not the graded homework, support forums, TAs, certificates of accomplishment etc.)

Coursera, the prof, the university, have no incentive to keep it going. Philanthropists should really fund continuing development of the best-of-the-best MOOCs so students can keep learning, best practices can keep improving.

There is no good central discovery and reputation for MOOCs. There needs to be a place I can go and see that e.g. LAFF is the most popular / highest rated for intro to linear algebra, and here's how much work it represents.

There are universities who start online programs, but what we really need is an online 'degree' grantor who can say, OK, you've taken these online MOOCs from various institutions, I can tell from the fact that you logged in with your finger, your device's camera, metrics like typing style, word frequency for essays, etc. that you actually did the work, these courses are worth X number of points, here's a 'degree' or a few numbers describing the quantity and quality of your work in various disciplines at various places.

And then the best of the best should be able to tap funds to be continuously offered.

It seems more like a project for a non-profit startup (it should be self-sustaining, but doesn't seem like the sort of thing that should be an IPO candidate or revenue-maximizing, which would turn it into University of Phoenix)

Andrew Ng's popular Machine Learning course goes over most of the topics in the slides:

Linear and logistic regression, gradient descent, clustering, support vector machines, bias and variance (one of the slides was taken from the course), neural networks, etc...

You know... I believe I started this class once and didn't finish it due to time constraints. I think it's time to try again...
I signed up for this last night. Really looking forward to it.

His machine learning course is amazing! I really enjoy the lectures and I feel like I'm learning a lot. So far I understand some statistical concepts way better than I ever have in the past (and I'm only three weeks into the course!)
Basic question(s) as I am not a data scientist but have just taken a machine learning course ( )

Won't trying different combinations of hyperparameters/lambda (over a small range) help us arrive at better values instead of manually tuning? Or is that what the author meant by manual tuning?

Depends what you're tuning. If it's something like the number of trees in a random forest, definitely do that automatically. If it's the number of clusters in a clustering problem, that's where you'd be asking an expert something like "how many distinct groups of customers do you think we should try to split them into?" and go from there. But even in that scenario, the expert's opinion might just be your starting point for automatic tuning.
Author here: whether the hyperparameter tuning is done automatically or manually is not as important for what I was trying to say here. But yes, any of {grid-search, random-search, bayesian-optimization, etc} is likely to be more effective than manually tuning to squeeze out those last ounces of performance.
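The grid-search mentioned here is mechanically simple; below is a minimal sketch over a hypothetical scoring function (the parameter names, grid values, and score are made up for illustration — in real use the score would come from training and validating a model):

```python
from itertools import product

def grid_search(score_fn, grid):
    """Try every combination of hyperparameter values and
    return the best-scoring combination (higher is better)."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        s = score_fn(**params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Hypothetical validation score that peaks at n_trees=100, lambda_=0.1.
score = lambda n_trees, lambda_: -abs(n_trees - 100) - 10 * abs(lambda_ - 0.1)
grid = {"n_trees": [10, 50, 100, 200], "lambda_": [0.01, 0.1, 1.0]}
print(grid_search(score, grid))  # -> ({'n_trees': 100, 'lambda_': 0.1}, 0.0)
```

Random search and Bayesian optimization differ only in how the next candidate combination is chosen; the evaluate-and-keep-the-best loop is the same.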
I'm not a data scientist per se, but I've been working with some (boss and co-worker) to get some stuff operationalized and into production, so I've been responsible for generating inputs, helping analyze/visualize outputs, and building linear optimization models, so I've got some very basic experience.

As I understand it, one of the pitfalls of automatic tuning is that it becomes hard to account for seasonality and you will likely end up with useless parameters - for instance a customer ID is rarely a good parameter to optimize on, even as a categorical variable, except in very specific cases. It is probably a proxy variable for one or more other ones that you need to tease out of the rest of the data.

(warning, potentially me talking nonsense coming up) Automatic tuning is no substitute for a talented analyst who knows the data well and understands the goal. But if you've got hundreds to millions of parameters, you may not have another choice really.


Andrew Ng's ML Class -

Daphne Koller's PGM Class -

Dan Jurafsky's and Christopher Manning's NLP Class -

Nov 20, 2015 · 3 points, 0 comments · submitted by unsignedint
To install

They have some general walkthrough tutorials

and a nice overview of machine learning. You will need to set up an account. It is free and very informative.

hundreds of resources for machine learning can be found here


The guide's primary recommended course is Andrew Ng's Machine Learning course. The current session started November 2nd; you must enroll by the 7th. Another session is starting November 30th.


It's not time-sensitive at all. After the course has ended, you can still enter the course and do everything; you just won't get the certificate (which doesn't matter).
What exactly is the appeal of the certificate? Do they hold any weight?
Precommitting to finish the course for the certificate is a good motivational hack
Good point, people should know that.

What I had in mind is that some people get a lot from the community features on Coursera, which are more active while a class is in session. So that's all I meant.

I'm about to start the assignment for the 5th week of Machine Learning on Coursera
That's awesome! I hope to go through that course soon. I just finished 2nd week assignments of Algorithms- Design and Analysis on Coursera -
I recently wrote an article collecting the best AI resources:

Specifically, I would recommend AIMA as the best introduction to AI in general, and a fantastic video course from Berkeley:

and also Andrew Ng's course on coursera:

For neural networks there's an awesome course by Hinton:

and UFLDL tutorial:

A new session of Andrew Ng's Machine Learning class (a version of his Stanford class) starts at Coursera on Monday. It's a quite manageable introduction to the field, with some hands-on programming involved. There's just the right mix of math and theory in there, with some refresher material for some of the math.

It's as good a place to start as any, and the benefit of a scheduled class is that you'll have a community doing the same work at the same time to help you out.

Thanks for this. Incidentally, the second paper you link to is co-authored, among others, by Daphne Koller, who teaches [this great course on probabilistic graphical models]( and Andrew Ng, who teaches [the best-known intro MOOC in machine learning](
I am learning the course of Machine Learning [1] at Coursera. I didn't know he co-founded Coursera. Can't believe this awesome course is free. Andrew Ng is really a good teacher. Thank you Andrew Ng and Coursera.


I'm taking his machine learning course [1] and it's absolutely fantastic. It's one of the modern wonders of the internet that you can have such a tutor for free. Plus it's fascinating to see what people go on to do after the course [2].



I've heard that the course is diluted compared to the brick-and-mortar school one, is this true?
For those that don't know: although Octave is interpreted, the interpreter acts as an interface to highly optimized vector operations under the covers. Signal processing and a lot of other tasks come down to manipulating large arrays of data using similar operations across the entire array, and these kinds of operations execute very nicely inside Octave. Octave also lets you write them at a very high level, like a = b * c, while under the covers it's doing millions of operations.

If you'd like to give Octave a try and learn about Machine Learning at the same time, try this excellent and fun free course!

One of the later lessons in that class is a mind-blowing signal processing thing. In that exercise, they show how you can put multiple mics in a room, with multiple people talking over each other, and the computer separates out the different sources of sounds, and outputs a clean wav file for each sound source with the sounds isolated from each other. It blew my mind, and was implemented in Octave. And they teach you exactly how to do it yourself.

All the above concepts can, of course, be done in C++ or NumPy or whatever, they just use Octave to teach the course.
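The same `a = b * c` style carries over directly to NumPy; a small sketch (array sizes are arbitrary) comparing the vectorized expression with the explicit loop it replaces:

```python
import numpy as np

b = np.arange(100_000, dtype=float)
c = np.arange(100_000, dtype=float)

# Vectorized: one high-level expression; the hundred thousand
# multiplies happen in optimized native code under the covers.
a = b * c

# The explicit loop the vectorized form replaces (much slower,
# since every iteration goes through the interpreter):
a_loop = np.empty_like(b)
for i in range(len(b)):
    a_loop[i] = b[i] * c[i]

assert np.array_equal(a, a_loop)
```

Both produce identical results; the vectorized version is typically orders of magnitude faster for large arrays.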

I don't believe that exercise is part of the course anymore.
That's really sad. I was hoping to do something similar to that. Do you happen to have any of those materials around?
Here are a couple of resources to get you started. Looking at these might give you ideas of what else to search for. Signal Processing for Machine Learning - YouTube (even though this is for MATLAB, Octave is designed to be compatible)

If that's not cool enough, GNU Octave is available for Android devices.

The group is led by Prof. Ng, who has an ML course on Coursera - lecture notes of the in-class course are here (though without the lectures they may be hard to follow :))
Definitely worth it. I had to drop out around week 5 due to a lack of time, but just started a new job that'll take me back to 40-hour weeks (down from 80-100), so I'll give it another shot next time they run the course.
The latest session started today, from 5 Oct to 27 Dec.
Ng is one of those great profs that combine the rare gifts of being able to teach as well as be a leader in the field at the same time. I've had profs that are far less accomplished and far more arrogant. There's probably a correlation between those features, now that I think about it.
For anyone interested, I have been really interested in deep learning and have been using the following resources:

For image processing (CNN)

For natural language processing (RNN)

I also found the following Coursera course helpful

Thank you, a lot of the papers in DL/Sparse Coding aren't as clear as the sources you shared.
Does anyone know how to access past courses on coursera if you didn't sign up in a timely manner?
It depends on the course; some courses are kept available and others are shut down after the session is over. I think the choice is left up to the instructors. I started downloading all of the videos when I sign up for a course for this reason, actually, because I tend to go through them too slowly to finish with the main group.
I think you just sign in and click on a big green button "Go to Course" on the right - at least it worked for me with neuralnets course by Hinton, which is really great.
Thanks. I really appreciate it when folks like you on HN post learning materials in the comments.
Thanks, i will see it, seems very interesting.
These are great resources. There is also a ML class offered by Georgia Tech at Udacity under the OMSCS program.
Here might be a good place to start learning:

Much of what research is, is understanding what you're working on in the first place. Getting a solid foundation is a must, or you risk being very lost for some time, and will be fundamentally limited in what you can contribute.

learning from that awesome course. on week 6 now :)
I think you are talking about Andrew Ng's course.

I completed it and can't recommend it more highly. It is a really excellent, dense course and Ng is a very good teacher.

I've completed the course as well - have you used any of the knowledge from it on anything in particular after you completed the course?
I'm taking the Coursera course right now. The course page at Stanford has a lot of student projects. The breadth of applications is pretty huge, definitely worth a check if you're looking for an idea.

Geoffrey Hinton's archived course is all about neural nets, I think you can enroll in the archived version, no code, just theory.

Sep 09, 2015 · pigs on Free Data Science Books
This course is great: It's all done in GNU Octave, which is mostly compatible with MATLAB.
If this feels like jumping in the deep end, I highly recommend Stanford's online course on this subject:

I agree. Andrew Ng's course is a great place to start with ML. We've got a list of great online ML/DL courses here:
Sep 06, 2015 · 2 points, 0 comments · submitted by Michie
I recommend this course for a general overview on machine learning:

There are also some good resources in edX...


I've worked for 4 startups and on several projects doing web development. I was also Community TA for the Startup Engineering [1] class and for the Machine Learning [2] class at Coursera (Stanford).

I work mostly as a backend engineer and occasionally fixing and writing some Javascript on the frontend (jQuery, Backbone.js). I also can oversee backend development by doing project management and issues and tasks coordination.

I use a methodology for each project like setting up a deployment process/git branching model (development, staging, production), etc., and I'm very pragmatic about researching and using proven solutions (ie: code) to each problem. I code in Python: Django, Tornado, GAE and node.js: Express. Git for source control (Github/Bitbucket). Linux, vim.

Drop me a line: [email protected]



I really have to take exception to this (characterizing MOOCs as some sort of useful-only-for-signalling high-tech "vocational training" for the privileged).

I've been working through Andrew Ng's Machine learning class ( on Coursera, and I think it's wonderful. It begins a bit slow-paced, and I definitely supplement the lectures with readings and the class notes for CS 229 ( that broadly match the content of the online course, but I absolutely love the course.

The lectures are well-paced and have 'in-line' quizzes to test your understanding as you go. The problem sets are easy to do at any time, with optional components and instructions that let you understand the material better if you want to deep dive. The course itself is available to start whenever you choose, which means barrier for entry is very low.

MOOCs may not be everything that they first promised, but they definitely win in terms of the sheer accessibility of content and the flexibility of the format. I would not be learning machine learning right now if the only available format was dead-tree or (worse) putting together information piecemeal from blog posts.

I don't think this is much of a response to the piece's argument. Yes, high-level tech classes are a use-case where MOOCs work. But MOOC companies have repeatedly presented them as useful for intro-level and non-technical courses--and they aren't.

It's great if MOOCs help you with high-level tech stuff. But that's not what most students want or need. Someone in a low-level math or composition class is likely to need far more interaction with their instructor (which is what I take the article to be arguing).

I have a folder to bookmark machine learning resources.

Here's another good one from the creator of coursera (Stanford grad I think)

This course is taught by Andrew Ng. Professor Ng is not only one of the founders of Coursera, but is also a prof at Stanford and Chief Scientist at Baidu. Machine Learning (and Deep Learning in particular) is his specialty, so he is a pretty good resource on the topic :)
Great I'll give it priority then in my bookmark!
Make sure you have

It's Bengio's (a very well known deep learning researcher) upcoming textbook. I would highly recommend it to anyone interested in the deep learning/neural net subset of machine learning.

[1] -- First, you need to learn machine learning(ML) basics. Andrew Ng's course on Coursera is a good start:

It doesn't teach you ML with Python but it is extremely important to learn the ML concept without any programming language in mind. In addition to that course, any Google search will help you a lot. There are a lot of good explanations of ML concepts on various websites. If you don't understand how algorithms work, you will end up with copying and pasting example codes without knowing what you're doing. You need to imagine what you want to do in your head before you type any letter.

[2] -- Once you have the initial introduction, you can use Python to implement ML concepts. Fortunately, Python has a very easy to learn ML package: Scikit-learn ( It's free and is used by various companies such as Spotify and Evernote. Scikit-learn has a great documentation and many examples that will make the whole learning process exciting.

[3] -- After you feel comfortable with ML in Python, if you don't have datasets of your own, you can find a lot of datasets on UC Irvine's machine learning repository:

The more you practice, the more comfortable you feel with playing with data. To cover a ML technique very well, play with every single parameter of the scikit-learn functions of that technique by using the same dataset. Also, always try to include visualization of the data (scikit-learn has examples with matplotlib to learn from how to do it) so you can actually see the changes of the implementation when parameters of the function change. This will make everything a lot easier.
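A minimal scikit-learn workflow of the kind described above looks like this (the iris dataset and logistic regression are arbitrary illustrative choices; assumes scikit-learn is installed):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Per the advice above: re-run this while varying the parameters
# (C, penalty, solver, ...) on the same dataset and watch how the
# held-out score changes.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```

Every scikit-learn estimator follows this same fit/score (or fit/predict) pattern, which is why it is such an easy package to learn with.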

Good luck!

Machine Learning is a sub-field of computer science and an area of intense current research. It has nothing to do with Python (a programming language) except that some machine learning algorithms might be implemented in Python.

You might find Andrew Ng's Stanford Coursera course a good place to start.

Well, you could certainly argue that python has the best machine learning libraries. So it's at worst a relevant programming language to the field.

edit: +1 to Andrew Ng's Coursera course though!

Was there a reason Ng decided to teach the course in Octave rather than Python? The only time I've ever used .m files was during the course.

I appreciated the challenge of thinking from an array-based language, but I felt it held me back from directly comparing my solutions to the tutorials with external sources. (Unless that in and of itself was the reason.)

"Introduction to Statistical Learning" by Trevor Hastie et al. [1] They have a free online class through Stanford. [2] Sign in to their system and you can take the archived version for free.

ISL is an excellent, free book, introducing you to ML, you can go deeper, but, to me this is where I wish I'd started. I am taking the Data Science track at Coursera (on Practical Machine Learning now) and I am kicking myself that I didn't start with ISL instead.

Now, I know you specifically asked about Python, but the concepts are bigger than the implementation. All of these techniques are available in Python's ML stack, scikit-learn, NumPy, pandas, etc. I don't know of the equivalent of ISL for Python, but if you learn the concepts and you're a programmer of any worth, you will be able to move from R to Python. Maybe take/read ISL, but do the labs in Python, that might be a fun way to go.

Lastly, to go along with ISL, "Elements of Statistical Learning" also by Hastie et al is available for free to dive deeper [3]

[1] --

[2] --

[3] --

I also think this is one of the best entry level books, and the Stanford course looks good. This is what I recommend to people. In some ways, R is a very good match for this material, and you could move to python later.
I started with Andrew Ng's Machine Learning course on Coursera [1]. He presents the entire subject, including NNs and prerequisite theory, in a non-intimidating, intuitive fashion. There are simple coding exercises, such as digit recognition using NNs.

Once you're familiar with the basics, you can go deeper into the subject with the books suggested here.


I think you could play around as a hobby. You might try Theano as a place to start (for LSTM: If you become passionate about neural networks you might find yourself in grad school simply because that's a great place for diving in more deeply. It's really really helpful to know machine learning. Andrew Ng's Coursera is a great place to start:
I'm doing the Coursera machine learning course. You guys can try it too: It's very relevant to this topic. Good explanation is presented here too.
There is a nice book by my profs (Y. Bengio, A. Courville) and a former student of the lab (I. Goodfellow) here: The chapter on RNNs is pretty enlightening. You probably want to start with something the goes through feedforward networks, convolutional, etc. first though.

Andrew Ng's Coursera course does a super basic NN in MATLAB which might be good to shake the rust off [1]. Hugo Larochelle's youtube course [2], Geoff Hinton's Coursera course [3], and Nando's youtube Deep Learning course [4] are all very good. There are also some great lectures from Aaron Courville from the Representation Learning class that just finished here at UdeM [5]

If you want to do this for real (not self-teaching RNN backprop on toy examples), tool-wise you generally go with either Theano (my preference) or Torch - implementing the backward pass for these (and verifying gradients are correct) is not pretty.

Theano does it almost for free, and Torch can do it at the module level. Either way you definitely want one of the higher order tools!






Thanks for the references!
Someone linked to a coursera course on machine learning in that thread, but the URL seems to have changed. It's now found at

"Andrew Ng's Stanford course has been a godsend in laying out the mathematics of Machine Learning. Would be a good next step for anybody who was intrigued by this article."

I don't remember Andrew Ng's Coursera class giving a satisfying introductory mathematical treatment. I remember frequent handwaving away of the calculus intuition in favor of just dropping the "shovel-ready" equations into our laps so that we could do the homework. If you wanted a better treatment you had to dig it up for yourself (which wasn't too hard if you visited the forums, but still).

Has it been supplemented since then?

thanks! I am currently doing the Andrew Ng one (, saw a Geoff Hinton talk on youtube and didn't even know about this course.
I think these courses are awesome as well:

6006 Introduction To Algorithms from MIT

Machine Learning from Stanford: Learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself.

My main aim with this list was to have a collection of lesser known (but awesome) courses. That's one reason why I stayed away from adding MIT's OCW or a MOOC on the list.
awesome! Yeah the list is super cool!
My guess is that you won't find any course that explains all the prerequisite math. It's probably more useful to build a solid foundation in probability theory (and therefore calculus) before going on.

For machine learning, a good place to start is Andrew Ng's course on Coursera:

It's pretty light on math, while at the same time giving you experience in implementing and understanding these techniques.

From there, I might recommend Learning from Data and the associated video lectures:

It is a bit of a jump, but it is a great course in presenting the field of machine learning and explaining the mathematical and statistical underpinnings in a systematic way.

I just finished the Coursera course by Andrew Ng. It was great. The only hand-waving done with the math was when calculus was necessary. You can take some extra time to do that work yourself if you like, but you will not be missing the underpinnings of why things work statistically. The introduction to neural networks was what finally gave me that aha moment.

It is a very self contained course that is quite easy to follow. You can skip the programming exercises if you don't have the time.

For anyone interested in more about the specific math of neural networks, has a couple good introductory chapters that give overviews of most of the necessary topics for NNs, but also provides additional resource suggestions if you need more in-depth info on a certain subject.
I think you should start working on your math (Khan Academy courses) and ML foundations (Andrew Ng's Coursera course). Then Geoffrey Hinton's Coursera course on neural networks could be a gentle introduction to neural networks, deep learning and their applications. Last but not least, do a small project on deep learning or try out a few Kaggle competitions to deepen your understanding.


I will try to list resources in a linear fashion, in a way that one naturally adds onto the previous (in terms of knowledge)


First things first, I assume you went to high school, so you don't need a full pre-calculus course. This would assume you, at least intuitively, understand what a function is; you know what a polynomial is; what rational, imaginary, real and complex numbers are; you can solve any quadratic equation; you know the equation of a line (and of a circle) and you can find the point where two lines intersect; you know the perimeter, area and volume formulas for common geometrical shapes/bodies; and you know trigonometry in the context of a triangle. The Khan Academy website (or simple googling) is good for filling any gaps in this.


You would obviously start with calculus. Jim Fowler's Calculus 1 is an excellent first start if you don't know anything about the topic. Calculus: Single Variable is the more advanced version, which I would strongly suggest, as it requires very few prerequisites and goes into some deeper practical issues.

By far the best resource for Linear Algebra is the MIT course taught by Gilbert Strang. If you prefer to learn through programming, might be better for you, though this is a somewhat lightweight course.


After this point you might want to review single variable calculus through a more analytical approach on MIT OCW, as well as take your venture into multivariable calculus

An excellent book for single variable calculus (though in reality it's a book on mathematical analysis) is Spivak's "Calculus" (depending on where you are, legally or illegally obtainable here, as are the other books mentioned in this post). A quick and dirty run through multivariable analysis is Spivak's "Calculus on Manifolds".

Another excellent book (that covers both single and multivariable analysis) is Walter Rudin's "Principles of Mathematical Analysis" (commonly referred to as "baby Rudin" by mathematicians), though be warned: this is an advanced book. The author won't cradle you with superfluous explanations, and you may encounter many examples of "magical math" (you are presented with a difficult problem and the solution is a clever idea that somebody magically pulled out of their ass in a stroke of pure genius, making you feel like you would never have thought of it yourself and should probably give up math forever. Obviously don't; this is common in mathematics. Over time proofs get refined until they reach a very elegant form, and are only presented that way, obscuring the decades/centuries of work that went into the making of that solution.)

At this point you have all the necessary knowledge to start studying Differential Equations

Alternatively you can go into Probability and Statistics


If you have gone through the above, you already have all the knowledge you need to study the areas you mentioned in your post. However, if you are interested in further mathematics you can go through the following:

The actual first principles of mathematics are propositional and first-order logic. It would, however, (imo) not be natural to start your study of maths with them. A good resource is and possibly

For Abstract Algebra and Complex Analysis (two separate subjects) you could go through Saylor's courses (sorry, I didn't study these in English).

You would also want to find some resource to study Galois theory which would be a nice bridge between algebra and number theory. For number theory I recommend the book by G. H. Hardy

At some point in life you'd also want to go through Partial Differential Equations, and perhaps Numerical Analysis. I guess check them out on Saylor

Topology by Munkres (its a book)

Rudin's Functional Analysis (this is the "big/adult rudin")

Hatcher's Algebraic Topology


It is, I guess, natural for mathematicians to branch out into:

[Computer/Data Science]

There are, literally, hundreds of courses on edX, Coursera and Udacity so take your pick. These are some of my favorites:

Artificial Intelligence

Machine Learning

The 2+2 Princeton and Stanford Algorithms classes on Coursera

Discrete Optimization

Convex Optimization


Dec 07, 2014 · bennetthi on Statistical Learning
I found the Stanford ML class on Coursera really amazing, although it uses Octave, not R.
In red is your model whereas in green is the real one, M being the number of parameters.

The technical term for the last one is "overfitting", if I remember correctly. But when you have an enormous amount of data, it is unlikely to happen.

It reminds me of this awesome course:

edit: The parent's parent's parent mentions overfitting for the MIT work; I don't think that would be the case with that amount of data in hand.

It's entirely possible to overfit with enormous amounts of data, as people are now creating models with enormous numbers of parameters.
Here's another way to think of it.

If the parameter space for my model includes, let's say, 10 binary decisions (which is very conservative), that's 1024 possible states of my model. If I tested all 1024 states against historical data, it is likely that some of them would do very well (depending on the general architecture of the model, of course). What if I then selected the successful minority and held them up as clever strategies? Their success would very likely have been arbitrary. By basically brute-forcing enough strategies, I will inevitably come across some that were historically successful. But these same historically successful strategies are unlikely to outperform another random strategy in the future. It's not impossible that you'll find a nugget of wisdom hidden from everyone else, just much less likely than the simpler explanation I'm offering.

So to your point, it's not just the size of the parameter space versus the data set that matters. Brute-forcing the former alone will likely produce a deceptive minority of winners.

There is a fun chapter on this topic in Jordan Ellenberg's latest book "How not to be wrong". It's called the "Baltimore stockbroker fraud".
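To put numbers on the brute-force effect described above: simulate 1024 pure coin-flip "strategies", keep only the ones that look great against historical data, and re-run just those winners on fresh data. A minimal sketch (every specific here is made up for illustration):

```python
import random

rng = random.Random(42)

N_STRATEGIES = 1024   # 10 binary decisions -> 2**10 possible strategies
N_PERIODS = 20        # historical periods to "backtest" against

def backtest():
    # Each period a strategy wins (+1) or loses (-1) by pure chance.
    return sum(rng.choice([1, -1]) for _ in range(N_PERIODS))

in_sample = [backtest() for _ in range(N_STRATEGIES)]

# Keep the "winners": strategies that netted at least +8 over 20 periods
# (i.e. won at least 14 of 20) on historical data.
winners = [i for i, score in enumerate(in_sample) if score >= 8]

# Re-run only the winners on fresh ("future") data: the edge vanishes.
out_of_sample = [backtest() for _ in winners]
avg_future = sum(out_of_sample) / len(winners)

print(f"{len(winners)} of {N_STRATEGIES} strategies looked great historically")
print(f"average future score of those winners: {avg_future:+.2f} (chance level: 0)")
```

A meaningful fraction of the coin-flippers clear the "won 14 of 20" bar in-sample, yet as a group they hover around zero out-of-sample, which is exactly the Baltimore stockbroker effect.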
Sep 22, 2014 · 4 points, 0 comments · submitted by thrush
Do this Coursera course; it's targeted towards beginners and taught by Andrew Ng (co-founder of Coursera) -
The videos are available here:

I think it is worth mentioning that "beginners" should be understood as beginners in machine learning and not beginners in computer science. Ng's course more or less assumes students have the equivalent of an upper division undergraduate or graduate background in computer science.

While it may be a valuable learning experience for a person who isn't comfortable knocking out algorithms, academic success will be rather difficult to achieve.

There is a nice tutorial here [1] about max-ent classifiers. Ultimately in neural networks there are usually a number of cost functions you can use for the last layer - softmax or cross-entropy are other possibilities that may be easier to understand, though possibly less performant for NLP tasks.
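For the curious, softmax (turning the last layer's raw scores into probabilities) and a cross-entropy loss (scoring those probabilities against the true class) fit in a few lines of NumPy. A minimal sketch, not taken from the tutorial above; the logits are made up:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; output sums to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(probs, target):
    # Negative log-probability assigned to the true class.
    return -np.log(probs[target])

logits = np.array([2.0, 1.0, 0.1])   # raw scores from the network's last layer
probs = softmax(logits)
loss = cross_entropy(probs, target=0)

print(np.round(probs, 3))      # class 0 gets the most probability mass
print(round(float(loss), 3))
```

The loss shrinks toward zero as the probability on the true class approaches 1, which is what makes it a usable cost function for the last layer.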

I thought the introduction to neural networks from Andrew Ng's Coursera course (even though it meant writing MATLAB) was quite good, and it allows you to implement backprop, cost functions, etc. while still having some other helper code to make things easier. I highly recommend working through that course if you are interested in ML in general [2].



Textbooks? Really?

How about starting with a great lecturer like:

Nando de Freitas -

David Mackay -

or the (sometimes too dense) Andrew Ng -

It's because textbooks usually go more in depth than lectures can, due to time constraints; this is why any graduate program is 90% papers and textbooks.

A great example of this is Andrew Ng's course: even though he is a co-inventor of LDA (a complicated Bayesian network model), he does not explain Bayesian analysis in his course.

Not sure how you saw the books and missed the explanation, but

"But why textbooks? Because they're one of the few learning mediums where you'll really own the knowledge. You can take a course, a MOOC, join a reading group, whatever you want. But with textbooks, it's an intimate bond. You'll spill brain-juice on every page; you'll inadvertently memorize the chapter titles, the examples, and the exercises; you'll scribble in the margins and dog-ear commonly referenced areas and look for applications of the topics you learn -- the textbook itself becomes a part of your knowledge (the above image shows my nearest textbook). Successful learners don't just read textbooks. Learn to use textbooks in this way, and you can master many subjects -- certainly machine learning."

I saw the explanation and I don't buy it. In pretty much every occasion that I've found a textbook valuable, I've found lectures by the authors far more valuable.

David Mackay's lectures are incredible and go much further in explaining the material in an understandable fashion than his excellent book "Information Theory, Inference and Learning Algorithms".


The article addresses this almost immediately.

It's fine to disagree, but crapping a 'Really?' plus some contextless links onto someone who put forth general reasoning for the nature of the recommendations and spent 1300+ words describing expectations, key takeaways and projects for those recommendations is just lame.

On a given topic, I believe the best textbook is an inferior medium to the best lecture. Rather than blathering for 1300 words, I provided links to some excellent machine learning lectures.
You sound like you've had some bad experiences with books. If you learn better with lectures, great. However, it's not for everybody -- I personally think spending time away from the computer (until I'm programming something), with a book, paper and a pen is very good time spent.
But time spent with a computer is time efficiently spent. You can't say that about books.
I read textbooks on my computer actually. Not by choice though, textbooks are difficult to find and too expensive.

I used to hate lectures when I was in school but now I sort of prefer them. It's easier for some reason. It's more passive; you just sit there and listen rather than actively read. It doesn't seem to be slower like others complain, and may even be faster. I read difficult texts very slowly and methodically, and often have to reread stuff.

Coursera has a nice course on Machine Learning[0], and the 4th and 5th weeks deal with neural networks specifically, if anyone wants to learn more and get their hands dirty with Octave/Matlab code.


I'm in that course! What's interesting is comparing it to the Stanford AI course from a few years ago. The professors have very different approaches.
Nice! I've seen this and will add it to my list :)
Yeah I'm doing that right now. I find it relatively easy, it's a good introduction course, gives a good overview, but you probably want to take a follow-up course that builds on top of that one to really get into ML. (I'm aiming for the Neural Networks course on Coursera, and then I want to look into decision trees and Bayesian networks)
Mar 04, 2014 · 2 points, 0 comments · submitted by cryptolect
There's a great Machine Learning course up on Coursera from Stanford:
Some I can recommend that are still available on Coursera:

- Introduction to mathematical thinking [1]

- Introduction to Mathematical Philosophy [2]

- Machine Learning (actually a CS course, but involves linear algebra and some calculus) [3]

- Calculus: Single Variable [4]





Many thanks!
You can learn a lot about machine learning from this course
OFFER TO VOLUNTEER - Machine Learning, Artificial Intelligence, Scientific and Open Source projects

I'm an engineer with experience working for startups doing web development. Currently I'm one of the Community TAs for the Startup Engineering class and for the Machine Learning class at Coursera (Stanford).

I'm looking for opportunities to volunteer, preferably on Machine Learning, Artificial Intelligence or Scientific projects. Being a Community TA has been a great experience and an opportunity to get a deep understanding of these topics. I'm eager to contribute to scientific, open source projects and the like.

Drop me a line: [email protected]

Startup Engineering:

Machine Learning:


ML code:

Machine Learning. I thought it was a really tough class and took more time than they say (they list the workload as 5-7 hours/week, which is maybe accurate if you are perfect and your code never has bugs you need to spend time debugging; the course is based on programming assignments in Octave where you have to demonstrate mastery of machine learning concepts). I put in a lot of extra time, mastered everything, and finished with a 100. Andrew Ng is a top-notch teacher, even though his speaking style is very low-key.
OFFER TO VOLUNTEER - Machine Learning, Artificial Intelligence, Scientific and Open Source projects

I'm an engineer with experience working for startups doing web development. Currently I'm one of the Community TAs for the Startup Engineering class and for the Machine Learning class at Coursera (Stanford).

I'm looking for opportunities to volunteer, preferably on Machine Learning, Artificial Intelligence or Scientific projects. Being a Community TA has been a great experience and an opportunity to get a deep understanding of these topics. I'm eager to contribute to scientific, open source projects and the like.

Drop me a line: [email protected]

Startup Engineering:

Machine Learning:


Me too. If anyone is offering something then do contact me as well!
pknerd , I tried to reach you on linkedin but email me at [email protected]
I'm a student in the Stanford machine learning class at the moment and wanted to take a moment to say thank you for all the contribution that the community TAs have put in to help us out!
Nov 14, 2013 · kot-behemoth on Deep Learning 101
Link for the impatient. Looks great indeed!
Nov 14, 2013 · mbeissinger on Deep Learning 101
Definitely a solid foundation in linear algebra and statistics (mostly Bayesian) is necessary for understanding how the algorithms work. Check out the wiki portals for overviews of the most common approaches.

Also, Andrew Ng's Coursera course on machine learning is amazing, as is Norvig and Thrun's Udacity course on AI.

Yes, I'm taking it now.


There's also a more sophisticated course on ML by Hinton: Have you tried it as well?
I've browsed it a bit. I'm hoping they will offer it again.

I've been following the self paced AI class in Udacity

OFFER TO VOLUNTEER - Machine Learning, Artificial Intelligence, Scientific and Open Source projects

I'm a Civil Industrial Engineer with some experience working for startups doing web development. Currently I'm one of the Community TAs for the Startup Engineering class and for the Machine Learning class at Coursera (Stanford).

I'm looking for opportunities to volunteer, preferably on Machine Learning, Artificial Intelligence or Scientific projects. Being a Community TA has been a great experience and an opportunity to get a deep understanding of these topics. I'm eager to contribute to scientific, open source projects and the like.

Drop me a line: [email protected]

Startup Engineering:

Machine Learning:

If you're interested in Machine Learning, the outstanding Coursera course on machine learning just started a couple of days ago. It covers a variety of machine learning topics, including image recognition. The first assignment isn't due for a couple of weeks, so it's a perfect time to jump in and take the machine learning course!

Sep 15, 2013 · tga on A Course in Machine Learning
If you are interested in this you might want to also look at Andrew Ng's (Stanford) Machine Learning course that is starting soon on Coursera.

Here's a comment I wrote earlier on another article. The context was learning ML with Python; of course Hal's objective is more general, but some parts of it apply here too.

The book details building ML systems with Python and does not necessarily teach ML per se. It is a good time to write a ML book in Python particularly keeping in mind efforts to make Python scale to Big Data [0].

What material you want to refer to depends entirely on what you want to do. Here are some of my recommendations:

Q: Do you want an "Introduction to ML" and some applications, with Octave/Matlab as your toolbox?

A: Take up Andrew Ng's course on ML on Coursera [1].

Q: Do you want a complete understanding of ML with the mathematics and proofs, and to build your own algorithms in Octave/Matlab?

A: Take up Andrew Ng's course on ML as taught at Stanford; video lectures are available for free download [2]. Note - this is NOT the same as the Coursera course. For textbook lovers, I have found the handouts distributed in this course far better than textbooks with obscure and esoteric terms; it is entirely self-contained. If you want an alternate opinion, try out Yaser Abu-Mostafa's ML course at Caltech [3].

Q: Do you want to apply ML along with NLP using Python?

A: Try out the Natural Language Toolkit [4]. The HTML version of the NLTK book is freely available (jump to Chapter 6 for the ML part) [5]. There is an NLTK cookbook available as well, which has simple code examples to get you started [6].

Q: Do you want to apply standard ML algorithms using Python?

A: Try out scikit-learn [7]. The OP's book also seems to be a good fit in this category (disclaimer - I haven't read the OP's book and this is not an endorsement).









Is this worth going through over picking up a textbook or two? I've found that Coursera courses are actually quite bloated. Lots and lots of empty talking, and very little substance.
For a beginner to machine learning I'd recommend Andrew Ng's course notes and lectures over any textbook I've seen. But I prefer his Stanford CS 229 notes to the Coursera material, for exactly the reasons you state: the latter is watered down. Once you really understand Andrew Ng's course notes, I'd recommend a textbook, because textbooks go into more detail and cover more topics. My two favorites for general statistical machine learning are:

* Pattern Recognition and Machine Learning by Christopher M. Bishop

* The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman

Both are very intensive, perhaps to a fault. But they are good references and are good to at least skim through after you have baseline machine learning knowledge. At this stage you should be able to read almost any machine learning paper and actually understand it.

Isn't Murphy's book more up to date and comprehensive as a reference?

Edit: Andrew Ng's Coursera course is CS 229A, not really watered down.

I'm a big fan of Murphy but its comprehensiveness means you lose some detailed explanations. Bishop really gets at those details (so does EoSL).
Depends on the course, but other than a select few, I tend to agree. Not the case with edX courses, though; I think they are held to a much higher standard (at least the programming/math ones).
I've taken the Coursera ML class. It's very easy if you have the adequate math background. And it's not very comprehensive, there are lots of machine learning methods that are not covered. So it's more like an introductory course to machine learning.

But it's absolutely commendable how Andrew Ng takes the topic to such an understandable level that a clever high schooler who knows a little about programming could take the course. There are even extra videos serving as a crash course on linear algebra and Octave programming in the first week.

So he really manages to make the topic accessible to a very large audience.

>adequate math background

Do you know what kind of math is needed, other than linear algebra?

Some calculus and linear algebra. The majority is linear algebra.
Haven't done the course, but from the preview of the videos, they cover the math you need and it's just basic, high-school level knowledge of linear algebra.
Basic vector and matrix operations. The first half of a typical freshman linear algebra course is more than enough. But like I said, there is a matrix review in the beginning, so if you're willing to study those extra lectures, then almost no prior knowledge is needed.

Also being able to take derivatives helps in a couple of places, but is not necessary.
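To illustrate how little machinery is actually required: a single gradient-descent loop for linear regression uses nothing beyond matrix-vector products and the derivative of a squared error. A hypothetical NumPy sketch with toy data:

```python
import numpy as np

# Toy data generated from y = 1 + 2*x, with a column of ones for the intercept.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

theta = np.zeros(2)   # parameters to learn
alpha = 0.1           # learning rate

for _ in range(2000):
    predictions = X @ theta                       # matrix-vector product
    gradient = X.T @ (predictions - y) / len(y)   # derivative of the squared-error cost
    theta -= alpha * gradient                     # gradient-descent update

print(np.round(theta, 3))   # converges to [1, 2]
```

Everything here is first-semester linear algebra plus one derivative, which matches the "matrix review is enough" point above.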

That's linear algebra.
Exactly ;) Damn I should have paid more attention in
The book details building ML systems with Python and does not necessarily teach ML per se. It is a good time to write a ML book in Python particularly keeping in mind efforts to make Python scale to Big Data [0].

What material you want to refer to depends entirely on what you want to do. Here are some of my recommendations:

Q: Do you want an "Introduction to ML" and some applications, with Octave/Matlab as your toolbox?

A: Take up Andrew Ng's course on ML on Coursera [1].

Q: Do you want a complete understanding of ML with the mathematics and proofs, and to build your own algorithms in Octave/Matlab?

A: Take up Andrew Ng's course on ML as taught at Stanford; video lectures are available for free download [2]. Note - this is NOT the same as the Coursera course. For textbook lovers, I have found the handouts distributed in this course far better than textbooks with obscure and esoteric terms; it is entirely self-contained. If you want an alternate opinion, try out Yaser Abu-Mostafa's ML course at Caltech [3].

Q: Do you want to apply ML along with NLP using Python?

A: Try out the Natural Language Toolkit [4]. The HTML version of the NLTK book is freely available (jump to Chapter 6 for the ML part) [5]. There is an NLTK cookbook available as well, which has simple code examples to get you started [6].

Q: Do you want to apply standard ML algorithms using Python?

A: Try out scikit-learn [7]. The OP's book also seems to be a good fit in this category (disclaimer - I haven't read the OP's book and this is not an endorsement).









For the curious, I think nothing beats the introduction-to-ML class from Stanford's Andrew Ng [1]. He lectures and explains with a clarity and consistency I don't often see.


Taking this course right now. Awesome.
I really want to take the course -- but I wish there was a textual version of it. I strongly strongly prefer text to audio/video.

Also, the voices of some of these lecturers have a sort of monotone that tends to let your mind wander off. They're just not "arresting" enough.

For instance, I took the Crypto I class part-way on Coursera and had this experience. The instructor's voice was slow and drawn out, and kind of put you to sleep. I actually downloaded the videos and just played them in VLC at 1.25x or 1.5x speed (because he spoke so annoyingly slowly).

On the other hand, Tim Roughgarden, who teaches an Algorithms class on Coursera, has an amazing "video personality". Just the way he speaks catches your attention. His passion and enthusiasm for the topic really come across. Now, I'm not saying the other professors aren't as passionate about what they teach, but some of these lecturers have a really good way of bringing their love for the topic through on video. Not everyone can (or does) do that.

You get annotated as well as original PPT slides, along with clear text transcripts of what he says in the videos. It can be a bit awkward since it's not textbook prose, but it gets the job done. I honestly think it is hard to do a better course than Ng's Machine Learning on Coursera.
That's awesome to hear! I'm taking the Coursera class now, and it's been great so far. It just started a couple weeks ago, so it definitely isn't too late to join!
I'm also taking the class. Just finished the logistic regression programming assignment. Great stuff. You can take those algorithms and kind of add a bit of magic to your software.
Thanks very much for pointing this out; I've joined the class now and hope I can catch up.
Haha me too late to the party. I just joined. Frantically watching the video lectures since the assignments are due today (hard deadline)!
Okay, now you got me scared. The hard deadlines for all my assignments are on July 8th 8:59AM (that's CEST, so it's probably July 7th in PST.)
I took this one and it started on Apr 22 and ends around July 1st I think. So not sure what course you are talking about.
Well, this is weird. I'm taking the exact same course and here's what I see for the first programming assignment: (same with review questions)
Sorry, my bad. I assumed that since the hard deadline for the review questions was today, the same must hold for the programming assignment.
Keep it going! I attended the very first one (~75%), and then did it a second time last year. It was then easy to get a perfect 100% score, having done most of the exercises the first time through.

It is one of the best Coursera classes. I had a blast, and strongly recommend it. I decided to continue learning ML mostly because of Prof. Ng.

I wish Coursera followed the Udacity model. I always find out about these classes after they're already weeks in progress or over.
Unless you're attached to getting a certificate of completion, you can pretty much follow the Udacity model. As long as the course hasn't finished, sign up and get around to the videos and assignments when you get to them. There isn't the same discussion forum interchange, and your homework isn't graded, but they don't drop the class from your list even if you do nothing during the run.
Note that you'll want to be careful to cache the materials offline if you do this, especially if you plan on "catching up" after the formal end date for the class. Some of the courses (notably the Princeton algorithms ones) disable access to the materials once the official course ends.
Check out for a list of all current and upcoming classes. Coursera also lets you star a specific class and get notified if they are repeated in a new cycle.
You can star any Coursera class to receive notifications whenever new sessions are announced.

Also, I believe it's still possible to join the current session (first assignment was due this weekend, but you can turn it in late with just a 20% penalty.)

I take new courses at any time, even if they have ended. Later on, when they recycle, I can do them all over again with ease.
Andrew Ng's course on Machine Learning ( has rave reviews whenever I see it mentioned. It just started again a few weeks ago and I had hoped to join this time, but other commitments made that impossible.

As for the other part of your question... You may as well be asking if it's too late to research 'science'. Machine Learning may have been studied for a fair few years now, but it is still very much in its infancy. The possible developments in this area that we can't yet imagine dwarf the ones we can, which in turn dwarf the 'major contributions' so far.

Your brain is a neural network of neural networks; during sleep, a cost function is applied across the entire grid. Important aspects of your day are done and redone at high velocity, simultaneously (leading to dreams).

Cost-benefit analyses are run against what you might have done and the results of that, and actions that would have caused more desirable outcomes are projected, as best as it can see; the habits and motor neurons are reconfigured accordingly. This explains why, when you get good sleep and wake up, you find yourself much better able to do tasks than if you had not slept. If you don't sleep, you die.

Source of these points:

Title is misleading; this function also has to do with encoding short-term memories into long-term memories. Since the mind has only limited space (a limited number of neurons to configure), only the most useful memories are stored to permanent disk. Disruption of the 7-to-9-hour sleep cycle garbage-collects the memories that were about to be stored. The mind queues them up to be dealt with the following day, but sometimes they are displaced or missed by more pressing things in the present.

Sleep is one of the most important things you can do to maintain your mind and keep it in top running condition for as long as possible: not too little, not too much, and sleep in intervals of 90 minutes. If you consume garbage knowledge on a daily basis, your mind will encode that garbage to permanent disk, and you will become that garbage.

Conspiracy theorists suffer from a mental misconfiguration where the cost function applied to the neural network of neural networks suffers from "overfitting": finding patterns in randomness and drawing conclusions that are not valid. A lambda (regularization) term can be applied to the cost function which will alleviate this. I can do it in software, and when I discover the operating principles of the neocortex, I will be able to fix all the conspiracy nuts in the local nut house. Take care not to take for granted the fresh slate of your mind while you are young, because when you are old it'll be mostly full, and encoding new skills to disk will be much more difficult; the cost function is more reluctant to modify the grids, since doing so would damage your ability to consume resources, find mates, and create more of you. Fill your mind with timeless wisdom and get good sleep before your hard disks become full.

I can't say if you are entirely accurate, but you couldn't have explained that in better terms!
Conspiracy theorists suffer from a mental misconfiguration where the cost function applied to the neural network of neural networks suffers from "over fitting". Finding patterns in randomness leading to conclusions are not valid. A lambda function can be applied against the cost function which will alleviate this. I can do it in software, and when I discover the operating principles of the neo cortex, I will be able to fix all the conspiracy nuts in the local nut house.

Wouldn't this work the other way, too? Not finding patterns in what turns out to not be randomness sometimes ends up getting you killed. It's a fine line between paranoia and attention to detail. Anyone with aspirations to "fix" this should probably take that into consideration.

Correct. The opposite of overfitting is underfitting: knowing that whenever you talk to Joe you get punched, and you have 10 training examples, but deciding this time is different because he's wearing his brown shirt, so it's probably safe now.

It's not finding the signal in the noise: Joe hitting me 10 times in a row is not conclusive, because most humans never hit me, and Joe is wearing new clothing, so it's safe, because Joe is a human.

"If you don't sleep, you die."

No human has ever died simply from not sleeping (excluding accidents, etc., caused by lack of sleep).

See also
Dec 26, 2012 · ajdecon on Why use SVM
Andrew Ng, who teaches the CS 229 Machine Learning course at Stanford, has his lecture notes online: . I have found these useful in the past.

He also teaches the Coursera machine learning course:

Cool stuff... Andrew Ng's excellent Machine Learning course on Coursera also had a programming exercise that involved using k-means to reduce an image's palette (in Octave, though, rather than Python), so if you find this interesting you might consider signing up for his course the next time it comes around:
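For anyone curious what that exercise looks like, palette reduction is just k-means run over pixel colors. A rough NumPy sketch of the idea (the actual assignment is in Octave, and this toy version uses random data in place of a real image):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image": 500 RGB pixels with channel values in [0, 1].
pixels = rng.random((500, 3))
k = 8   # size of the reduced palette

# Initialize centroids on randomly chosen pixels, then iterate assign/update.
centroids = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
for _ in range(20):
    # Assignment step: each pixel goes to its nearest centroid.
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: move each centroid to the mean of its assigned pixels.
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = pixels[labels == j].mean(axis=0)

# The palette-reduced image: every pixel replaced by its centroid's color.
reduced = centroids[labels]
print(reduced.shape, len(np.unique(labels)))   # at most k distinct colors remain
```

On a real image you would reshape the height-by-width-by-3 array to (n_pixels, 3) first, run the same loop, and reshape `reduced` back.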

You can use the same techniques used in linear regression (with multiple features) to do polynomial regression. For example, suppose you have two features x1 and x2; you can add higher-order features like x1*x2, x1^2, or x2^2, or a combination of these. While doing linear regression you treat these terms as individual features, so x1*x2 becomes a feature, say x3. This way you can fit non-linear data with a non-linear curve. However, there is the problem of overfitting, i.e., your curve may try to be too greedy and fit the training data perfectly, but that's not what you want. So regularization is used to lower the contributions of the higher-order terms.

Wikipedia has an article on Polynomial Regression:

P.S. I'm taking this course, so my knowledge may not be entirely correct; take everything I've said with a pinch of salt. :)
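A minimal sketch of that idea with hypothetical data: powers of x become extra features, and a ridge-style penalty keeps the higher-order terms in check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a quadratic: y = 1 + 2*x + 3*x^2.
x = rng.uniform(-1, 1, 200)
y = 1 + 2 * x + 3 * x**2 + 0.1 * rng.standard_normal(200)

# Treat each power of x as its own feature, as described above.
X = np.column_stack([x**p for p in range(6)])   # degree-5 polynomial features

# Regularized (ridge-style) normal equations: (X'X + lam*I) theta = X'y.
lam = 1e-3
I = np.eye(X.shape[1])
I[0, 0] = 0.0   # by convention, don't penalize the intercept term
theta = np.linalg.solve(X.T @ X + lam * I, X.T @ y)

print(np.round(theta, 2))   # low-order coefficients land near 1, 2, 3
```

Even though the model has degree-5 capacity, the penalty keeps the spurious high-order coefficients small instead of letting them chase the noise; crank `lam` up and the curve flattens, set it to zero and you are back to plain least squares.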

The class is starting again in three weeks:

I'm pretty much in the same boat as you. Looking forwards to it!

I am doing the course from Stanford by Andrew Ng and I definitely recommend it.

I'm really excited by all of this free university level material flooding the web as I never even started college due to financial concerns (aka I didn't want to get any loans).

Don't miss out on the original (i.e., pre-Coursera) Andrew Ng lectures, starting here: These are also mathematically more rigorous.
I also took the course, and I definitely recommend it.
Do you know whether Prof. Ng has updated the material since the first run of the class?

We are still in the honeymoon phase of free, online university courses, so I think there's been relatively little criticism of what's available now, but I'll go for it: I was disappointed by the Coursera/Stanford ML class. It was obviously watered down, the homeworks were very (very) easy, and I retained little or nothing of significance.

In contrast, the Caltech class was clearly not watered down, and, as the material was much more focused (with a strong theme of generalization, an idea almost entirely absent from the Stanford class, as I recall) I feel I learned far more.

Another big difference: the Caltech class had traditional hour-long lectures, a simple web form for submitting answers to the multiple-choice* homeworks, and a plain vBulletin forum. The lectures were live on ustream, but otherwise, no fancy infrastructure.

So I think that some interesting questions will come up. Do we need complex (new) platforms to deliver good classes? For me, the answer right now is no -- what clearly matters is the quality and thoughtfulness of the material and how well it is delivered. Can a topic like machine learning be taught effectively to someone who doesn't have a lot of time, or who doesn't have the appropriate background (in CS, math)? Can/should it be faked? I don't think so, but I think there are certainly nuances here.

* Despite being multiple-choice, the homeworks were not easy -- they typically required a lot of thought, and many required writing a lot of code from scratch.

Somewhat of a side topic, but I just finished the Coursera compilers class. It didn't seem watered down to me, covering regular expressions (including NFA and DFA representations), parsing theory and various top-down and bottom-up parsing algorithms, semantic checking (including a formal semantics notation), code generation (with formal operational notation), local and global optimization, register allocation and garbage collection.

I guess it was partially watered down in that the programming part of the class was optional.

The Coursera ML class is nowhere near the Stanford-level class in terms of academic rigor.

That being said, several of my peers who didn't go to the school really appreciated it for its accessibility.

I think the expectation of that class is to render ML education accessible and palatable, not to train everyone at an elite level. As this field grows, I'm sure the needs of various parties would be filled to an extent.

I don't think he has. In hindsight, I guess it does seem watered down - but personally, that is ok, I enjoy the pace / difficulty level right now.

However, I'm glad you pointed it out, because I'm eager to learn about ML & hope to use this (CalTech) material to augment the foundation I get from the coursera class.

I think the courses are great for getting an idea of what a subject is about. If you face a related problem, at least you will know whether it can be efficiently solved. It will allow you to speak to an expert in the field at a basic level, at least. That said, they certainly can be greatly improved.
One of the conscious aims of the undergraduate coursera classes has been to lower the bar (in terms of assumed prerequisites, pace, and scope) in order to increase participation.

Daphne Koller's Probabilistic Graphical Models was their first graduate class and it was definitely tougher than other Coursera offerings have been.

This. The Coursera PGM class is the only free online class I've enrolled in that felt similar in difficulty to a slightly-harder-than-average undergrad course at Caltech (where I go to school).

Have you seen the online courses? (From one of the authors of this paper!)

Prof. Hinton's videos are very watchable:

Yes, often. Thanks for the links.
If you like math, Caltech's "Learning from Data" is awesome
Why is this the top link on HN? There are already numerous courses available that will allow you to learn this stuff for free from very highly ranked universities, including Stanford [1] and CMU [2], among others. This will just teach you similar things while also taking your money and giving you a "certificate".

I guess if you want to enter a new field and you need to have some certifiable expertise, this may be a good option. That being said, if the field you plan on entering really does require some documented education, having this certificate will not even put you in the same playing field as those with actual degrees in the field, not to mention those with advanced degrees.

BigDataUniversity also has free courses and covers Hadoop and some other topics.

The site appears to push sponsor products and doesn't really discuss alternatives. Expect to see lots of endorsements of IBM products.

For me, I'm wondering if I'm learning the right material. If I self teach, how do I really know if I'm doing this right unless I get feedback? Also, many hiring managers might not recognize self-taught expertise. I might not want to work for those managers, but unfortunately, this includes a very large swath of potential jobs.

Of course, I wonder if this certificate gets me anywhere as far as employment goes.

> Why is this the top link on HN?

Because this is the response of the "university system" trying to protect its cash flow. Perhaps you missed it, but Udacity recently said it will offer some certification program for "minimal costs", which was the beginning of the conversation. Education should be free (Khan style), says Udacity.

The University establishment's response is "here we will give you certificates, but you have to give us three grand."

Money for certificates, knowledge for free.

I would think this would be more appropriate to add to an existing skill set in another field.
Yes, there are some good options available online. However, data science, and this certificate program, cover more than just machine learning.
The Coursera courses are excellent, but Coursera, Udacity, CMU, etc. are offering a different set of courses than the UW program. For instance, I don't think any of the current players are offering Hadoop courses... In general, it looks like the UW program is more technology-specific and applied than the other programs.

Personally, I'd prefer the less technology-specific topics already on offer. But, my employer would be much more likely to hire someone with UW's course-mix. So, there should be some demand for that.

And, if we are talking about a career decision, $3,000 is small potatoes compared to the value of getting the right topics.

Apr 23, 2012 · 2 points, 0 comments · submitted by coconutrandom
Apr 23, 2012 · 7 points, 0 comments · submitted by fooyc
HN Academy is an independent project and is not operated by Y Combinator, Coursera, edX, or any of the universities and other institutions providing courses.