Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this course.
⬐ camlinke
A number of us from the lab have been helping to put this course together. It's led by Martha White and Adam White, two awesome RL profs at the U of A (Martha now leads RLAI), and is based very heavily on Rich's textbook. The goal is to provide a really strong foundation for those looking to dive deeper into reinforcement learning. It starts with bandits and works all the way up through function approximation, control, policy gradients, and deep RL.
If you have any questions feel free to ask and I'll do my best to answer.
⬐ billconan
In a previous discussion regarding RL learning materials, https://news.ycombinator.com/item?id=20294453
> there's still no great resource to learn RL "from scratch" - there's still a huge gap between Sutton&Barto and implementing DDPG. You have to figure out everything by reading existing implementations, various Medium posts (a lot of them containing errors and imprecisions), and research papers. I wouldn't consider Spinning Up as a beginner-friendly resource, it's too dense/math-heavy. The closest I have found so far is the Udacity course: https://eu.udacity.com/course/deep-reinforcement-learning-na.... which costs $1000
I too think OpenAI's Spinning Up isn't beginner-friendly. But I also don't want to just learn bandits and tic-tac-toe. Will this course fill the gap?
⬐ camlinke
Agreed! A lot of material out there is like the "how to draw an owl" meme: https://imgur.com/gallery/RadSf - start with bandits and now do DDPG.
The goal is for this course to provide the foundations for whatever folks want to do in RL afterwards. It starts with bandits but then covers things like TD, Sarsa, Dyna, etc. in the tabular setting. Then folks learn about more advanced topics like linear and non-linear function approximation (linear, e.g. tile coding; non-linear, e.g. neural nets/deep RL).
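For readers wondering what the bandit starting point actually looks like: a minimal sketch of the kind of sample-average epsilon-greedy agent Sutton & Barto's book opens with (function name and parameters are my own, not from the course):

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a k-armed Gaussian bandit.

    Uses incremental sample-average action-value estimates, as in the
    early chapters of Sutton & Barto's textbook.
    """
    rng = random.Random(seed)
    k = len(true_means)
    q = [0.0] * k  # action-value estimates
    n = [0] * k    # how many times each arm has been pulled
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                    # explore: random arm
        else:
            a = max(range(k), key=lambda i: q[i])   # exploit: greedy arm
        reward = rng.gauss(true_means[a], 1.0)      # noisy reward from arm a
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]              # incremental average update

    return q, n

q, n = epsilon_greedy_bandit([0.2, 0.8, 0.5])
```

After a few thousand steps the agent's estimate for each arm converges toward its true mean, and the best arm (mean 0.8 here) gets pulled far more often than the others - which is the intuition the course then builds on for TD and Sarsa.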
This very much follows the intro RL course taught by Martha/Adam/Rich at the U of A, and tracks Rich's textbook really closely.