Homework and lab assignments from the Reinforcement Learning course of the MSc in Artificial Intelligence at the University of Amsterdam. Joint work with Gabriele Bani.
Both homeworks and labs are heavily based on Reinforcement Learning: An Introduction, 2nd Edition by Richard S. Sutton and Andrew G. Barto and A Survey on Policy Search for Robotics by Marc Peter Deisenroth, Gerhard Neumann and Jan Peters.
Main topics: Bandits, Dynamic Programming, Monte Carlo methods, TD-methods (TD, SARSA, Q-Learning), Model-based Learning (Dyna, Monte Carlo Tree Search).
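Among the topics above, Q-learning is the classic off-policy TD-control method. As a minimal illustration, here is a hedged sketch of tabular Q-learning on a toy 5-state chain MDP; the environment, hyperparameters, and episode count are all hypothetical choices for this example, not part of the course assignments:

```python
import random

# Toy 5-state chain: states 0..4, action 0 = left, 1 = right;
# reward 1.0 only on reaching the terminal state 4. (Hypothetical env.)
N_STATES, GAMMA, ALPHA, EPS = 5, 0.9, 0.5, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)
for _ in range(500):
    s = 0
    while True:
        # epsilon-greedy behavior policy
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap off the greedy action in s2
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
        if done:
            break

# The greedy policy learned here moves right from every non-terminal state.
```

SARSA would differ only in the target, bootstrapping off the action actually taken next rather than the greedy one.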
Main topics: Value-function approximation (Semi-gradient methods, Linear value-function approximation, Neural Networks); Policy Gradient methods (REINFORCE, Compatible Function Approximation Theorem, Natural Gradient).
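REINFORCE, listed above, updates policy parameters along the score function weighted by the return. As a hedged sketch, here is REINFORCE with a softmax policy on a hypothetical two-armed bandit (one-step episodes, so the return is just the immediate reward); none of these names or constants come from the assignments:

```python
import math
import random

random.seed(0)
ALPHA = 0.1
REWARDS = [0.2, 0.8]   # expected reward of each arm (hypothetical)
theta = [0.0, 0.0]     # one logit per arm

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    a = 0 if random.random() < probs[0] else 1
    r = 1.0 if random.random() < REWARDS[a] else 0.0
    # REINFORCE: theta += alpha * r * grad log pi(a);
    # for a softmax policy, grad_i log pi(a) = 1[i == a] - probs[i]
    for i in range(2):
        theta[i] += ALPHA * r * ((1.0 if i == a else 0.0) - probs[i])

# The policy concentrates probability on the better arm (arm 1).
```

Subtracting a baseline from `r` (as covered in the Compatible Function Approximation material) would reduce the variance of these updates without changing their expectation.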
Problem statement and Solution
Figure: Performance of Every-visit Monte Carlo on Easy21
Figure: Performance of Sarsa and Q-learning on Easy21
Problem statement and Solution
Figure: Performance of A2C (left), DQN (middle) and REINFORCE (right) on CartPole
To open an assignment, run Jupyter on the desired notebook with the following command, substituting the notebook's name for #notebook#:
jupyter notebook #notebook#.ipynb
Copyright © 2018 Andrii Skliar.
This project is distributed under the MIT license. It was developed as part of the Reinforcement Learning course taught by Herke van Hoof at the University of Amsterdam. If you are a student, please follow the UvA regulations governing Fraud and Plagiarism.