araffin/rlss23-dqn-tutorial


Reinforcement Learning Summer School 2023/2026 - DQN Tutorial

From Tabular Q-Learning to DQN

Blog post: https://araffin.github.io/post/rl102/

Website: https://rlsummerschool.com/

Slides: https://araffin.github.io/slides/dqn-tutorial/

Stable-Baselines3 repo: https://github.com/DLR-RM/stable-baselines3

RL Virtual School 2021: https://github.com/araffin/rl-handson-rlvs21

RL Summer School 2023: https://rlsummerschool.com/2023/

RL Summer School 2026: https://2026.rlsummerschool.com/

Content

  1. Fitted Q-Iteration (FQI) Colab Notebook
  2. Deep Q-Network (DQN) Part I: DQN Components: Replay Buffer, Q-Network, ... Colab Notebook
  3. Deep Q-Network (DQN) Part II: DQN Update and Training Loop Colab Notebook
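As a taste of the DQN components covered in Part I, here is a minimal replay buffer sketch in NumPy. The class and attribute names are illustrative assumptions for this README, not the actual API of the dqn_tutorial package or the notebooks:

```python
import numpy as np


class ReplayBuffer:
    """Minimal FIFO replay buffer for discrete-action DQN (illustrative sketch)."""

    def __init__(self, capacity: int, obs_dim: int):
        self.capacity = capacity
        self.pos = 0  # next write index
        self.full = False  # becomes True once the buffer has wrapped around
        self.observations = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.next_observations = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.actions = np.zeros(capacity, dtype=np.int64)
        self.rewards = np.zeros(capacity, dtype=np.float32)
        self.dones = np.zeros(capacity, dtype=np.float32)

    def add(self, obs, action, reward, next_obs, done) -> None:
        """Store one transition, overwriting the oldest entry when full."""
        self.observations[self.pos] = obs
        self.actions[self.pos] = action
        self.rewards[self.pos] = reward
        self.next_observations[self.pos] = next_obs
        self.dones[self.pos] = float(done)
        self.pos = (self.pos + 1) % self.capacity
        if self.pos == 0:
            self.full = True

    def sample(self, batch_size: int):
        """Sample a random batch of transitions (with replacement)."""
        upper_bound = self.capacity if self.full else self.pos
        indices = np.random.randint(0, upper_bound, size=batch_size)
        return (
            self.observations[indices],
            self.actions[indices],
            self.rewards[indices],
            self.next_observations[indices],
            self.dones[indices],
        )
```

Sampled batches are what the Q-network update in Part II consumes; Stable-Baselines3 ships a more complete implementation of the same idea.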

Run Locally (instead of using Google Colab)

  1. Install uv
  2. Run: uv run jupyter lab notebooks

Solutions

Solutions can be found in the notebooks/solutions/ folder. The code in the dqn_tutorial package can also be used to bypass some exercises.

About

Deep Q-Network (DQN) and Fitted Q-Iteration (FQI) tutorial for RL Summer School 2023 and 2026
