Reinforcement Learning (RL)! This repository is your hands-on guide to implementing RL algorithms, from Markov Decision Processes (MDPs) to advanced methods like PPO and DDPG. Build smart agents, learn the math behind policies, and experiment with real-world applications!

Reinforcement Learning: Zero to Hero

This repository was originally planned to include theory for every topic, but I don't have enough time to document everything. For now, only the essential and most popular algorithms will be covered.

This reinforcement learning series is inspired by the Google DeepMind x UCL Reinforcement Learning YouTube lecture series.
Everything in this repository is structured around the content taught by Research Scientists Hado van Hasselt and Diana Borsa, and Research Engineer Matteo Hessel.


About This Repository

This repository is built for:

  • Heavy Theory (even at research level)
  • Core deep dive into algorithms
  • Algorithm implementations
  • Experiments
  • Reference code

This repository is NOT meant to be a theory book.

All theory, the learning roadmap, deep explanations, intuition, and structured lectures will be published on Substack.


Important Note for Visitors

If you are following this repository, do not start learning only from code.

The actual learning path, including code explanations, will live on Substack. This GitHub repository exists only to support that learning through implementations.

Once Substack posts are available:

  • Read the theory on Substack first, where the code is explained alongside it in simple English
  • Then return to GitHub for the code and experiments (full implementations of the algorithms, models, and setup)

Learning happens on Substack.
Full-scale implementation happens on GitHub.


How to Read This Repo

Each topic in this repo contains two folders:

  • theory
  • algorithms

Folder usage:

  • The theory folder will contain links to Substack articles (when published).
  • The algorithms folder contains Python implementations (a small illustrative sketch follows below).
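
To give a flavor of what such implementations look like, here is a minimal sketch of tabular Q-learning on a toy chain MDP. It is not taken from this repository; the environment, rewards, and hyperparameters are invented for this example.

```python
# Illustrative sketch only: tabular Q-learning on a tiny chain MDP.
# The environment, rewards, and hyperparameters are invented for this
# example and are not taken from this repository.
import random

N_STATES = 5          # states 0..4; state 4 is terminal and pays +1
ACTIONS = (-1, +1)    # step left or right along the chain
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Move along the chain; the walk is clamped at the left edge."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy
        action = random.choice(ACTIONS) if random.random() < EPS else greedy(state)
        next_state, reward, done = step(state, action)
        # Q-learning target bootstraps from the greedy value of the next state
        target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

print({s: round(max(Q[(s, a)] for a in ACTIONS), 3) for s in range(N_STATES)})
```

The same basic pattern (an environment step function, an exploratory behaviour policy, and a bootstrapped value update) is the starting point for more advanced methods such as DQN, PPO, and DDPG, which replace the table with a neural network.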

Substack is used because it allows:

  • Visual explanations
  • Code walkthroughs
  • Mathematical clarity
  • Better structured learning than GitHub Markdown

GitHub is used for:

  • Code
  • Experiments
  • Reproducible implementations

Status

This repository is currently in progress.

Content, structure, and implementations will be added continuously.


Vision

This project is designed as:

Learn on Substack (research-level deep dives, with visuals and code explained in simple English).
Practice on GitHub.

If you are serious about mastering Reinforcement Learning and Deep Reinforcement Learning, follow both together.


Who This Repository Is For

This repository is designed for everyone, including:

  • High school students starting Reinforcement Learning
  • College students learning AI / ML
  • Data Scientists
  • Machine Learning Engineers
  • Software Developers
  • Research Engineers
  • AI Researchers
  • Self-taught learners
  • Anyone transitioning into AI / Deep Learning

The content is structured so that:

  • Beginners can build foundations step-by-step
  • Intermediate learners can strengthen understanding
  • Advanced learners can implement research-level algorithms
  • Researchers can experiment with modern methods

No matter your background, if you are serious about learning Reinforcement Learning, this repository is built to support you.
