
Commit 748b673: Add readme (1 parent 5d38241)


sota-implementations/a3c/README.md

Lines changed: 21 additions & 0 deletions
# Reproducing Asynchronous Advantage Actor Critic (A3C) Algorithm Results
This repository contains scripts for training agents with the Asynchronous Advantage Actor Critic (A3C) algorithm on Atari environments. We follow the original paper, [Asynchronous Methods for Deep Reinforcement Learning](https://arxiv.org/abs/1602.01783) by Mnih et al. (2016), implementing A3C with a fixed number of steps per collection phase.
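As an illustration of the fixed-step collection phase mentioned above, the sketch below shows how discounted returns are typically computed after a worker collects at most `n` transitions and bootstraps from the value of the last state. This is a minimal, self-contained toy (not the repo's code); the function name and signature are illustrative assumptions.

```python
# Minimal sketch (illustration only, not this repo's implementation) of the
# fixed-step collection pattern A3C uses: a worker rolls out a bounded number
# of steps, then bootstraps the return from the value estimate of the final
# state instead of waiting for the episode to end.
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted returns for one fixed-length collection phase.

    rewards: rewards collected during the phase, in time order.
    bootstrap_value: value estimate of the state reached after the last step.
    """
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R  # standard discounted backup toward earlier steps
        returns.append(R)
    returns.reverse()  # restore time order
    return returns

# Toy example: 3 collected rewards, bootstrap value 0.0, gamma 0.5.
print(n_step_returns([1.0, 0.0, 1.0], 0.0, gamma=0.5))  # prints [1.25, 0.5, 1.0]
```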
## Examples Structure
Please note that, for the sake of simplicity, each example is independent of the others. Each example contains the following files:
1. **Main Script:** The definition of algorithm components and the training loop can be found in the main script (e.g. `a3c_atari.py`).
2. **Utils File:** A utility file contains various helper functions, mainly for creating the environment and the models (e.g. `utils_atari.py`).
3. **Configuration File:** This file includes default hyperparameters specified in the original paper. Users can modify these hyperparameters to customize their experiments (e.g. `config_atari.yaml`).
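The configuration file described above holds the paper's default hyperparameters, which users override per experiment. A minimal sketch of that pattern follows; the key names (`lr`, `gamma`, `n_steps`) are illustrative assumptions, not the repo's actual schema.

```python
# Hypothetical sketch of the configuration pattern: paper defaults
# (cf. config_atari.yaml) merged with user overrides. Key names are
# illustrative, not this repo's actual config schema.
defaults = {"lr": 1e-4, "gamma": 0.99, "n_steps": 5}

def load_config(overrides=None):
    """Return the default hyperparameters, with any user overrides applied."""
    cfg = dict(defaults)          # start from the paper's defaults
    cfg.update(overrides or {})   # user-specified values take precedence
    return cfg

cfg = load_config({"lr": 7e-4})
print(cfg["lr"], cfg["gamma"])  # prints: 0.0007 0.99
```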
## Running the Examples
You can execute the A3C algorithm on Atari environments by running the following command:
```bash
python a3c_atari.py
```
