
Commit 0abb10a

Version 0.9.6: Notebooks / Model Performance Evaluation (#10)
* Added:
  - Interpreter (Cleaner) with cleaner code, closer to fastai
  - To and From Pickle
  - to_csv
  Notes: I tried doing a from_csv implementation; however, it might not be possible without system-level support. Not sure when I will ever get to this. I have some ideas about saving images / states as files with file paths... Maybe to_csv generates a file system also?
* Added:
  - Group Interpreter for combining model runs
  - Initial fixed DQN notebook (sort of)
  Fixed:
  - Recorder callback ordering / renaming. It seems that fastai has some cool in-notebook test widgets that we might want to use in the future.
* Added:
  - Group Interpreter merging
  - DQN base notebook
  - Interpreters now close envs by default
  Fixed:
  - Env closing <- might be a continuing issue due to different physics engines
* Fixed:
  - setup.py: fastai needs to be at least 1.0.59
* Fixed:
  - CPU / device issues
* Added:
  - DQN group results
  - Reward metric
  Notes: I am realizing that we need reward-sum smoothing. The graphs are way too messy.
* Added:
  - Analysis property on the group interpretation
* Fixed:
  - PER crashing due to containing 0 items
* Added:
  - Group Interpretation value smoothing
* Fixed:
  - Value smoothing making the reward values way too big
  - Tests taking too long: if the input is an image, just do a shorter fit cycle
  - PER batch size not updating
  - CUDA issues
  - Bounds: n_possible_values is only calculated when used, which should make iteration faster
  Added:
  - Smoothing for the scalar plotting
* More test fixing
* Fixed:
  - CUDA issues
* Added:
  - Lunar Lander performance test
* Added:
  - minigrid compat
  - Normalization module for DQNs using the Bounds object
* Fixed:
  - Normalizing CUDA error
* Fixed:
  - DDPG CUDA error
* Fixed:
  - pybullet human rendering. pybullet renders differently from regular OpenAI envs: if you want to see what is happening, the render method needs to be executed prior to reset.
  Added:
  - DDPG testing
  - DDPG env runs
  - More results, more DDPG tests
  - walker2d data
* Fixed:
  - pybullet envs possibly crashing. There was an issue where the pybullet wrapper was not being added :(
* Version 0.9.5 mass refactor (#12)
  * Added:
    - Refactored DQN code
    - Basic DQN learner
    Fixed:
    - DQN model crashing
  * Added:
    - All DQNs pass tests
  * Fixed:
    - Some DQN / gym_maze / embedding related crashes
    - DQN test code and actual DQN tests
  * Added:
    - Maze heat map interpreter
    - Started Q value interpreter
  * Fixed:
    - DDPG GPU issue: sampling / action and state objects now support to-device calls
    - DQN GPU issue
    - Azure pipeline test
  * Updated:
    - Jupyter notebooks
  * Removed:
    - Old code files
  * Fixed:
    - Metrics, DDPG tests
  * Added:
    - Basic Q value plotting, for both DQN and DDPG
  * Updated version
  * Changed:
    - setup.py excludes some third-party packages due to a PyPI restriction. Need to find a way around this.
  * Removed:
    - Old code from README. Revisions coming.
  * Added:
    - Batch norm toggling. For now / forever defaulted to False.
* Version 0.9.5 mass refactor (#13)
  * Added:
    - Revised test script; slowly adding tests
  * Fixed:
    - The trained_learner method in the tests was somehow completely broken
  * Added:
    - Interpreter edge control; can also show an average line
  * Fixed:
    - Models performing badly. Apparently, batch norm really hurts them: if you use batch norm, the batch size needs to be massive (128 wasn't large enough).
    By default, you can mostly turn off batch_norm in the Tabular models, but when given a continuous input they still apply an entry batch norm. I overrode it, and now they work significantly better :)
  * Updated:
    - .gitignore
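A note on the smoothing items above: the group interpreters plot per-episode reward sums, which are noisy, and one fix explicitly addresses smoothed values coming out "way too big". Below is a minimal sketch of the usual remedy, an exponential moving average, which stays on the scale of the raw values instead of inflating them; the function name and default weight are illustrative, not this repo's actual API:

```python
import numpy as np

def smooth_rewards(rewards, weight=0.9):
    """Exponential moving average over per-episode reward sums.

    Unlike a windowed *sum*, an EMA stays on the same scale as the raw
    values, avoiding the "reward values way too big" failure mode.
    """
    smoothed, last = [], rewards[0]
    for r in rewards:
        last = weight * last + (1 - weight) * r  # blend old estimate with new reward
        smoothed.append(last)
    return np.asarray(smoothed)
```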
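The pybullet rendering note is easy to trip over, so here it is concretely. A minimal sketch, assuming pybullet-gym (or `pybullet_envs`) is installed; the environment id is just an example:

```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers the Bullet env ids

env = gym.make('Walker2DBulletEnv-v0')  # example env id
env.render(mode='human')  # pybullet: call render BEFORE reset to open the GUI
state = env.reset()       # a regular OpenAI gym env would render after reset
```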
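On the final batch norm fix: fastai v1's `TabularModel` accepts `use_bn=False` for its hidden layers, but continuous inputs still pass through an entry `BatchNorm1d` stored as `bn_cont`. A minimal sketch of one way to neutralize it after construction; whether this matches the exact override used in the commit is an assumption:

```python
import torch.nn as nn
from fastai.tabular.models import TabularModel

# use_bn=False only disables batch norm between the hidden layers...
model = TabularModel(emb_szs=[], n_cont=4, out_sz=2, layers=[64, 64], use_bn=False)
# ...continuous inputs are still normalized by model.bn_cont, so swap it out:
model.bn_cont = nn.Identity()  # nn.Identity requires torch >= 1.1
```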
1 parent 6364d54 commit 0abb10a

File tree: 83 files changed (+4726 / -2385 lines)


.gitignore

Lines changed: 7 additions & 2 deletions
````diff
@@ -6,7 +6,12 @@ gen
 .gitignore
 
 # Jupyter Notebook
-/fast_rl/notebooks/.ipynb_checkpoints/
+*/.ipynb_checkpoints/*
 
 # Data Files
-/docs_src/data/*
+#/docs_src/data/*
+
+# Build Files / Directories
+build/*
+dist/*
+fast_rl.egg-info/*
````

README.md

Lines changed: 17 additions & 133 deletions
````diff
@@ -20,9 +20,6 @@ However there are also frameworks in PyTorch most notably Facebook's Horizon:
 - [Horizon](https://github.com/facebookresearch/Horizon)
 - [DeepRL](https://github.com/ShangtongZhang/DeepRL)
 
-Our motivation is that existing frameworks commonly use tensorflow, which nothing against tensorflow, but we have
-accomplished more in shorter periods of time using PyTorch.
-
 Fastai for computer vision and tabular learning has been amazing. One would wish that this would be the same for RL.
 The purpose of this repo is to have a framework that is as easy as possible to start, but also designed for testing
 new agents.
@@ -72,141 +69,28 @@ working at their best. Post 1.0.0 will be more formal feature development with C
 **Critical**
 Testable code:
 ```python
-from fast_rl.agents.DQN import DQN
-from fast_rl.core.basic_train import AgentLearner
-from fast_rl.core.MarkovDecisionProcess import MDPDataBunch
-
-data = MDPDataBunch.from_env('maze-random-5x5-v0', render='human')
-model = DQN(data)
-learn = AgentLearner(data, model)
-learn.fit(450)
-```
-Result:
-
-| ![](res/pre_interpretation_maze_dqn.gif) |
-|:---:|
-| *Fig 1: We are now able to train an agent using some Fastai API* |
-
-
-We believe that the agent explodes after the first episode. Not to worry! We will make a RL interpreter to see whats
-going on!
-
-- [X] 0.2.0 AgentInterpretation: First method will be heatmapping the image / state space of the
-environment with the expected rewards for super important debugging. In the code above, we are testing with a maze for a
-good reason. Heatmapping rewards over a maze is pretty easy as opposed to other environments.
-
-Usage example:
-```python
-from fast_rl.agents.DQN import DQN
-from fast_rl.core.Interpreter import AgentInterpretationAlpha
-from fast_rl.core.basic_train import AgentLearner
-from fast_rl.core.MarkovDecisionProcess import MDPDataBunch
-
-data = MDPDataBunch.from_env('maze-random-5x5-v0', render='human')
-model = DQN(data)
-learn = AgentLearner(data, model)
-learn.fit(10)
-
-# Note that the Interpretation is broken, will be fixed with documentation in 0.9
-interp = AgentInterpretationAlpha(learn)
-interp.plot_heatmapped_episode(5)
-```
-
-| ![](res/heat_map_1.png) |
-|:---:|
-| *Fig 2: Cumulative rewards calculated over states during episode 0* |
-| ![](res/heat_map_2.png) |
-| *Fig 3: Episode 7* |
-| ![](res/heat_map_3.png) |
-| *Fig 4: Unimportant parts are excluded via reward penalization* |
-| ![](res/heat_map_4.png) |
-| *Fig 5: Finally, state space is fully explored, and the highest rewards are near the goal state* |
-
-If we change:
-```python
-interp = AgentInterpretationAlpha(learn)
-interp.plot_heatmapped_episode(epoch)
-```
-to:
-```python
-interp = AgentInterpretationAlpha(learn)
-interp.plot_episode(epoch)
-```
-We can get the following plots for specific episodes:
-
-| ![](res/reward_plot_1.png) |
-|:----:|
-| *Fig 6: Rewards estimated by the agent during episode 0* |
-
-As determined by our AgentInterpretation object, we need to either debug or improve our agent.
-We will do this in parallel with creating our Learner fit function.
-
-- [X] 0.3.0 Add DQNs: DQN, Dueling DQN, Double DQN, Fixed Target DQN, DDDQN.
-- [X] 0.4.0 Learner Basic: We need to convert this into a suitable object. Will be similar to the basic fasai learner
-hopefully. Possibly as add prioritize replay?
-- Added PER.
-- [X] 0.5.0 DDPG Agent: We need to have at least one agent able to perform continuous environment execution. As a note, we
-could give discrete agents the ability to operate in a continuous domain via binning.
-- [X] 0.5.0 DDPG added. let us move
-- [X] 0.5.0 The DDPG paper contains a visualization for Q learning might prove useful. Add to interpreter.
-
-| ![](res/ddpg_balancing.gif) |
-|:----:|
-| *Fig 7: DDPG trains stably now..* |
-
-
-Added q value interpretation per explanation by Lillicrap et al., 2016. Currently both models (DQN and DDPG) have
-unstable q value approximations. Below is an example from DQN.
-```python
-interp = AgentInterpretationAlpha(learn, ds_type=DatasetType.Train)
-interp.plot_q_density(epoch)
-```
-Can be referenced in `fast_rl/tests/test_interpretation` for usage. A good agent will have mostly a diagonal line,
-a failing one will look globular or horizontal.
-
-| ![](res/dqn_q_estimate_1.jpg) |
-|:----:|
-| *Fig 8: Initial Q Value Estimate. Seems globular which is expected for an initial model.* |
-
-| ![](res/dqn_q_estimate_2.jpg) |
-|:----:|
-| *Fig 9: Seems like the DQN is not learning...* |
-
-| ![](res/dqn_q_estimate_3.jpg) |
-|:----:|
-| *Fig 10: Alarming later epoch results. It seems that the DQN converges to predicting a single Q value.* |
-
-- [X] 0.6.0 Single Global fit function like Fastai's. Think about the missing batch step. Noted some of the changes to
-the existing the Fastai
-
-| ![](res/fit_func_out.jpg) |
-|:----:|
-| *Fig 11: Resulting output of a typical fit function using ref code below.* |
-
-```python
-from fast_rl.agents.DQN import DuelingDQN
-from fast_rl.core.Learner import AgentLearner
-from fast_rl.core.MarkovDecisionProcess import MDPDataBunch
-
-
-data = MDPDataBunch.from_env('maze-random-5x5-v0', render='human', max_steps=1000)
-model = DuelingDQN(data)
-# model = DQN(data)
-learn = AgentLearner(data, model)
-
-learn.fit(5)
+from fast_rl.agents.dqn import *
+from fast_rl.agents.dqn_models import *
+from fast_rl.core.agent_core import ExperienceReplay, GreedyEpsilon
+from fast_rl.core.data_block import MDPDataBunch
+from fast_rl.core.metrics import *
+
+data = MDPDataBunch.from_env('CartPole-v1', render='rgb_array', bs=32, add_valid=False)
+model = create_dqn_model(data, FixedTargetDQNModule, opt=torch.optim.RMSprop, lr=0.00025)
+memory = ExperienceReplay(memory_size=1000, reduce_ram=True)
+exploration_method = GreedyEpsilon(epsilon_start=1, epsilon_end=0.1, decay=0.001)
+learner = dqn_learner(data=data, model=model, memory=memory, exploration_method=exploration_method)
+learner.fit(10)
 ```
 
-reset commit
-
 - [X] 0.7.0 Full test suite using multi-processing. Connect to CI.
 - [X] 0.8.0 Comprehensive model eval **debug/verify**. Each model should succeed at at least a few known environments. Also, massive refactoring will be needed.
-- [ ] **Working on** 0.9.0 Notebook demonstrations of basic model usage.
-- [ ] **1.0.0** Base version is completed with working model visualizations proving performance / expected failure. At
+- [X] 0.9.0 Notebook demonstrations of basic model usage.
+- [ ] **Working on** **1.0.0** Base version is completed with working model visualizations proving performance / expected failure. At
 this point, all models should have guaranteed environments they should succeed in.
-- [ ] 1.2.0 Add PyBullet Fetch Environments
-- [ ] 1.2.0 Not part of this repo, however the envs need to subclass the OpenAI `gym.GoalEnv`
-- [ ] 1.2.0 Add HER
+- [ ] 1.8.0 Add PyBullet Fetch Environments
+- [ ] 1.8.0 Not part of this repo, however the envs need to subclass the OpenAI `gym.GoalEnv`
+- [ ] 1.8.0 Add HER
 
 
 ## Code
````
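For convenience, here is the new README quickstart from the diff above, extracted into a clean runnable block. The only addition is the explicit `import torch` (the snippet references `torch.optim.RMSprop`), on the assumption that the star imports do not re-export it:

```python
import torch

from fast_rl.agents.dqn import *
from fast_rl.agents.dqn_models import *
from fast_rl.core.agent_core import ExperienceReplay, GreedyEpsilon
from fast_rl.core.data_block import MDPDataBunch
from fast_rl.core.metrics import *

# Fixed-target DQN on CartPole: experience replay + epsilon-greedy exploration.
data = MDPDataBunch.from_env('CartPole-v1', render='rgb_array', bs=32, add_valid=False)
model = create_dqn_model(data, FixedTargetDQNModule, opt=torch.optim.RMSprop, lr=0.00025)
memory = ExperienceReplay(memory_size=1000, reduce_ram=True)
exploration_method = GreedyEpsilon(epsilon_start=1, epsilon_end=0.1, decay=0.001)
learner = dqn_learner(data=data, model=model, memory=memory, exploration_method=exploration_method)
learner.fit(10)
```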

azure-pipelines.yml

Lines changed: 36 additions & 37 deletions
````diff
@@ -3,43 +3,42 @@
 # Add steps that build, run tests, deploy, and more:
 # https://aka.ms/yaml
 
+# - bash: "sudo apt-get install -y xvfb freeglut3-dev python-opengl --fix-missing"
+#   displayName: 'Install ffmpeg, freeglut3-dev, and xvfb'
+
 trigger:
 - master
 
-pool:
-  vmImage: 'ubuntu-18.04'
-
-steps:
-
-#- bash: "sudo apt-get install -y ffmpeg xvfb freeglut3-dev python-opengl"
-#  displayName: 'Install ffmpeg, freeglut3-dev, and xvfb'
-
-- task: UsePythonVersion@0
-  inputs:
-    versionSpec: '3.7'
-
-# - script: sh ./build/azure_pipeline_helper.sh
-#   displayName: 'Complex Installs'
-
-- script: |
-    # pip install Bottleneck
-    # python setup.py install
-    pip install pytest
-    pip install pytest-cov
-  displayName: 'Install Python Packages'
-
-- script: |
-    xvfb-run -s "-screen 0 1400x900x24" pytest tests --doctest-modules --junitxml=junit/test-results.xml --cov=./ --cov-report=xml --cov-report=html
-  displayName: 'Test with pytest'
-
-- task: PublishTestResults@2
-  condition: succeededOrFailed()
-  inputs:
-    testResultsFiles: '**/test-*.xml'
-    testRunTitle: 'Publish test results for Python $(python.version)'
-
-- task: PublishCodeCoverageResults@1
-  inputs:
-    codeCoverageTool: Cobertura
-    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
-    reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
+jobs:
+- job: 'Test'
+  pool:
+    vmImage: 'ubuntu-16.04' # other options: 'macOS-10.13', 'vs2017-win2016'
+  strategy:
+    matrix:
+      Python36:
+        python.version: '3.6'
+  steps:
+  - task: UsePythonVersion@0
+    inputs:
+      versionSpec: '$(python.version)'
+
+  - bash: "sudo apt-get install -y freeglut3-dev python-opengl"
+    displayName: 'Install freeglut3-dev'
+
+  - script: |
+      python -m pip install --upgrade pip setuptools wheel pytest pytest-cov -e .
+      python setup.py install
+    displayName: 'Install dependencies'
+
+  - script: sh ./build/azure_pipeline_helper.sh
+    displayName: 'Complex Installs'
+
+  - script: |
+      xvfb-run -s "-screen 0 1400x900x24" py.test tests --cov fast_rl --cov-report html --doctest-modules --junitxml=junit/test-results.xml --cov=./ --cov-report=xml --cov-report=html
+    displayName: 'Test with pytest'
+
+  - task: PublishTestResults@2
+    condition: succeededOrFailed()
+    inputs:
+      testResultsFiles: '**/test-*.xml'
+      testRunTitle: 'Publish test results for Python $(python.version)'
````

build/azure_pipeline_helper.sh

Lines changed: 10 additions & 10 deletions
````diff
@@ -1,14 +1,14 @@
 #!/usr/bin/env bash
 
-# Install pybullet
-git clone https://github.com/benelot/pybullet-gym.git
-cd pybullet-gym
-pip install -e .
-cd ../
+## Install pybullet
+#git clone https://github.com/benelot/pybullet-gym.git
+#cd pybullet-gym
+#pip install -e .
+#cd ../
 
-# Install gym_maze
-git clone https://github.com/MattChanTK/gym-maze.git
-cd gym-maze
-python setup.py install
-cd ../
+## Install gym_maze
+#git clone https://github.com/MattChanTK/gym-maze.git
+#cd gym-maze
+#python setup.py install
+#cd ../
 
````
6 binary files changed (contents not shown; four are 33.5 KB each)
