* grammar fixes
* fix nit comments from QA
* add info about on/off policy
* add more context to the code block
* more context and minor fix
* pip3
Co-authored-by: zhuo <[email protected]>
**docs/Python-Custom-Trainer-Plugin.md** (+7 −8 lines)
…in the `Ml-agents` package. This will allow rerouting the `mlagents-learn` CLI to custom trainers, with hyper-parameters specific to your new trainers. We will expose high-level extensible trainer (both on-policy and off-policy), optimizer, and hyperparameter classes with documentation for the use of this plugin. For more information on how the Python plugin system works, see [Plugin interfaces](Training-Plugins.md).
## Overview

Model-free RL algorithms generally fall into two broad categories: on-policy and off-policy. On-policy algorithms perform updates based on data gathered from the current policy. Off-policy algorithms learn a Q function from a buffer of previous data, then use this Q function to make decisions. Off-policy algorithms have three key benefits in the context of ML-Agents: they tend to use fewer samples than on-policy algorithms, since they can pull and re-use data from the buffer many times; they allow player demonstrations to be inserted in-line with RL data into the buffer, enabling new ways of doing imitation learning by streaming player data; and they are conducive to distributed training, where the policy running on other machines may not be synchronized with the current policy.
To add new custom trainers to ML-Agents, you need to create a new Python package. To give you an idea of how to structure your package, we have created an example [mlagents_trainer_plugin](../ml-agents-trainer-plugin) package ourselves, with implementations of the `A2c` and `DQN` algorithms. You would need a `setup.py` file to list extra requirements and […] configuration.
```
…
└── setup.py
```
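As a hedged sketch of what the `setup.py` might contain: the entry-point group `mlagents.trainer_type` matches the group queried with `pkg_resources` later in this documentation, while the package name, version, and module path below are purely illustrative examples, not the actual plugin's values.

```python
from setuptools import setup, find_packages

# Hypothetical setup.py for a custom trainer plugin package. Only the
# entry-point group name "mlagents.trainer_type" is taken from this doc;
# every other name here is an example you would replace with your own.
setup(
    name="my_trainer_plugin",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["mlagents"],
    entry_points={
        "mlagents.trainer_type": [
            # "<trainer key>=<module path>:<factory function>" (illustrative)
            "a2c=my_trainer_plugin.a2c.trainer:get_type_and_setting",
        ]
    },
)
```

The entry-point key (`a2c` here) is what you would later reference as `trainer_type` in your training configuration.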
## Installation and Execution

If you haven't already, follow the [installation instructions](Installation.md). Once you have the `ml-agents-env` and `ml-agents` packages, you can install the plugin package. From the repository's root directory, install `ml-agents-trainer-plugin` (or replace it with the name of your plugin folder).
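A sketch of the installation sequence, assuming the repository layout described above; the exact directory names may differ for your own plugin:

```shell
# Run from the repository root. The first two packages are prerequisites;
# the third is the plugin package (replace with your own plugin folder).
pip3 install -e ./ml-agents-envs
pip3 install -e ./ml-agents
pip3 install -e ./ml-agents-trainer-plugin
```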
**docs/Tutorial-Custom-Trainer-Plugin.md** (+39 −42 lines)
### Step 1: Write your custom trainer class

Before you start writing your code, make sure to use your favorite environment management tool (e.g. `venv` or `conda`) to create and activate a Python virtual environment. The following command uses `conda`, but other tools work similarly:

```shell
conda create -n trainer-env python=3.8.13
conda activate trainer-env
```
Users of the plug-in system are responsible for implementing the trainer class subject to the API standard. Let us follow an example by implementing a custom trainer named "YourCustomTrainer". You can extend either the `OnPolicyTrainer` or the `OffPolicyTrainer` class, depending on the training strategy you choose.
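The structural choice can be sketched with toy stand-in classes (pure Python; the real `OnPolicyTrainer` and `OffPolicyTrainer` live in `mlagents.trainers` and have much richer interfaces):

```python
# Toy stand-ins that only illustrate the subclassing choice; they are NOT
# the real ML-Agents base classes.
class OnPolicyTrainer:           # updates from data gathered by the current policy
    def __init__(self, settings):
        self.settings = settings

class OffPolicyTrainer:          # updates from a replay buffer of past data
    def __init__(self, settings):
        self.settings = settings

class YourCustomTrainer(OnPolicyTrainer):
    def create_policy(self, behavior_name):
        # placeholder for constructing a real policy object
        return {"behavior": behavior_name, "settings": self.settings}

trainer = YourCustomTrainer({"learning_rate": 3e-4})
policy = trainer.create_policy("3DBall")
```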
Please refer to the internal [PPO implementation](../ml-agents/mlagents/trainers/ppo/trainer.py) for a complete code example. We will not provide workable code in this document; the purpose of the tutorial is to introduce you to the core components and interfaces of our plugin framework. We use code snippets and patterns to demonstrate the control and data flow.
Your custom trainers are responsible for collecting experiences and training the models. Your custom trainer class acts as a coordinator for the policy and optimizer. To start implementing methods in the class, create a policy class object from the `create_policy` method:
```python
def create_policy(
    …
```
Depending on whether you use a shared or separate network architecture for your policy, we provide `SimpleActor` and `SharedActorCritic` from `mlagents.trainers.torch_entities.networks` that you can choose from. In our example above, we use a `SimpleActor`.
Next, create an optimizer class object from the `create_optimizer` method and connect it to the policy object you created above.
There are a couple of abstract methods (`_process_trajectory` and `_update_policy`) inherited from `RLTrainer` that you need to implement in your custom trainer class. `_process_trajectory` takes a trajectory and processes it, putting it into the update buffer. Processing involves calculating value and advantage targets for the model updating step. Given the input `trajectory: Trajectory`, users are responsible for processing the data in the trajectory and appending `agent_buffer_trajectory` to the back of the update buffer by calling `self._append_to_update_buffer(agent_buffer_trajectory)`, whose output will be used in updating the model in the `optimizer` class.

A typical (incomplete) `_process_trajectory` function will convert a trajectory object to an agent buffer, then get all value estimates from the trajectory by calling `self.optimizer.get_trajectory_value_estimates`. From the returned dictionary of value estimates, we extract reward signals keyed by their names.
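The `_process_trajectory` pattern can be sketched with toy stand-ins (pure Python; the real `Trajectory` and `AgentBuffer` types come from `mlagents.trainers`, and real value targets come from the optimizer's value estimator rather than plain discounted returns):

```python
# Illustrative stand-in for the _process_trajectory pattern: convert a
# trajectory (a list of step dicts) into an update-buffer-like structure
# with discounted-return value targets. Real code uses mlagents types.
def process_trajectory(trajectory, gamma=0.99):
    returns = []
    running = 0.0
    for step in reversed(trajectory):
        running = step["reward"] + gamma * running
        returns.append(running)
    returns.reverse()
    # Pair each step with its computed target, mimicking an agent buffer.
    return [dict(step, value_target=t) for step, t in zip(trajectory, returns)]

buffer = process_trajectory([{"reward": 1.0}, {"reward": 0.0}, {"reward": 2.0}])
```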
A trajectory will be a list of dictionaries of strings mapped to `Anything`. When calling `forward` on a policy, the argument will include an "experience" dictionary from the last step. The `forward` method will generate an action and the next "experience" dictionary. Examples of fields in the "experience" dictionary include observation, action, reward, done status, group_reward, LSTM memory state, etc.
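A toy illustration of that experience flow (the stand-in policy and field values here are purely illustrative, not the real `forward` signature):

```python
# Toy sketch of the "experience" dictionary flow described above: forward
# consumes the last step's experience and emits an action plus the next
# experience dict. The decision rule is a trivial stand-in for a policy.
def forward(experience):
    action = 1 if experience["observation"] > 0 else 0
    next_obs = experience["observation"] - 1
    next_experience = {
        "observation": next_obs,
        "action": action,
        "reward": float(action),
        "done": next_obs <= 0,
    }
    return action, next_experience

action, nxt = forward({"observation": 2, "action": None, "reward": 0.0, "done": False})
```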
### Step 2: Implement your custom optimizer for the trainer
We will show you an example we implemented, `class TorchPPOOptimizer(TorchOptimizer)`, which takes a policy and a dict of trainer parameters and creates an optimizer that connects to the policy. Your optimizer should include a value estimator and a loss function in the `update` method.
Before writing your optimizer class, first define a settings class, `class PPOSettings(OnPolicyHyperparamSettings)`, for your custom optimizer:
```python
class PPOSettings(OnPolicyHyperparamSettings):
    …
```
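As a hedged sketch of what such a settings class might hold, here is a plain-dataclass stand-in; the real `PPOSettings` subclasses `OnPolicyHyperparamSettings` from ML-Agents' settings machinery, and these field names and defaults are only examples of typical PPO hyperparameters:

```python
from dataclasses import dataclass

# Illustrative stand-in for a hyperparameter settings class; NOT the
# real PPOSettings. Field names/defaults are examples only.
@dataclass
class PPOSettingsSketch:
    beta: float = 5.0e-3     # entropy regularization strength
    epsilon: float = 0.2     # PPO clipping range
    lambd: float = 0.95      # GAE lambda
    num_epoch: int = 3       # passes over the buffer per update

settings = PPOSettingsSketch(epsilon=0.1)
```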
You should implement the `update` function, in which losses and other metrics are calculated from an `AgentBuffer` generated by your trainer class. Depending on which model you choose to implement, the loss functions will differ; in our case, we calculate a value loss from the critic and a trust-region policy loss. A typical (incomplete) pattern for the calculations looks like the following:
```python
loss = (
    …
)
```
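The trust-region (clipped) policy loss mentioned above can be illustrated on a single sample with plain arithmetic; real implementations operate on torch tensors drawn from the `AgentBuffer`, so this stand-alone function is a sketch, not the actual ML-Agents code:

```python
import math

# Single-sample sketch of the clipped-surrogate (trust-region) policy loss.
def clipped_policy_loss(new_logp, old_logp, advantage, epsilon=0.2):
    ratio = math.exp(new_logp - old_logp)          # pi_new / pi_old
    clipped_ratio = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    # PPO maximizes the minimum of the two surrogates, so the loss is negated.
    return -min(ratio * advantage, clipped_ratio * advantage)

loss = clipped_policy_loss(new_logp=0.0, old_logp=0.0, advantage=1.0)
```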
Finally, update the model and return a dictionary including the calculated losses and the updated decay learning rate:
```python
loss.backward()
self.optimizer.step()
update_stats = {
    …
}
```
Create a clean Python environment with Python 3.8+ and activate it before you start, if you haven't done so already:

```shell
conda create -n trainer-env python=3.8.13
conda activate trainer-env
```

Make sure you have followed the previous steps and installed all required packages. We are testing internal implementations in this tutorial, but ML-Agents users can run similar validations once they have their own implementations installed.
Once your package is added as an entry point, you can add the new trainer type to the config file. Check that the trainer type is specified in the config file `a2c_3DBall.yaml`:
```
trainer_type: a2c
```

Test whether the custom trainer package is installed by running `mlagents-learn` with this config file.
You can also list all the trainers installed in the registry. Type `python` in your shell to open a REPL session, then run the Python code below; you should see all trainer types currently installed:

```python
>>> import pkg_resources
>>> for entry in pkg_resources.iter_entry_points('mlagents.trainer_type'):
...     print(entry)
```

If the package is properly installed, you will see the Unity logo and a message indicating training will start:

```
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
```
If you see the following error message, the trainer type may be wrong or the specified trainer may not be installed:

```shell
mlagents.trainers.exception.TrainerConfigError: Invalid trainer type a2c was found
```