
Commit 7a2a922

sankalp04 authored and Ervin T committed
Fix docs for Generalization (#2334)
* Fix naming conventions for consistency
* Add generalization link to ML-Agents Overview
* Add generalization to main Readme
* Include types of samplers available for use
1 parent b78c1e0 commit 7a2a922

File tree: 5 files changed, +56 −14 lines

README.md

Lines changed: 3 additions & 3 deletions
@@ -27,7 +27,7 @@ developer communities.
 * 10+ sample Unity environments
 * Support for multiple environment configurations and training scenarios
 * Train memory-enhanced agents using deep reinforcement learning
-* Easily definable Curriculum Learning scenarios
+* Easily definable Curriculum Learning and Generalization scenarios
 * Broadcasting of agent behavior for supervised learning
 * Built-in support for Imitation Learning
 * Flexible agent control with On Demand Decision Making
@@ -77,11 +77,11 @@ If you run into any problems using the ML-Agents toolkit,
 [submit an issue](https://github.com/Unity-Technologies/ml-agents/issues) and
 make sure to include as much detail as possible.

-Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to [let us know about it](https://github.com/Unity-Technologies/ml-agents/issues/1454).
+Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to [let us know about it](https://github.com/Unity-Technologies/ml-agents/issues/1454).


 For any other questions or feedback, connect directly with the ML-Agents
-
+

 ## Translations
File renamed without changes.

docs/ML-Agents-Overview.md

Lines changed: 10 additions & 1 deletion
@@ -320,7 +320,8 @@ actions from the human player to learn a policy. [Video
 Link](https://youtu.be/kpb8ZkMBFYs).

 ML-Agents provides ways to both learn directly from demonstrations as well as
-use demonstrations to help speed up reward-based training. The
+use demonstrations to help speed up reward-based training, and two algorithms to do
+so (Generative Adversarial Imitation Learning and Behavioral Cloning). The
 [Training with Imitation Learning](Training-Imitation-Learning.md) tutorial
 covers these features in more depth.
@@ -421,6 +422,14 @@ training process.
 the broadcasting feature
 [here](Learning-Environment-Design-Brains.md#using-the-broadcast-feature).

+- **Training with Environment Parameter Sampling** - To train agents to be robust
+  to changes in their environment (i.e., generalization), the agent should be exposed
+  to a variety of environment variations. Similarly to Curriculum Learning, which
+  allows environments to get more difficult as the agent learns, we also provide
+  a way to randomly resample aspects of the environment during training. See
+  [Training with Environment Parameter Sampling](Training-Generalization-Learning.md)
+  to learn more about this feature.
+
 - **Docker Set-up (Experimental)** - To facilitate setting up ML-Agents without
   installing Python or TensorFlow directly, we provide a
   [guide](Using-Docker.md) on how to create and run a Docker container.

docs/Training-Generalization-Learning.md

Lines changed: 42 additions & 9 deletions
@@ -18,8 +18,9 @@ Ball scale of 0.5 | Ball scale of 4
 _Variations of the 3D Ball environment._

 To vary environments, we first decide what parameters to vary in an
-environment. These parameters are known as `Reset Parameters`. In the 3D ball
-environment example displayed in the figure above, the reset parameters are `gravity`, `ball_mass` and `ball_scale`.
+environment. We call these parameters `Reset Parameters`. In the 3D ball
+environment example displayed in the figure above, the reset parameters are
+`gravity`, `ball_mass` and `ball_scale`.

 ## How-to
@@ -31,17 +32,17 @@ can be done either deterministically or randomly.
 This is done by assigning each reset parameter a sampler, which samples a reset
 parameter value (such as a uniform sampler). If a sampler isn't provided for a
 reset parameter, the parameter maintains the default value throughout the
-training, remaining unchanged. The samplers for all the reset parameters are
-handled by a **Sampler Manager**, which also handles the generation of new
+training procedure, remaining unchanged. The samplers for all the reset parameters
+are handled by a **Sampler Manager**, which also handles the generation of new
 values for the reset parameters when needed.

 To set up the Sampler Manager, we set up a YAML file that specifies how we wish to
 generate new samples. In this file, we specify the samplers and the
-`resampling-duration` (number of simulation steps after which reset parameters are
+`resampling-interval` (number of simulation steps after which reset parameters are
 resampled). Below is an example of a sampler file for the 3D ball environment.

 ```yaml
-episode-length: 5000
+resampling-interval: 5000

 mass:
     sampler-type: "uniform"
@@ -59,7 +60,7 @@ scale:

 ```

-* `resampling-duration` (int) - Specifies the number of steps for the agent to
+* `resampling-interval` (int) - Specifies the number of steps for the agent to
  train under a particular environment configuration before resetting the
  environment with a new sample of reset parameters.
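Since the diff shows the example sampler file only in part, here is a sketch of what a complete file in this format can look like. The reset parameter names (`mass`, `gravity`, `scale`) and all values are illustrative; the names must match the reset parameters exposed by the environment being trained:

```yaml
# Illustrative sampler file; redraw reset parameters every 5000 steps.
resampling-interval: 5000

mass:
    sampler-type: "uniform"
    min_value: 0.5
    max_value: 10

gravity:
    sampler-type: "multirange_uniform"
    intervals: [[7, 10], [15, 20]]

scale:
    sampler-type: "uniform"
    min_value: 0.75
    max_value: 3
```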

@@ -77,8 +78,40 @@ environment, then this specification will be ignored.
 key under the `multirange_uniform` sampler for the gravity reset parameter.
 The key name should match the name of the corresponding argument in the sampler definition (see "Defining a new sampler method" below).

+
 The sampler manager allocates a sampler for a reset parameter by using the *Sampler Factory*, which maintains a dictionary mapping of string keys to sampler objects. The samplers available for reset parameter resampling are those registered in the Sampler Factory.

+#### Possible Sampler Types
+
+The currently implemented samplers that can be used with the `sampler-type` argument are:
+
+* `uniform` - Uniform sampler
+    * Uniformly samples a single float value from the half-closed interval
+      [`min_value`, `max_value`). The sub-arguments below specify the
+      interval endpoints.
+
+    * **sub-arguments** - `min_value`, `max_value`
+
+* `gaussian` - Gaussian sampler
+    * Samples a single float value from the Gaussian distribution
+      characterized by the given mean and standard deviation.
+
+    * **sub-arguments** - `mean`, `st_dev`
+
+* `multirange_uniform` - Multirange uniform sampler
+    * Uniformly samples a single float value from a set of intervals, as
+      shown in the sketch after this list. It first performs a weighted pick
+      of an interval from the list of intervals (weighted by interval width)
+      and then samples uniformly from the selected interval (half-closed,
+      as in the uniform sampler). This sampler can take an arbitrary number
+      of intervals in a list of the following format:
+      [[`interval_1_min`, `interval_1_max`], [`interval_2_min`, `interval_2_max`], ...]
+
+    * **sub-arguments** - `intervals`
+
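As a concrete illustration of the `multirange_uniform` behavior just described, the sketch below reimplements the weighted-interval logic in plain Python. It is an illustration only, not the toolkit's implementation:

```python
import random

def multirange_uniform(intervals):
    """Sample a single float from a list of [min, max) intervals.

    An interval is first picked with probability proportional to its
    width, then a value is drawn uniformly from the picked interval.
    """
    widths = [max_v - min_v for min_v, max_v in intervals]
    # Weighted pick of an interval (weighted by interval width).
    chosen_min, chosen_max = random.choices(intervals, weights=widths, k=1)[0]
    # Uniform draw from the selected half-closed interval.
    return random.uniform(chosen_min, chosen_max)

# Example: gravity drawn from [7, 10) or [15, 20); the second, wider
# interval (width 5 of total width 8) is picked 5/8 of the time.
print(multirange_uniform([[7, 10], [15, 20]]))
```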
The implementation of the samplers can be found at `ml-agents-envs/mlagents/envs/sampler_class.py`.

### Defining a new sampler method
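The body of this section falls outside the hunks shown, but given the Sampler Factory described above (a dictionary mapping string keys to sampler objects), a new sampler type amounts to implementing the sampling interface and registering it under a fresh key. A minimal sketch follows; the class name, method name, and registration helper are hypothetical stand-ins, not the toolkit's actual API:

```python
import math
import random

# Toy stand-in for the Sampler Factory's string-key -> sampler mapping;
# the real one lives in ml-agents-envs/mlagents/envs/sampler_class.py.
SAMPLER_FACTORY = {}

def register_sampler(key, sampler_class):
    """Hypothetical registration hook: map a `sampler-type` key to a class."""
    SAMPLER_FACTORY[key] = sampler_class

class ReciprocalSampler:
    """Hypothetical log-uniform sampler over [min_value, max_value)."""

    def __init__(self, min_value, max_value):
        self.min_value = min_value
        self.max_value = max_value

    def sample_parameter(self):
        # Draw uniformly in log space, then map back, so small values
        # are sampled as often per decade as large ones.
        log_lo, log_hi = math.log(self.min_value), math.log(self.max_value)
        return math.exp(random.uniform(log_lo, log_hi))

# Registering under a new key would let a sampler file say
# `sampler-type: "reciprocal"` with sub-arguments `min_value`, `max_value`.
register_sampler("reciprocal", ReciprocalSampler)

sampler = SAMPLER_FACTORY["reciprocal"](min_value=0.5, max_value=10.0)
print(sampler.sample_parameter())
```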
@@ -115,10 +148,10 @@ With the sampler file setup, we can proceed to train our agent as explained in t

 ### Training with Generalization Learning

-We first begin with setting up the sampler file. After the sampler file is defined and configured, we proceed by launching `mlagents-learn` and specify our configured sampler file with the `--sampler` flag. To demonstrate, if we wanted to train a 3D ball agent with generalization using the `config/generalization-test.yaml` sampling setup, we can run
+We begin by setting up the sampler file. After the sampler file is defined and configured, we launch `mlagents-learn` and specify the configured sampler file with the `--sampler` flag. To demonstrate, if we wanted to train a 3D ball agent with generalization using the `config/3dball_generalize.yaml` sampling setup, we can run

 ```sh
-mlagents-learn config/trainer_config.yaml --sampler=config/generalize_test.yaml --run-id=3D-Ball-generalization --train
+mlagents-learn config/trainer_config.yaml --sampler=config/3dball_generalize.yaml --run-id=3D-Ball-generalization --train
 ```

 We can observe progress and metrics via Tensorboard.
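As a usage note, and assuming the toolkit's usual layout at this version (training summaries written to a `summaries` folder), TensorBoard can be pointed there:

```sh
tensorboard --logdir=summaries
```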

docs/Training-ML-Agents.md

Lines changed: 1 addition & 1 deletion
@@ -196,7 +196,7 @@ are conducting, see:
 * [Training with PPO](Training-PPO.md)
 * [Using Recurrent Neural Networks](Feature-Memory.md)
 * [Training with Curriculum Learning](Training-Curriculum-Learning.md)
-* [Training with Generalization](Training-Generalization-Learning.md)
+* [Training with Environment Parameter Sampling](Training-Generalization-Learning.md)
 * [Training with Imitation Learning](Training-Imitation-Learning.md)

 You can also compare the
