
Commit 108b41b

Merge pull request #35 from Unity-Technologies/fix-docs
Move Wiki to Docs directory
2 parents 4b7d0c9 + ad45593 commit 108b41b

14 files changed: +737 -8 lines changed

README.md

Lines changed: 5 additions & 5 deletions
@@ -1,17 +1,17 @@
 <img src="images/unity-wide.png" align="middle" width="3000"/>

-# Unity ML - Agents
+# Unity ML - Agents (Beta)

 **Unity Machine Learning Agents** allows researchers and developers to
 create games and simulations using the Unity Editor which serve as
 environments where intelligent agents can be trained using
 reinforcement learning, neuroevolution, or other machine learning
 methods through a simple-to-use Python API. For more information, see
-the [wiki page](../../wiki).
+the [documentation page](docs).

 For a walkthrough on how to train an agent in one of the provided
 example environments, start
-[here](../../wiki/Getting-Started-with-Balance-Ball).
+[here](docs/Getting-Started-with-Balance-Ball.md).

 ## Features
 * Unity Engine flexibility and simplicity
@@ -27,12 +27,12 @@ example environments, start
 The _Agents SDK_, including example environment scenes is located in
 `unity-environment` folder. For requirements, instructions, and other
 information, see the contained Readme and the relevant
-[wiki page](../../wiki/Making-a-new-Unity-Environment).
+[documentation](docs/Making-a-new-Unity-Environment.md).

 ## Training your Agents

 Once you've built a Unity Environment, example Reinforcement Learning
 algorithms and the Python API are available in the `python`
 folder. For requirements, instructions, and other information, see the
 contained Readme and the relevant
-[wiki page](../../wiki/Unity-Agents---Python-API).
+[documentation](docs/Unity-Agents---Python-API.md).

docs/Agents-Editor-Interface.md

Lines changed: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
# ML Agents Editor Interface

This page contains an explanation of the use of each of the inspector panels relating to the `Academy`, `Brain`, and `Agent` objects.

## Academy

![Academy Inspector](../images/academy.png)

* `Max Steps` - Total number of steps per episode. `0` corresponds to episodes without a maximum number of steps. Once the step counter reaches the maximum, the environment will reset.
* `Frames To Skip` - How many steps of the environment to skip before asking Brains for decisions.
* `Wait Time` - How many seconds to wait between steps when running in `Inference`.
* `Configuration` - The engine-level settings which correspond to rendering quality and engine speed.
    * `Width` - Width of the environment window in pixels.
    * `Height` - Height of the environment window in pixels.
    * `Quality Level` - Rendering quality of the environment (higher is better).
    * `Time Scale` - Speed at which the environment is run (higher is faster).
    * `Target Frame Rate` - FPS the engine attempts to maintain.
* `Default Reset Parameters` - List of custom parameters that can be changed in the environment on reset.
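The reset parameters defined here are plain name/value pairs that can also be supplied from the training side. The snippet below is a minimal, hypothetical sketch: it assumes the `unityagents` package from the `python` directory and assumes its `reset` call accepts a `config` dictionary; check the Python API documentation for the exact signature in your version, and note that both the environment name and the parameter key are illustrative.

```python
from unityagents import UnityEnvironment  # package name assumed from the python/ directory

# Hypothetical: override the Academy's default reset parameters for one reset.
env = UnityEnvironment(file_name="GridWorld")        # environment binary name is illustrative
brain_infos = env.reset(train_mode=True,
                        config={"gridSize": 10.0})   # keys must match parameters set in the Academy inspector
env.close()
```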
## Brain

![Brain Inspector](../images/brain.png)

* `Brain Parameters` - Define state, observation, and action spaces for the Brain.
    * `State Size` - Length of the state vector for the brain (in _Continuous_ state space), or number of possible values (in _Discrete_ state space).
    * `Action Size` - Length of the action vector for the brain (in _Continuous_ action space), or number of possible values (in _Discrete_ action space).
    * `Memory Size` - Length of the memory vector for the brain. Used with recurrent networks and frame-stacking CNNs.
    * `Camera Resolution` - Describes height, width, and whether to greyscale visual observations for the Brain.
    * `Action Descriptions` - A list of strings used to name the available actions for the Brain.
    * `State Space Type` - Corresponds to whether the state vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
    * `Action Space Type` - Corresponds to whether the action vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
* `Type of Brain` - Describes how the Brain will decide actions.
    * `External` - Actions are decided using the Python API.
    * `Internal` - Actions are decided using an internal TensorFlowSharp model.
    * `Player` - Actions are decided using Player input mappings.
    * `Heuristic` - Actions are decided using a custom `Decision` script, which should be attached to the Brain game object.
### Internal Brain

![Internal Brain Inspector](../images/internal_brain.png)

* `Graph Model` : This must be the `bytes` file corresponding to the pretrained TensorFlow graph. (You must first drag this file into your Resources folder and then from the Resources folder into the inspector.)
* `Graph Scope` : If you set a scope while training your TensorFlow model, all your placeholder names will have a prefix. You must specify that prefix here.
* `Batch Size Node Name` : If the batch size is one of the inputs of your graph, you must specify the name of the placeholder here. The brain will automatically make the batch size equal to the number of agents connected to the brain.
* `State Node Name` : If your graph uses the state as an input, you must specify the name of the placeholder here.
* `Recurrent Input Node Name` : If your graph takes a recurrent input / memory as input and outputs a new recurrent input / memory, you must specify the name of the input placeholder here.
* `Recurrent Output Node Name` : If your graph takes a recurrent input / memory as input and outputs a new recurrent input / memory, you must specify the name of the output placeholder here.
* `Observation Placeholder Name` : If your graph uses observations as input, you must specify it here. Note that the number of observations is equal to the length of `Camera Resolutions` in the brain parameters.
* `Action Node Name` : Specify the name of the placeholder corresponding to the actions of the brain in your graph. If the action space type is continuous, the output must be a one-dimensional tensor of floats of length `Action Space Size`; if the action space type is discrete, the output must be a one-dimensional tensor of ints of length 1.
* `Graph Placeholder` : If your graph takes additional inputs that are fixed (for example, a noise level), you can specify them here. Note that in your graph, these must correspond to one-dimensional tensors of int or float of size 1.
    * `Name` : Corresponds to the name of the placeholder.
    * `Value Type` : Either Integer or Floating Point.
    * `Min Value` and `Max Value` : Specify the range of the value here. The value will be sampled from the uniform distribution ranging from `Min Value` to `Max Value` inclusive.
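To make these fields more concrete, here is a minimal TensorFlow 1.x sketch (illustrative only, not the repository's training code) that defines named placeholders under a scope; the sizes and names are assumptions chosen to match the fields above.

```python
import tensorflow as tf

state_size, action_size = 8, 2   # illustrative sizes for a continuous-control brain

with tf.variable_scope("ppo"):   # "ppo" would be entered as `Graph Scope`
    batch_size = tf.placeholder(tf.int32, shape=[], name="batch_size")         # `Batch Size Node Name`
    state = tf.placeholder(tf.float32, shape=[None, state_size], name="state")  # `State Node Name`
    epsilon = tf.placeholder(tf.float32, shape=[1], name="epsilon")            # a fixed `Graph Placeholder`
    weights = tf.Variable(tf.zeros([state_size, action_size]))
    # Name the output tensor explicitly so it can be entered as `Action Node Name`.
    action = tf.identity(tf.matmul(state, weights) + epsilon, name="action")
```

With this scope, the placeholder entered in the inspector as `state` resolves to the full name `ppo/state` through the `Graph Scope` prefix.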
### Player Brain

![Player Brain Inspector](../images/player_brain.png)

If the action space is discrete, you must map input keys to their corresponding integer values. If the action space is continuous, you must map input keys to their corresponding indices and float values.

## Agent

![Agent Inspector](../images/agent.png)

* `Brain` - The brain to register this agent to. Can be dragged into the inspector using the Editor.
* `Observations` - A list of `Cameras` which will be used to generate observations.
* `Max Step` - The per-agent maximum number of steps. Once this number is reached, the agent will be reset if `Reset On Done` is checked.

docs/Example-Environments.md

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
# Example Learning Environments

### About Example Environments
Unity ML Agents currently contains three example environments which demonstrate various features of the platform. More will be added in the coming months. We are also actively open to adding community-contributed environments as examples, as long as they are small, simple, demonstrate a unique feature of the platform, and provide a unique non-trivial challenge to modern RL algorithms. Feel free to submit such environments with a Pull Request explaining the nature of the environment and task.

Environments are located in `unity-environment/ML-Agents/Examples`.
## 3DBall

![Balance Ball](../images/balance.png)

* Set-up: A balance-ball task, where the agent controls the platform.
* Goal: The agent must balance the platform in order to keep the ball on it for as long as possible.
* Agents: The environment contains 12 agents of the same kind, all linked to a single brain.
* Agent Reward Function:
    * +0.1 for every step the ball remains on the platform.
    * -1.0 if the ball falls from the platform.
* Brains: One brain with the following state/action space.
    * State space: (Continuous) 8 variables corresponding to rotation of platform, and position, rotation, and velocity of ball.
    * Action space: (Continuous) Size of 2, with one value corresponding to X-rotation, and the other to Z-rotation.
* Observations: 0
* Reset Parameters: None
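To make the shapes implied above concrete (12 agents on a single brain, each taking a continuous action of size 2), here is a small illustrative NumPy snippet; the sampling range is an assumption for illustration only, not part of the environment definition.

```python
import numpy as np

num_agents, action_size = 12, 2   # from the 3DBall description above
# One action row per agent on the brain: [X-rotation, Z-rotation]
actions = np.random.uniform(-1.0, 1.0, size=(num_agents, action_size))
print(actions.shape)  # (12, 2)
```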
## GridWorld

![GridWorld](../images/gridworld.png)

* Set-up: A version of the classic grid-world task. Scene contains agent, goal, and obstacles.
* Goal: The agent must navigate the grid to the goal while avoiding the obstacles.
* Agents: The environment contains one agent linked to a single brain.
* Agent Reward Function:
    * -0.01 for every step.
    * +1.0 if the agent navigates to the goal position of the grid (episode ends).
    * -1.0 if the agent navigates to an obstacle (episode ends).
* Brains: One brain with the following state/action space.
    * State space: (Continuous) 6 variables corresponding to position of agent and nearest goal and obstacle.
    * Action space: (Discrete) Size of 4, corresponding to movement in cardinal directions.
* Observations: One corresponding to top-down view of GridWorld.
* Reset Parameters: Three, corresponding to grid size, number of obstacles, and number of goals.
## Tennis

![Tennis](../images/tennis.png)

* Set-up: Two-player game where agents control rackets to bounce a ball over a net.
* Goal: The agents must bounce the ball between one another while not dropping or sending the ball out of bounds.
* Agents: The environment contains two agents linked to a single brain.
* Agent Reward Function (independent):
    * -0.1 to the last agent to hit the ball before it goes out of bounds or hits the ground/net (episode ends).
    * +0.1 to an agent when it hits the ball after the other agent has hit it.
    * +0.1 to the agent who didn't hit the ball last when the ball hits the ground.
* Brains: One brain with the following state/action space.
    * State space: (Continuous) 6 variables corresponding to position of agent and nearest goal and obstacle.
    * Action space: (Discrete) Size of 4, corresponding to movement toward net, away from net, jumping, and no-movement.
* Observations: None
* Reset Parameters: One, corresponding to size of ball.
docs/Getting-Started-with-Balance-Ball.md

Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
# Getting Started with the Balance Ball Example

![Balance Ball](../images/balance.png)

This tutorial will walk through the end-to-end process of installing Unity Agents, building an example environment, training an agent in it, and finally embedding the trained model into the Unity environment.

Unity ML Agents contains a number of example environments which can be used as templates for new environments, or as ways to test a new ML algorithm to ensure it is functioning correctly.

In this walkthrough we will be using the **3D Balance Ball** environment. The environment contains a number of platforms and balls. Platforms can act to keep the ball up by rotating either horizontally or vertically. Each platform is an agent which is rewarded the longer it can keep a ball balanced on it, and provided a negative reward for dropping the ball. The goal of the training process is to have the platforms learn to never drop the ball.

Let's get started!

## Getting Unity ML Agents

### Start by installing **Unity 2017.1** or later (required)

Download link available [here](https://store.unity.com/download?ref=update).

If you are new to using the Unity Editor, you can find the general documentation [here](https://docs.unity3d.com/Manual/index.html).

### Clone the repository

Once installed, you will want to clone the Agents GitHub repository. References will be made throughout to the `unity-environment` and `python` directories. Both are located at the root of the repository.

## Building Unity Environment

Launch the Unity Editor, and log in if necessary.

1. Open the `unity-environment` folder using the Unity editor. *(If this is not your first time running Unity, you'll be able to skip most of these immediate steps and choose directly from the list of recently opened projects.)*
    - On the initial dialog, choose `Open` from the options at the top.
    - On the file dialog, choose `unity-environment` and click `Open`. *(It is safe to ignore any warning message about a non-matching editor installation.)*
    - Once the project is open, in the `Project` panel (bottom of the tool), navigate to the folder `Assets/ML-Agents/Examples/3DBall/`.
    - Double-click the `Scene` icon (Unity logo) to load all environment assets.
2. Go to `Edit -> Project Settings -> Player`.
    - Ensure that `Resolution and Presentation -> Run in Background` is checked.
    - Ensure that `Resolution and Presentation -> Display Resolution Dialog` is set to Disabled.
3. Expand the `Ball3DAcademy` GameObject and locate its child object `Ball3DBrain` within the Scene hierarchy in the editor. Ensure the Type of Brain for this object is set to `External`.
4. *File -> Build Settings*
5. Choose your target platform:
    - (optional) Select "Developer Build" to log debug messages.
6. Click *Build*:
    - Save the environment binary to the `python` sub-directory of the cloned repository. *(You may need to click the down arrow on the file chooser to be able to select that folder.)*
## Installing Python API

In order to train an agent within the framework, you will need to install Python 2 or 3, and the dependencies described below.

### Windows Users

If you are a Windows user who is new to Python/TensorFlow, follow [this guide](https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html) to set up your Python environment.

### Requirements
* Jupyter
* Matplotlib
* numpy
* Pillow
* Python (2 or 3)
* scipy
* TensorFlow (1.0+)

### Installing Dependencies

To install dependencies, go into the `python` directory and run:

`pip install .`

or

`pip3 install .`

If your Python environment doesn't include `pip`, see these [instructions](https://packaging.python.org/guides/installing-using-linux-tools/#installing-pip-setuptools-wheel-with-linux-package-managers) on installing it.

Once dependencies are installed, you are ready to test the Ball Balance environment from Python.

### Testing Python API

To launch Jupyter, run in the command line:

`jupyter notebook`

Then navigate to `localhost:8888` to access the notebooks. If you're new to Jupyter, check out the [quick start guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html) before you continue.

To ensure that your environment and the Python API work as expected, you can use the `python/Basics` Jupyter notebook. This notebook contains a simple walkthrough of the functionality of the API. Within `Basics`, be sure to set `env_name` to the name of the environment file you built earlier.
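If you prefer a plain script to a notebook, the loop below is a rough sketch of the kind of interaction `Basics` walks through. It assumes the `unityagents` package installed above and the attribute names used by the API at the time (`brain_names`, `agents`, `local_done`); treat it as illustrative and defer to the `Basics` notebook for the exact API in your version.

```python
import numpy as np
from unityagents import UnityEnvironment  # installed via `pip install .` above (assumed API)

env = UnityEnvironment(file_name="3DBall")     # name of the binary you built earlier (assumed)
brain_name = env.brain_names[0]                # the single External brain in the scene

info = env.reset(train_mode=False)[brain_name]
for _ in range(100):
    # Random continuous actions: one row of size 2 per agent (see the 3DBall description).
    actions = np.random.uniform(-1.0, 1.0, size=(len(info.agents), 2))
    info = env.step(actions)[brain_name]
    if any(info.local_done):                   # an episode ended for at least one agent
        info = env.reset(train_mode=False)[brain_name]
env.close()
```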
## Training the Brain with Reinforcement Learning

### Training with PPO

In order to train an agent to correctly balance the ball, we will use a Reinforcement Learning algorithm called Proximal Policy Optimization (PPO). This is a method that has been shown to be safe, efficient, and more general-purpose than many other RL algorithms; as such, we have chosen it as the example algorithm for use with ML Agents. For more information on PPO, OpenAI has a recent [blog post](https://blog.openai.com/openai-baselines-ppo/) explaining it.
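For reference, the clipped surrogate objective that PPO maximizes (as given in the PPO paper; shown here only as background, not as this repository's exact implementation) is:

```latex
L^{CLIP}(\theta) = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
```

Here \hat{A}_t is the advantage estimate and \epsilon is the clipping range; the `beta` hyperparameter mentioned under the TensorBoard notes below controls the strength of an entropy bonus added alongside this objective.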
In order to train the agents within the Ball Balance environment:

1. Open the `python/PPO.ipynb` notebook from Jupyter.
2. Set `env_name` to whatever you named your environment file.
3. (optional) Set the `run_path` directory to your choice.
4. Run all cells of the notebook except for the final cell.
### Observing Training Progress

In order to observe the training process in more detail, you can use Tensorboard. In your command line, run:

`tensorboard --logdir=summaries`

Then navigate to `localhost:6006`.

From Tensorboard, you will see the summary statistics of six variables:
* Cumulative Reward - The mean cumulative episode reward over all agents. Should increase during a successful training session.
* Value Loss - The mean loss of the value function update. Correlates to how well the model is able to predict the value of each state. This should decrease during a successful training session.
* Policy Loss - The mean loss of the policy function update. Correlates to how much the policy (process for deciding actions) is changing. The magnitude of this should decrease during a successful training session.
* Episode Length - The mean length of each episode in the environment for all agents.
* Value Estimates - The mean value estimate for all states visited by the agent. Should increase during a successful training session.
* Policy Entropy - How random the decisions of the model are. Should slowly decrease during a successful training process. If it decreases too quickly, the `beta` hyperparameter should be increased.
## Embedding Trained Brain into Unity Environment _[Experimental]_

Once the training process displays an average reward of ~75 or greater, and there has been a recently saved model (denoted by the `Saved Model` message), you can choose to stop the training process by stopping the cell execution. Once this is done, you have a trained TensorFlow model. You must now convert the saved model to a Unity-ready format which can be embedded directly into the Unity project by following the steps below.

### Setting up TensorFlowSharp Support

Because TensorFlowSharp support is still experimental, it is disabled by default. In order to enable it, you must follow these steps. Please note that the `Internal` Brain mode will only be available once these steps are completed.

1. Make sure you are using Unity 2017.1 or newer.
2. Make sure the TensorFlowSharp plugin is in your Assets folder. A Plugins folder which includes TF# can be downloaded [here](https://s3.amazonaws.com/unity-agents/TFSharpPlugin.unitypackage).
3. Go to `Edit` -> `Project Settings` -> `Player`.
4. For each of the platforms you target (**`PC, Mac and Linux Standalone`**, **`iOS`** or **`Android`**):
    1. Go into `Other Settings`.
    2. Set `Scripting Runtime Version` to `Experimental (.NET 4.6 Equivalent)`.
    3. In `Scripting Define Symbols`, add the flag `ENABLE_TENSORFLOW`.
5. Restart the Unity Editor.
### Embedding the trained model into Unity

1. Run the final cell of the notebook under "Export the trained TensorFlow graph" to produce an `<env_name>.bytes` file (a sketch of what such an export typically involves follows this list).
2. Move `<env_name>.bytes` from `python/models/...` into `unity-environment/Assets/ML-Agents/Examples/3DBall/TFModels/`.
3. Open the Unity Editor, and select the `3DBall` scene as described above.
4. Select the `3DBallBrain` object from the Scene hierarchy.
5. Change the `Type of Brain` to `Internal`.
6. Drag the `<env_name>.bytes` file from the Project window of the Editor to the `Graph Model` placeholder in the `3DBallBrain` inspector window.
7. Set the `Graph Placeholder` size to 1.
8. Add a placeholder called `epsilon` with a type of `floating point` and a range of values from 0 to 0.
9. Press the Play button at the top of the editor.
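For context, exporting a trained TensorFlow 1.x graph to a single `.bytes` file typically means freezing the trained variables into constants and serializing the resulting `GraphDef`. The sketch below is a generic illustration of that idea, not the notebook's exact export cell; the checkpoint paths and `output_node` name are placeholders.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Generic illustration: freeze a trained TF 1.x graph and write it as a .bytes file.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("models/model.ckpt.meta")   # hypothetical checkpoint paths
    saver.restore(sess, "models/model.ckpt")
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node"])                     # replace with your action node name
    with open("my_env.bytes", "wb") as f:                          # file the Internal brain will load
        f.write(frozen.SerializeToString())
```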
If you followed these steps correctly, you should now see the trained model being used to control the behavior of the balance ball within the Editor itself. From here you can re-build the Unity binary, and run it standalone with your agent's new learned behavior built right in.
