docs/Getting-Started-with-Balance-Ball.md (+2 −2)
@@ -77,9 +77,9 @@ Once the training process displays an average reward of ~75 or greater, and ther
Because TensorFlowSharp support is still experimental, it is disabled by default. In order to enable it, you must follow these steps. Please note that the `Internal` Brain mode will only be available once these steps are completed.

1. Make sure you are using Unity 2017.1 or newer.
-2. Make sure the TensorFlowSharp plugin is in your `Assets` folder. A Plugins folder which includes TF# can be downloaded [here](https://s3.amazonaws.com/unity-agents/TFSharpPlugin.unitypackage). Double-click and import it once downloaded.
+2. Make sure the TensorFlowSharp plugin is in your `Assets` folder. A Plugins folder which includes TF# can be downloaded [here](https://s3.amazonaws.com/unity-agents/0.2/TFSharpPlugin.unitypackage). Double-click and import it once downloaded.
3. Go to `Edit` -> `Project Settings` -> `Player`.
4. For each of the platforms you target (**`PC, Mac and Linux Standalone`**, **`iOS`**, or **`Android`**):
   1. Go into `Other Settings`.
   2. Set `Scripting Runtime Version` to `Experimental (.NET 4.6 Equivalent)`.
   3. In `Scripting Define Symbols`, add the flag `ENABLE_TENSORFLOW`.
docs/Making-a-new-Unity-Environment.md (+11 −14)
@@ -4,9 +4,9 @@ This tutorial walks through the process of creating a Unity Environment. A Unity
## Setting up the Unity Project

-1. Open an existing Unity project, or create a new one and import the RL interface package:
-   * [ML-Agents package without TensorFlowSharp](https://s3.amazonaws.com/unity-agents/ML-AgentsNoPlugin.unitypackage)
-   * [ML-Agents package with TensorFlowSharp](https://s3.amazonaws.com/unity-agents/ML-AgentsWithPlugin.unitypackage)
+1. Open an existing Unity project, or create a new one and import the RL interface package:
+   * [ML-Agents package without TensorFlowSharp](https://s3.amazonaws.com/unity-agents/0.2/ML-AgentsNoPlugin.unitypackage)
+   * [ML-Agents package with TensorFlowSharp](https://s3.amazonaws.com/unity-agents/0.2/ML-AgentsWithPlugin.unitypackage)

2. Rename `TemplateAcademy.cs` (and the contained class name) to the desired name of your new academy class. All Template files are in the folder `Assets -> Template -> Scripts`. Typical naming convention is `YourNameAcademy`.
@@ -23,11 +23,11 @@ This tutorial walks through the process of creating a Unity Environment. A Unity
6. If you will be using TensorFlowSharp in Unity, you must:
   1. Make sure you are using Unity 2017.1 or newer.
-  2. Make sure the TensorFlowSharp plugin is in your `Assets` folder. It can be downloaded [here](https://s3.amazonaws.com/unity-agents/TFSharpPlugin.unitypackage).
+  2. Make sure the TensorFlowSharp [plugin](https://s3.amazonaws.com/unity-agents/0.2/TFSharpPlugin.unitypackage) is in your `Assets` folder.
   3. Go to `Edit` -> `Project Settings` -> `Player`.
   4. For each of the platforms you target (**`PC, Mac and Linux Standalone`**, **`iOS`**, or **`Android`**):
      1. Go into `Other Settings`.
      2. Set `Scripting Runtime Version` to `Experimental (.NET 4.6 Equivalent)`.
      3. In `Scripting Define Symbols`, add the flag `ENABLE_TENSORFLOW`.
7. Note that some of these changes will require a Unity restart.
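The `ENABLE_TENSORFLOW` flag matters because TensorFlowSharp-dependent code is wrapped in conditional compilation. A minimal sketch of the pattern (the class and method here are illustrative, not part of the SDK):

```csharp
// Code that references TensorFlowSharp compiles only when the
// ENABLE_TENSORFLOW symbol is defined in Player Settings.
#if ENABLE_TENSORFLOW
using TensorFlow;
#endif

public static class TensorFlowSupport   // hypothetical helper, for illustration only
{
    public static bool Enabled()
    {
#if ENABLE_TENSORFLOW
        return true;    // `Internal` brain mode is available
#else
        return false;   // add ENABLE_TENSORFLOW to Scripting Define Symbols
#endif
    }
}
```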
@@ -53,11 +53,11 @@ This tutorial walks through the process of creating a Unity Environment. A Unity
4. Within **`AcademyReset()`**, you can reset the environment for a new episode. It should contain environment-specific code for setting up the environment. Note that `AcademyReset()` is called at the beginning of the training session to ensure the first episode is similar to the others.

## Implementing `YourNameBrain`
For each Brain game object in your academy:

1. Click on the game object `YourNameBrain`.

2. In the Inspector tab, you can modify the characteristics of the brain in **`Brain Parameters`**:
   * `State Size`: The number of variables within the state provided to the agent(s).
   * `Action Size`: The number of possible actions for each individual agent to take.
   * `Memory Size`: The number of floats the agents will remember each step.
@@ -73,7 +73,7 @@ For each Brain game object in your academy :
   * `Heuristic`: You can have your brain automatically react to the observations and states in a customizable way. You will need to drag a `Decision` script into `YourNameBrain`. To create a custom reaction, you must:
     * Rename `TemplateDecision.cs` (and the contained class name) to the desired name of your new reaction. Typical naming convention is `YourNameDecision`.
     * Implement `Decide`: Given the state, observation and memory of an agent, this function must return an array of floats corresponding to the actions taken by the agent. If the action space type is discrete, the array must be of size 1.
     * Optionally, implement `MakeMemory`: Given the state, observation and memory of an agent, this function must return an array of floats corresponding to the new memories of the agent.
   * `Internal`: Note that you must have TensorFlowSharp set up (see the top of this page). Here are the fields that must be completed:
     * `Graph Model`: This must be the `bytes` file corresponding to the pretrained TensorFlow graph. (You must first drag this file into your Resources folder, and then from the Resources folder into the Inspector.)
     * `Graph Scope`: If you set a scope while training your TensorFlow model, all your placeholder names will have a prefix. You must specify that prefix here.
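A minimal sketch of a custom `Decision` script for the `Heuristic` mode described above. The class name, control logic, and values are hypothetical, and the method signatures are approximated from this release's `TemplateDecision.cs`; check your own copy of the template before relying on them:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical heuristic: push opposite the sign of the first state variable.
public class YourNameDecision : MonoBehaviour, Decision
{
    public float[] Decide(List<float> state, List<Camera> observation,
                          float reward, bool done, float[] memory)
    {
        // Continuous action space: return one float per action dimension.
        // (If the action space type is discrete, return an array of size 1.)
        return new float[] { state[0] > 0f ? -1f : 1f };
    }

    public float[] MakeMemory(List<float> state, List<Camera> observation,
                              float reward, bool done, float[] memory)
    {
        // Carry the previous first state variable over to the next step.
        return new float[] { state[0] };
    }
}
```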
@@ -87,7 +87,7 @@ For each Brain game object in your academy :
   * `Name`: Corresponds to the name of the placeholder.
   * `Value Type`: Either Integer or Floating Point.
   * `Min Value` and `Max Value`: Specify the minimum and maximum values (inclusive) the placeholder can take. The value will be sampled from the uniform distribution at each step. If you want this value to be fixed, set both `Min Value` and `Max Value` to the same number.

## Implementing `YourNameAgent`

1. Rename `TemplateAgent.cs` (and the contained class name) to the desired name of your new agent. Typical naming convention is `YourNameAgent`.
@@ -103,7 +103,7 @@ For each Brain game object in your academy :
6. Implement the following functions in `YourNameAgent.cs`:
   * `InitializeAgent()`: Use this method to initialize your agent. This method is called when the agent is created. Do **not** use `Awake()`, `Start()` or `OnEnable()`.
   * `CollectState()`: Must return a list of floats corresponding to the state the agent is in. If the state space type is discrete, return a list of length 1 containing the float equivalent of your state.
   * `AgentStep()`: This function will be called every frame; you must define what your agent will do given the input actions. You must also specify the rewards and whether or not the agent is done. To do so, modify the public fields of the agent, `reward` and `done`.
   * `AgentReset()`: This function is called at the start, when the Academy resets, and when the agent is done (if `Reset On Done` is checked).
   * `AgentOnDone()`: If `Reset On Done` is not checked, this function will be called when the agent is done. `Reset()` will only be called when the Academy resets.
@@ -125,6 +125,3 @@ The reward function is the set of circumstances and event which we want to rewar
Small negative rewards are also typically used at each step in scenarios where the optimal agent behavior is to complete an episode as quickly as possible.

Note that the reward is reset to 0 at every step; you must add to the reward (`reward += rewardIncrement`). If you use `skipFrame` in the Academy and set your rewards instead of incrementing them, you might lose information, since the reward is sent at every step, not at every frame.
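The incrementing pattern above can be sketched in an agent's `AgentStep`. The class name, reward values, and failure condition are all illustrative; this assumes the `Agent` base class of this release with its public `reward` and `done` fields:

```csharp
using UnityEngine;

public class YourNameAgent : Agent   // hypothetical agent class
{
    public override void AgentStep(float[] act)
    {
        // Apply the action here (e.g. move the agent using act[0]).

        // Increment the reward rather than assigning it: the reward is reset
        // to 0 at every step, and with skipFrame an assignment could discard
        // reward accumulated on intermediate frames.
        reward += 0.01f;                    // illustrative per-step bonus

        if (transform.position.y < 0f)      // illustrative failure condition
        {
            reward += -1.0f;                // penalty for falling
            done = true;
        }
    }
}
```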
In order to bring a fully trained agent back into Unity, you will need to make sure the nodes of your graph have appropriate names. You can give names to nodes in TensorFlow:
@@ -53,19 +53,19 @@ Your model will be saved with the name `your_name_graph.bytes` and will contain
Go to `Edit` -> `Player Settings` and add `ENABLE_TENSORFLOW` to the `Scripting Define Symbols` for each type of device you want to use (**`PC, Mac and Linux Standalone`**, **`iOS`**, or **`Android`**).

Set the Brain you used for training to `Internal`. Drag `your_name_graph.bytes` into Unity, and then drag it into the `Graph Model` field in the Brain. If you used a scope when training your graph, specify it in the `Graph Scope` field. Specify the names of the nodes you used in your graph. If you followed these instructions, the agents in your environment that use this brain will use your fully trained network to make decisions.

# iOS additional instructions for building

* Once you build for iOS in the editor, Xcode will launch.
* In `General` -> `Linked Frameworks and Libraries`:
  * Add the framework `Accelerate.framework`
  * Remove the library `libtensorflow-core.a`
* In `Build Settings` -> `Linking` -> `Other Linker Flags`:
  * Double-click on the flag list
  * Type `-force_load`
  * Drag the library `libtensorflow-core.a` from the `Project Navigator` on the left, under `Libraries/ML-Agents/Plugins/iOS`, into the flag list.

# Using TensorFlowSharp without ML-Agents

Beyond controlling an in-game agent, you may desire to use TensorFlowSharp for more general computation. The instructions below describe how to embed TensorFlow models generally, without using the ML-Agents framework.
@@ -77,7 +77,7 @@ You must have a Tensorflow graph `your_name_graph.bytes` made using Tensorflow's
Put the file `your_name_graph.bytes` into Resources.

In your C# script:
At the top, add the line

```csharp
using TensorFlow;
```
@@ -87,7 +87,7 @@ If you will be building for android, you must add this block at the start of you
TensorFlowSharp.Android.NativeBinding.Init();
#endif
```

Put your graph as a text asset in the variable `graphModel`. You can do so in the Inspector by making `graphModel` a public variable and dragging your asset into it, or by loading it from the Resources folder:
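A sketch of loading and running the graph with TensorFlowSharp. The node names `"input"` and `"output"`, the input shape, and the wrapper class are assumptions for illustration; substitute the names from your own graph:

```csharp
using TensorFlow;
using UnityEngine;

public class GraphRunner : MonoBehaviour   // hypothetical wrapper class
{
    // Drag your_name_graph.bytes here in the Inspector,
    // or assign it via Resources.Load<TextAsset>(...).
    public TextAsset graphModel;

    void Start()
    {
#if UNITY_ANDROID
        TensorFlowSharp.Android.NativeBinding.Init();
#endif
        // Import the frozen graph and open a session on it.
        var graph = new TFGraph();
        graph.Import(graphModel.bytes);
        var session = new TFSession(graph);

        // "input" and "output" are assumed node names for this sketch.
        var runner = session.GetRunner();
        runner.AddInput(graph["input"][0], new float[1, 2] { { 1.0f, 2.0f } });
        runner.Fetch(graph["output"][0]);
        TFTensor[] results = runner.Run();
        Debug.Log(results[0].GetValue());
    }
}
```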
unity-environment/README.md (+9 −9)
@@ -27,14 +27,14 @@ The `Examples` subfolder contains a set of example environments to use either as
For more information on each of these environments, see this [documentation page](../docs/Example-Environments.md).

Within `ML-Agents/Template` there also exists:
* **Template** - An empty Unity scene with a single _Academy_, _Brain_, and _Agent_. Designed to be used as a template for new environments.

-## Agents SDK Package
+## Agents SDK
A Unity package containing the Agents SDK for Unity 2017.1 can be downloaded here:
-* [ML-Agents package without TensorFlowSharp](https://s3.amazonaws.com/unity-agents/ML-AgentsNoPlugin.unitypackage)
-* [ML-Agents package with TensorFlowSharp](https://s3.amazonaws.com/unity-agents/ML-AgentsWithPlugin.unitypackage)
+* [ML-Agents package without TensorFlowSharp](https://s3.amazonaws.com/unity-agents/0.2/ML-AgentsNoPlugin.unitypackage)
+* [ML-Agents package with TensorFlowSharp](https://s3.amazonaws.com/unity-agents/0.2/ML-AgentsWithPlugin.unitypackage)

For information on the use of each script, see the comments and documentation within the files themselves, or read the [documentation](../../../wiki).

## Creating your own Unity Environment
For information on how to create a new Unity Environment, see the walkthrough [here](../docs/Making-a-new-Unity-Environment.md). If you have questions or run into issues, please feel free to create issues through the repo, and we will do our best to address them.
@@ -43,10 +43,10 @@ For information on how to create a new Unity Environment, see the walkthrough [h
If you will be using TensorFlowSharp in Unity, you must:

1. Make sure you are using Unity 2017.1 or newer.
-2. Make sure the TensorFlowSharp plugin is in your `Assets` folder. A Plugins folder which includes TF# can be downloaded [here](https://s3.amazonaws.com/unity-agents/TFSharpPlugin.unitypackage).
+2. Make sure the TensorFlowSharp [plugin](https://s3.amazonaws.com/unity-agents/0.2/TFSharpPlugin.unitypackage) is in your `Assets` folder.
3. Go to `Edit` -> `Project Settings` -> `Player`.
4. For each of the platforms you target (**`PC, Mac and Linux Standalone`**, **`iOS`**, or **`Android`**):
   1. Go into `Other Settings`.
   2. Set `Scripting Runtime Version` to `Experimental (.NET 4.6 Equivalent)`.
   3. In `Scripting Define Symbols`, add the flag `ENABLE_TENSORFLOW`.