docs/Learning-Environment-Create-New.md
8 additions & 11 deletions
@@ -57,7 +57,7 @@ Next, we will create a very simple scene to act as our ML-Agents environment.
 2. Name the GameObject "Target"
 3. Select Target to view its properties in the Inspector window.
 4. Set Transform to Position = (3,0.5,3), Rotation = (0,0,0), Scale = (1,1,1).
-5. On the Cube's Mesh Renderer, expand the Materials property and change the default-material to *block*.
+5. On the Cube's Mesh Renderer, expand the Materials property and change the default-material to *Block*.
@@ -114,15 +114,13 @@ The default settings for the Academy properties are also fine for this environment.
 ## Add a Brain

-The Brain object encapsulates the decision making process. An Agent sends its observations to its Brain and expects a decision in return. The Brain Type setting determines how the Brain makes decisions. Unlike the Academy and Agent classes, you don't make your own Brain subclasses. (You can extend CoreBrain to make your own *types* of Brain, but the four built-in brain types should cover almost all scenarios.)
+The Brain object encapsulates the decision making process. An Agent sends its observations to its Brain and expects a decision in return. The Brain Type setting determines how the Brain makes decisions. Unlike the Academy and Agent classes, you don't make your own Brain subclasses.

 To create the Brain:

-1. Right-click the Academy GameObject in the Hierarchy window and choose *Create Empty* to add a child GameObject.
-2. Name the new GameObject, "Brain".
-3. Select the Brain GameObject to show its properties in the Inspector window.
-4. Click **Add Component**.
-5. Select the **Scripts/Brain** component to add it to the GameObject.
+1. Select the Brain GameObject created earlier to show its properties in the Inspector window.
+2. Click **Add Component**.
+3. Select the **Scripts/Brain** component to add it to the GameObject.

 We will come back to the Brain properties later, but leave the Brain Type as **Player** for now.
@@ -221,7 +219,6 @@ All the values are divided by 5 to normalize the inputs to the neural network to the range [-1,1].
 In total, the state observation contains 8 values and we need to use the continuous state space when we get around to setting the Brain properties:

-List<float> observation = new List<float>();
 public override void CollectObservations()
 {
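For reference, a minimal sketch of what the full `CollectObservations()` body might look like after this change. It assumes the `AddVectorObs()` helper from this generation of the ML-Agents API and the `Target`/`rBody` fields set up elsewhere in the tutorial; it is a reconstruction for illustration, not the committed listing:

```csharp
// Sketch only: assumes `Target` (Transform) and `rBody` (Rigidbody)
// fields established earlier in the tutorial.
public override void CollectObservations()
{
    // Relative position of the target (2 values)
    Vector3 relativePosition = Target.position - this.transform.position;
    AddVectorObs(relativePosition.x / 5);
    AddVectorObs(relativePosition.z / 5);

    // Distances to the edges of the floor plane (4 values)
    AddVectorObs((this.transform.position.x + 5) / 5);
    AddVectorObs((this.transform.position.x - 5) / 5);
    AddVectorObs((this.transform.position.z + 5) / 5);
    AddVectorObs((this.transform.position.z - 5) / 5);

    // Agent velocity (2 values), for 8 observations in total
    AddVectorObs(rBody.velocity.x / 5);
    AddVectorObs(rBody.velocity.z / 5);
}
```

Each term is divided by 5 so every observation lands roughly in [-1, 1], matching the normalization noted in the hunk header.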
@@ -247,7 +244,7 @@ The final part of the Agent code is the Agent.AgentAction() function, which receives the decision from the Brain.
 **Actions**

-The decision of the Brain comes in the form of an action array passed to the `AgentAction()` function. The number of elements in this array is determined by the `Vector Action Space Type` and `Vector Action Space Size` settings of the agent's Brain. The RollerAgent uses the continuous vector action space and needs two continuous control signals from the brain. Thus, we will set the Brain `Vector Action Size` to 2. The first element, `action[0]`, determines the force applied along the x axis; `action[1]` determines the force applied along the z axis. (If we allowed the agent to move in three dimensions, then we would need to set `Vector Action Size` to 3.) Note the Brain really has no idea what the values in the action array mean. The training process adjust the action values in response to the observation input and then sees what kind of rewards it gets as a result.
+The decision of the Brain comes in the form of an action array passed to the `AgentAction()` function. The number of elements in this array is determined by the `Vector Action Space Type` and `Vector Action Space Size` settings of the agent's Brain. The RollerAgent uses the continuous vector action space and needs two continuous control signals from the brain. Thus, we will set the Brain `Vector Action Size` to 2. The first element, `action[0]`, determines the force applied along the x axis; `action[1]` determines the force applied along the z axis. (If we allowed the agent to move in three dimensions, then we would need to set `Vector Action Size` to 3.) Note the Brain really has no idea what the values in the action array mean. The training process just adjusts the action values in response to the observation input and then sees what kind of rewards it gets as a result.

 Before we can add a force to the agent, we need a reference to its Rigidbody component. A [Rigidbody](https://docs.unity3d.com/ScriptReference/Rigidbody.html) is Unity's primary element for physics simulation. (See [Physics](https://docs.unity3d.com/Manual/PhysicsSection.html) for full documentation of Unity physics.) A good place to set references to other components of the same GameObject is in the standard Unity `Start()` method:
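To make the two halves of this discussion concrete, here is a minimal sketch combining the `Start()` reference-caching with a matching `AgentAction()`, assuming hypothetical `rBody` and `speed` field names (illustrative only, not the committed listing):

```csharp
// Sketch only: `speed` is a hypothetical tuning field added for illustration.
Rigidbody rBody;
public float speed = 10;

void Start()
{
    // Cache the Rigidbody reference once, as recommended above.
    rBody = GetComponent<Rigidbody>();
}

public override void AgentAction(float[] vectorAction, string textAction)
{
    // Two continuous actions: vectorAction[0] is force along x,
    // vectorAction[1] is force along z. The Brain never sees this
    // interpretation; it only learns which outputs earn reward.
    Vector3 controlSignal = Vector3.zero;
    controlSignal.x = vectorAction[0];
    controlSignal.z = vectorAction[1];
    rBody.AddForce(controlSignal * speed);
}
```

The `AgentAction(float[], string)` signature matches the API generation this page targets; later ML-Agents releases renamed and reshaped it, so check the version you are working against.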
@@ -360,10 +357,10 @@ Also, drag the Target GameObject from the Hierarchy window to the RollerAgent Target field.
 Finally, select the Brain GameObject so that you can see its properties in the Inspector window. Set the following properties:

+* `Vector Observation Space Type` = **Continuous**
 * `Vector Observation Space Size` = 8
-* `Vector Action Space Size` = 2
 * `Vector Action Space Type` = **Continuous**
-* `Vector Observation Space Type` = **Continuous**
+* `Vector Action Space Size` = 2
 * `Brain Type` = **Player**

 Now you are ready to test the environment before training.