Commit 273e3f4

Author: Deric Pang
Fixing tables in documentation and other markdown errors. (#1199)
1 parent ac929ed commit 273e3f4

16 files changed (+91 -92 lines)

docs/Basic-Guide.md

Lines changed: 2 additions & 0 deletions
@@ -103,9 +103,11 @@ communicate with the external training process when making their decisions.
 - `--train` tells `mlagents-learn` to run a training session (rather
 than inference)
 4. If you cloned the ML-Agents repo, then you can simply run
+
 ```sh
 mlagents-learn config/trainer_config.yaml --run-id=firstRun --train
 ```
+
 5. When the message _"Start training by pressing the Play button in the Unity
 Editor"_ is displayed on the screen, you can press the :arrow_forward: button
 in Unity to start training in the Editor.

docs/FAQ.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ Within Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.
 If you try to use ML-Agents in Unity versions 2017.1 - 2017.3, you might
 encounter an error that looks like this:

-```console
+```console
 Instance of CoreBrainInternal couldn't be created. The the script
 class needs to derive from ScriptableObject.
 UnityEngine.ScriptableObject:CreateInstance(String)

docs/Feature-Memory.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 ## What are memories used for?

 Have you ever entered a room to get something and immediately forgot what you
-were looking for? Don't let that happen to your agents.
+were looking for? Don't let that happen to your agents.

 It is now possible to give memories to your agents. When training, the agents
 will be able to store a vector of floats to be used next time they need to make

docs/Getting-Started-with-Balance-Ball.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ and Unity, see the [installation instructions](Installation.md).
 An agent is an autonomous actor that observes and interacts with an
 _environment_. In the context of Unity, an environment is a scene containing an
 Academy and one or more Brain and Agent objects, and, of course, the other
-entities that an agent interacts with.
+entities that an agent interacts with.

 ![Unity Editor](images/mlagents-3DBallHierarchy.png)

docs/Installation.md

Lines changed: 6 additions & 2 deletions
@@ -20,7 +20,9 @@ Build Support_ component when installing Unity.

 Once installed, you will want to clone the ML-Agents Toolkit GitHub repository.

-git clone https://github.com/Unity-Technologies/ml-agents.git
+```sh
+git clone https://github.com/Unity-Technologies/ml-agents.git
+```

 The `UnitySDK` subdirectory contains the Unity Assets to add to your projects.
 It also contains many [example environments](Learning-Environment-Examples.md)
@@ -65,7 +67,9 @@ on installing it.
 To install the dependencies and `mlagents` Python package, enter the
 `ml-agents/` subdirectory and run from the command line:

-pip3 install .
+```sh
+pip3 install .
+```

 If you installed this correctly, you should be able to run
 `mlagents-learn --help`

docs/Learning-Environment-Create-New.md

Lines changed: 5 additions & 5 deletions
@@ -31,7 +31,7 @@ steps:
 in the scene that represents the Agent in the simulation. Each Agent object
 must be assigned a Brain object.
 6. If training, set the Brain type to External and
-[run the training process](Training-ML-Agents.md).
+[run the training process](Training-ML-Agents.md).

 **Note:** If you are unfamiliar with Unity, refer to
 [Learning the interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)
@@ -243,7 +243,7 @@ public class RollerAgent : Agent
 public override void AgentReset()
 {
 if (this.transform.position.y < -1.0)
-{
+{
 // The Agent fell
 this.transform.position = Vector3.zero;
 this.rBody.angularVelocity = Vector3.zero;
@@ -550,9 +550,9 @@ your Unity environment.
 There are three kinds of game objects you need to include in your scene in order
 to use Unity ML-Agents:

-* Academy
-* Brain
-* Agents
+* Academy
+* Brain
+* Agents

 Keep in mind:
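
The `AgentReset()` hunk above is cut off by the three-line diff context. Purely for orientation, a minimal sketch of how such a reset method might read in full is shown below; the class wrapper, the `using` directives, the `Start()` initialization, and the `rBody.velocity` reset are assumptions added for completeness, not lines from this commit.

```csharp
using UnityEngine;
using MLAgents; // assumed namespace for this version of the toolkit

public class RollerAgent : Agent
{
    Rigidbody rBody;

    void Start()
    {
        // Cache the Rigidbody so AgentReset() can zero out its motion.
        rBody = GetComponent<Rigidbody>();
    }

    public override void AgentReset()
    {
        if (this.transform.position.y < -1.0)
        {
            // The Agent fell: move it back to the origin and remove any
            // residual motion before the next episode starts.
            this.transform.position = Vector3.zero;
            this.rBody.angularVelocity = Vector3.zero;
            this.rBody.velocity = Vector3.zero; // assumed; not shown in the hunk
        }
    }
}
```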

docs/Learning-Environment-Design-Brains.md

Lines changed: 5 additions & 5 deletions
@@ -4,7 +4,7 @@ The Brain encapsulates the decision making process. Brain objects must be
 children of the Academy in the Unity scene hierarchy. Every Agent must be
 assigned a Brain, but you can use the same Brain with more than one Agent. You
 can also create several Brains, attach each of the Brain to one or more than one
-Agent.
+Agent.

 Use the Brain class directly, rather than a subclass. Brain behavior is
 determined by the **Brain Type**. The ML-Agents toolkit defines four Brain
@@ -71,7 +71,7 @@ to a Brain component:

 The Player, Heuristic and Internal Brains have been updated to support
 broadcast. The broadcast feature allows you to collect data from your Agents
-using a Python program without controlling them.
+using a Python program without controlling them.

 ### How to use: Unity

@@ -85,17 +85,17 @@ When you launch your Unity Environment from a Python program, you can see what
 the Agents connected to non-External Brains are doing. When calling `step` or
 `reset` on your environment, you retrieve a dictionary mapping Brain names to
 `BrainInfo` objects. The dictionary contains a `BrainInfo` object for each
-non-External Brain set to broadcast as well as for any External Brains.
+non-External Brain set to broadcast as well as for any External Brains.

 Just like with an External Brain, the `BrainInfo` object contains the fields for
 `visual_observations`, `vector_observations`, `text_observations`,
 `memories`,`rewards`, `local_done`, `max_reached`, `agents` and
 `previous_actions`. Note that `previous_actions` corresponds to the actions that
-were taken by the Agents at the previous step, not the current one.
+were taken by the Agents at the previous step, not the current one.

 Note that when you do a `step` on the environment, you cannot provide actions
 for non-External Brains. If there are no External Brains in the scene, simply
-call `step()` with no arguments.
+call `step()` with no arguments.

 You can use the broadcast feature to collect data generated by Player,
 Heuristics or Internal Brains game sessions. You can then use this data to train

docs/Learning-Environment-Design-External-Internal-Brains.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ To use a graph model:
 a Brain component.)
 2. Set the **Brain Type** to **Internal**.
 **Note:** In order to see the **Internal** Brain Type option, you must
-[enable TensorFlowSharp](Using-TensorFlow-Sharp-in-Unity.md).
+[enable TensorFlowSharp](Using-TensorFlow-Sharp-in-Unity.md).
 3. Import the `environment_run-id.bytes` file produced by the PPO training
 program. (Where `environment_run-id` is the name of the model file, which is
 constructed from the name of your Unity environment executable and the run-id

docs/Learning-Environment-Design-Player-Brains.md

Lines changed: 14 additions & 26 deletions
@@ -19,32 +19,20 @@ action per step. In contrast, when a Brain uses the continuous action space you
 can send any number of floating point values (up to the **Vector Action Space
 Size** setting).

-| **Property** | | **Description** |
-| :-- |:-- | :-- |
-|**Continuous Player Actions**|| The mapping for the continuous vector action
-space. Shown when the action space is **Continuous**|.
-|| **Size** | The number of key commands defined. You can assign more than one
-command to the same action index in order to send different values for that
-action. (If you press both keys at the same time, deterministic results are not guaranteed.)|
-||**Element 0–N**| The mapping of keys to action values. |
-|| **Key** | The key on the keyboard. |
-|| **Index** | The element of the Agent's action vector to set when this key is
-pressed. The index value cannot exceed the size of the Action Space (minus 1,
-since it is an array index).|
-|| **Value** | The value to send to the Agent as its action for the specified
-index when the mapped key is pressed. All other members of the action vector
-are set to 0. |
-|**Discrete Player Actions**|| The mapping for the discrete vector action space.
-Shown when the action space is **Discrete**.|
-|| **Size** | The number of key commands defined. |
-||**Element 0–N**| The mapping of keys to action values. |
-|| **Key** | The key on the keyboard. |
-|| **Branch Index** |The element of the Agent's action vector to set when this
-key is pressed. The index value cannot exceed the size of the Action Space
-(minus 1, since it is an array index).|
-|| **Value** | The value to send to the Agent as its action when the mapped key
-is pressed. Cannot exceed the max value for the associated branch (minus 1,
-since it is an array index).|
+| **Property** | | **Description** |
+| :---------------------------- | :--------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Continuous Player Actions** | | The mapping for the continuous vector action space. Shown when the action space is **Continuous**. |
+| | **Size** | The number of key commands defined. You can assign more than one command to the same action index in order to send different values for that action. (If you press both keys at the same time, deterministic results are not guaranteed.) |
+| | **Element 0–N** | The mapping of keys to action values. |
+| | **Key** | The key on the keyboard. |
+| | **Index** | The element of the Agent's action vector to set when this key is pressed. The index value cannot exceed the size of the Action Space (minus 1, since it is an array index). |
+| | **Value** | The value to send to the Agent as its action for the specified index when the mapped key is pressed. All other members of the action vector are set to 0. |
+| **Discrete Player Actions** | | The mapping for the discrete vector action space. Shown when the action space is **Discrete**. |
+| | **Size** | The number of key commands defined. |
+| | **Element 0–N** | The mapping of keys to action values. |
+| | **Key** | The key on the keyboard. |
+| | **Branch Index** | The element of the Agent's action vector to set when this key is pressed. The index value cannot exceed the size of the Action Space (minus 1, since it is an array index). |
+| | **Value** | The value to send to the Agent as its action when the mapped key is pressed. Cannot exceed the max value for the associated branch (minus 1, since it is an array index). |

 For more information about the Unity input system, see
 [Input](https://docs.unity3d.com/ScriptReference/Input.html).

docs/Learning-Environment-Design.md

Lines changed: 4 additions & 4 deletions
@@ -61,7 +61,7 @@ To create a training environment, extend the Academy and Agent classes to
 implement the above methods. The `Agent.CollectObservations()` and
 `Agent.AgentAction()` functions are required; the other methods are optional —
 whether you need to implement them or not depends on your specific scenario.
-
+
 **Note:** The API used by the Python PPO training process to communicate with
 and control the Academy during training can be used for other purposes as well.
 For example, you could use the API to use Unity as the simulation engine for
@@ -108,9 +108,9 @@ set in the Unity Editor Inspector. For training, the most important of these
 properties is `Max Steps`, which determines how long each training episode
 lasts. Once the Academy's step counter reaches this value, it calls the
 `AcademyReset()` function to start the next episode.
-
+
 See [Academy](Learning-Environment-Design-Academy.md) for a complete list of
-the Academy properties and their uses.
+the Academy properties and their uses.

 ### Brain

@@ -142,7 +142,7 @@ The Agent class represents an actor in the scene that collects observations and
 carries out actions. The Agent class is typically attached to the GameObject in
 the scene that otherwise represents the actor — for example, to a player object
 in a football game or a car object in a vehicle simulation. Every Agent must be
-assigned a Brain.
+assigned a Brain.

 To create an Agent, extend the Agent class and implement the essential
 `CollectObservations()` and `AgentAction()` methods:
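
The last hunk above ends just before the code it introduces. As a rough illustration of the two required overrides it names, a minimal Agent subclass might look like the following; the observation and action choices, the `MLAgents` namespace, and the exact `AgentAction(float[], string)` signature are assumptions based on the 0.x API described on this page, not content from this commit.

```csharp
using UnityEngine;
using MLAgents; // assumed namespace for this version of the toolkit

public class MinimalAgent : Agent
{
    public override void CollectObservations()
    {
        // Each call adds one float to the vector observation sent to the Brain.
        AddVectorObs(transform.position.x);
        AddVectorObs(transform.position.z);
    }

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // Treat the first continuous action as a velocity along x.
        float move = Mathf.Clamp(vectorAction[0], -1f, 1f);
        transform.Translate(move * Time.deltaTime, 0f, 0f);

        // Small per-step penalty so shorter episodes score higher.
        AddReward(-0.001f);
    }
}
```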
