
Commit 63373b5

shihzy authored and awjuliani committed
Additional typo, grammar, and vocabulary fixes (#197)
* Fixed typo and grammar
* Update Agents-Editor-Interface.md
* Fixed typo
* Fixed vocabulary
* Fixed typo
* Fixed typo
1 parent d09085d commit 63373b5

4 files changed (+6, -6 lines)


docs/Agents-Editor-Interface.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ values (in _Discrete_ action space).
 * `Action Descriptions` - A list of strings used to name the available actions for the Brain.
 * `State Space Type` - Corresponds to whether state vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
 * `Action Space Type` - Corresponds to whether action vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
-* `Type of Brain` - Describes how Brain will decide actions.
+* `Type of Brain` - Describes how the Brain will decide actions.
 * `External` - Actions are decided using Python API.
 * `Internal` - Actions are decided using internal TensorflowSharp model.
 * `Player` - Actions are decided using Player input mappings.
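
As context for the `State Space Type` and `Action Space Type` settings described in this diff, here is a minimal sketch of how the action vector reaches agent code. It assumes the Agent API of this era of ML-Agents (an `AgentStep(float[] act)` override); exact names may differ between versions, and `MoveForward`, `TurnLeft`, and `TurnRight` are hypothetical helpers, not part of the SDK.

```csharp
using UnityEngine;

public class ActionExampleAgent : Agent
{
    // With a Discrete action space, `act` carries a single value encoding the
    // chosen action index; with a Continuous action space it carries one
    // real-valued float per action dimension.
    public override void AgentStep(float[] act)
    {
        // Discrete: interpret act[0] as an action index.
        int action = Mathf.FloorToInt(act[0]);
        if (action == 0) MoveForward();
        else if (action == 1) TurnLeft();
        else if (action == 2) TurnRight();

        // Continuous (alternative): each element is a real-valued control,
        // e.g. float forward = Mathf.Clamp(act[0], -1f, 1f);
    }

    // Hypothetical helpers, not part of ML-Agents.
    void MoveForward() { transform.Translate(Vector3.forward * 0.1f); }
    void TurnLeft()    { transform.Rotate(Vector3.up, -10f); }
    void TurnRight()   { transform.Rotate(Vector3.up, 10f); }
}
```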

docs/Organizing-the-Scene.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ The Academy is responsible for:
 * Coordinating the Brains which must be set as children of the Academy.

 #### Brains
-Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. A Brains is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.
+Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. The brain is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.

 #### Agents
 Each agent within a scene takes actions according to the decisions provided by it's linked Brain. There can be as many Agents of as many types as you like in the scene. The state size and action size of each agent must match the brain's parameters in order for the Brain to decide actions for it.
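
To illustrate the note above that each agent's state size must match its Brain's parameters, here is a minimal sketch of a state-collection override, assuming the `CollectState()` API of this era of ML-Agents; the `target` field is a hypothetical goal object added for the example. Because the list holds three floats, the linked Brain would need a continuous state size of 3.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class StateExampleAgent : Agent
{
    public Transform target;  // hypothetical goal object for the example

    // Returns the state vector that the linked Brain receives each step.
    // Three floats here, so the Brain's continuous state size must be 3.
    public override List<float> CollectState()
    {
        List<float> state = new List<float>();
        state.Add(transform.position.x);
        state.Add(transform.position.z);
        state.Add(Vector3.Distance(transform.position, target.position));
        return state;
    }
}
```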

docs/Unity-Agents-Overview.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@

 ![diagram](../images/agents_diagram.png)

-A visual depiction of how an Learning Environment might be configured within ML-Agents.
+A visual depiction of how a Learning Environment might be configured within ML-Agents.

 The three main kinds of objects within any Agents Learning Environment are:

docs/best-practices.md

Lines changed: 3 additions & 3 deletions
@@ -1,15 +1,15 @@
 # Environment Design Best Practices

 ## General
-* It is often helpful to being with the simplest version of the problem, to ensure the agent can learn it. From there increase
+* It is often helpful to start with the simplest version of the problem, to ensure the agent can learn it. From there increase
 complexity over time. This can either be done manually, or via Curriculum Learning, where a set of lessons which progressively increase in difficulty are presented to the agent ([learn more here](../docs/curriculum.md)).
-* When possible, It is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.
+* When possible, it is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.

 ## Rewards
 * The magnitude of any given reward should typically not be greater than 1.0 in order to ensure a more stable learning process.
 * Positive rewards are often more helpful to shaping the desired behavior of an agent than negative rewards.
 * For locomotion tasks, a small positive reward (+0.1) for forward velocity is typically used.
-* If you want the agent the finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
+* If you want the agent to finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
 * Overly-large negative rewards can cause undesirable behavior where an agent learns to avoid any behavior which might produce the negative reward, even if it is also behavior which can eventually lead to a positive reward.

 ## States
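
The reward guidelines in this diff can be made concrete with a small, hypothetical locomotion sketch, again assuming this era's `AgentStep` API and its `reward`/`done` fields; `ReachedGoal()` is an assumed helper, not part of the SDK.

```csharp
using UnityEngine;

public class LocomotionExampleAgent : Agent
{
    public override void AgentStep(float[] act)
    {
        // Continuous control: act[0] drives forward velocity.
        float forwardVelocity = Mathf.Clamp(act[0], 0f, 1f);
        transform.Translate(Vector3.forward * forwardVelocity * Time.deltaTime);

        // Small positive reward for forward velocity (locomotion guideline).
        reward = 0.1f * forwardVelocity;

        // Small per-step penalty while the task is incomplete,
        // so the agent is encouraged to finish quickly.
        reward -= 0.05f;

        // Completion coincides with the end of the episode; keep the
        // terminal reward's magnitude at or below 1.0.
        if (ReachedGoal())
        {
            reward += 1.0f;
            done = true;
        }
    }

    // Hypothetical helper, not part of ML-Agents.
    bool ReachedGoal() { return transform.position.z > 10f; }
}
```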
