Commit 70fcf6a: Another line break removal

1 parent 9d0b992, commit 70fcf6a

22 files changed: +297 −595 lines changed

com.unity.ml-agents/Documentation~/Background-Machine-Learning.md

Lines changed: 4 additions & 7 deletions

```diff
@@ -38,17 +38,14 @@ Similar to both unsupervised and supervised learning, reinforcement learning als
 One common aspect of all three branches of machine learning is that they all involve a **training phase** and an **inference phase**. While the details of the training and inference phases are different for each of the three, at a high-level, the training phase involves building a model using the provided data, while the inference phase involves applying this model to new, previously unseen, data. More specifically:
 
-- For our unsupervised learning example, the training phase learns the optimal
-  two clusters based on the data describing existing players, while the inference phase assigns a new player to one of these two clusters.
-- For our supervised learning example, the training phase learns the mapping
-  from player attributes to player label (whether they churned or not), and the inference phase predicts whether a new player will churn or not based on that learned mapping.
-- For our reinforcement learning example, the training phase learns the optimal
-  policy through guided trials, and in the inference phase, the agent observes and takes actions in the wild using its learned policy.
+- For our unsupervised learning example, the training phase learns the optimal two clusters based on the data describing existing players, while the inference phase assigns a new player to one of these two clusters.
+- For our supervised learning example, the training phase learns the mapping from player attributes to player label (whether they churned or not), and the inference phase predicts whether a new player will churn or not based on that learned mapping.
+- For our reinforcement learning example, the training phase learns the optimal policy through guided trials, and in the inference phase, the agent observes and takes actions in the wild using its learned policy.
 
 To briefly summarize: all three classes of algorithms involve training and inference phases in addition to attribute and model selections. What ultimately separates them is the type of data available to learn from. In unsupervised learning our data set was a collection of attributes, in supervised learning our data set was a collection of attribute-label pairs, and, lastly, in reinforcement learning our data set was a collection of observation-action-reward tuples.
 
 ## Deep Learning
 
 [Deep learning](https://en.wikipedia.org/wiki/Deep_learning) is a family of algorithms that can be used to address any of the problems introduced above. More specifically, they can be used to solve both attribute and model selection tasks. Deep learning has gained popularity in recent years due to its outstanding performance on several challenging machine learning tasks. One example is [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo), a [computer Go](https://en.wikipedia.org/wiki/Computer_Go) program, that leverages deep learning, that was able to beat Lee Sedol (a Go world champion).
 
-A key characteristic of deep learning algorithms is their ability to learn very complex functions from large amounts of training data. This makes them a natural choice for reinforcement learning tasks when a large amount of data can be generated, say through the use of a simulator or engine such as Unity. By generating hundreds of thousands of simulations of the environment within Unity, we can learn policies for very complex environments (a complex environment is one where the number of observations an agent perceives and the number of actions they can take are large). Many of the algorithms we provide in ML-Agents use some form of deep learning, built on top of the open-source library, [PyTorch](Background-PyTorch.md).
+A key characteristic of deep learning algorithms is their ability to learn very complex functions from large amounts of training data. This makes them a natural choice for reinforcement learning tasks when a large amount of data can be generated, say through the use of a simulator or engine such as Unity. By generating hundreds of thousands of simulations of the environment within Unity, we can learn policies for very complex environments (a complex environment is one where the number of observations an agent perceives and the number of actions they can take are large). Many of the algorithms we provide in ML-Agents use some form of deep learning, built on top of the open-source library, [PyTorch](Background-PyTorch.md).
```
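The training/inference distinction described in the hunk above can be made concrete with a toy version of the unsupervised (clustering) example. This is an illustrative sketch only: a hand-rolled 1-D two-cluster fit over hypothetical hours-played data, not anything shipped with ML-Agents:

```python
def train(hours_played):
    """Training phase: learn two cluster centers from existing players' data."""
    lo, hi = min(hours_played), max(hours_played)
    for _ in range(20):  # a few refinement passes suffice in 1-D
        near_lo = [h for h in hours_played if abs(h - lo) <= abs(h - hi)]
        near_hi = [h for h in hours_played if abs(h - lo) > abs(h - hi)]
        if near_lo:
            lo = sum(near_lo) / len(near_lo)
        if near_hi:
            hi = sum(near_hi) / len(near_hi)
    return lo, hi  # the learned "model": two cluster centers


def infer(model, new_player_hours):
    """Inference phase: assign a new, previously unseen player to a cluster."""
    lo, hi = model
    return 0 if abs(new_player_hours - lo) <= abs(new_player_hours - hi) else 1
```

Training touches the whole historical data set; inference then applies the frozen model to each new player, which mirrors the fit/predict split the bullets describe.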

com.unity.ml-agents/Documentation~/Custom-SideChannels.md

Lines changed: 6 additions & 13 deletions

````diff
@@ -1,8 +1,6 @@
 # Custom Side Channels
 
-You can create your own side channel in C# and Python and use it to communicate custom data structures between the two. This can be useful for situations in which the data to be sent is too complex or structured for the built-in
-`EnvironmentParameters`, or is not related to any specific agent, and therefore
-inappropriate as an agent observation.
+You can create your own side channel in C# and Python and use it to communicate custom data structures between the two. This can be useful for situations in which the data to be sent is too complex or structured for the built-in `EnvironmentParameters`, or is not related to any specific agent, and therefore inappropriate as an agent observation.
 
 ## Overview
 
@@ -12,24 +10,19 @@ In order to use a side channel, it must be implemented as both Unity and Python
 The side channel will have to implement the `SideChannel` abstract class and the following method.
 
-- `OnMessageReceived(IncomingMessage msg)` : You must implement this method and
-  read the data from IncomingMessage. The data must be read in the order that it was written.
+- `OnMessageReceived(IncomingMessage msg)` : You must implement this method and read the data from IncomingMessage. The data must be read in the order that it was written.
 
-The side channel must also assign a `ChannelId` property in the constructor. The
-`ChannelId` is a Guid (or UUID in Python) used to uniquely identify a side
-channel. This Guid must be the same on C# and Python. There can only be one side channel of a certain id during communication.
+The side channel must also assign a `ChannelId` property in the constructor. The `ChannelId` is a Guid (or UUID in Python) used to uniquely identify a side channel. This Guid must be the same on C# and Python. There can only be one side channel of a certain id during communication.
 
 To send data from C# to Python, create an `OutgoingMessage` instance, add data to it, call the `base.QueueMessageToSend(msg)` method inside the side channel, and call the `OutgoingMessage.Dispose()` method.
 
-To register a side channel on the Unity side, call
-`SideChannelManager.RegisterSideChannel` with the side channel as only argument.
+To register a side channel on the Unity side, call `SideChannelManager.RegisterSideChannel` with the side channel as only argument.
 
 ### Python side
 
 The side channel will have to implement the `SideChannel` abstract class. You must implement :
 
-- `on_message_received(self, msg: "IncomingMessage") -> None` : You must
-  implement this method and read the data from IncomingMessage. The data must be read in the order that it was written.
+- `on_message_received(self, msg: "IncomingMessage") -> None` : You must implement this method and read the data from IncomingMessage. The data must be read in the order that it was written.
 
 The side channel must also assign a `channel_id` property in the constructor. The `channel_id` is a UUID (referred in C# as Guid) used to uniquely identify a side channel. This number must be the same on C# and Python. There can only be one side channel of a certain id during communication.
 
@@ -191,4 +184,4 @@ for i in range(1000):
 env.close()
 ```
 
-Now, if you run this script and press `Play` the Unity Editor when prompted, the console in the Unity Editor will display a message at every Python step. Additionally, if you press the Space Bar in the Unity Engine, a message will appear in the terminal.
+Now, if you run this script and press `Play` the Unity Editor when prompted, the console in the Unity Editor will display a message at every Python step. Additionally, if you press the Space Bar in the Unity Engine, a message will appear in the terminal.
````
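The "data must be read in the order that it was written" contract that this file documents can be sketched with minimal stand-ins for the message classes. These are simplified illustrations, not the real `IncomingMessage`/`OutgoingMessage` from `mlagents_envs`, and the wire format shown (little-endian int32 length prefixes) is an assumption:

```python
import struct


class OutgoingMessage:
    """Appends values to a byte buffer in the order they are written."""

    def __init__(self):
        self.buffer = b""

    def write_int32(self, value):
        self.buffer += struct.pack("<i", value)

    def write_string(self, value):
        data = value.encode("utf-8")
        self.buffer += struct.pack("<i", len(data)) + data


class IncomingMessage:
    """Reads values back; types and order must mirror the writer exactly."""

    def __init__(self, buffer):
        self.buffer = buffer
        self.offset = 0

    def read_int32(self):
        (value,) = struct.unpack_from("<i", self.buffer, self.offset)
        self.offset += 4
        return value

    def read_string(self):
        length = self.read_int32()
        value = self.buffer[self.offset:self.offset + length].decode("utf-8")
        self.offset += length
        return value
```

Because the reader just advances an offset through the buffer, calling `read_string()` before `read_int32()` on a buffer written in the opposite order would decode garbage, which is why the C# and Python sides of a custom channel must agree on the write/read sequence.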

com.unity.ml-agents/Documentation~/FAQ.md

Lines changed: 6 additions & 14 deletions

````diff
@@ -31,22 +31,14 @@ UnityAgentsException: The Communicator was unable to connect. Please make sure t
 There may be a number of possible causes:
 
 - _Cause_: There may be no agent in the scene
-- _Cause_: On OSX, the firewall may be preventing communication with the
-  environment. _Solution_: Add the built environment binary to the list of exceptions on the firewall by following [instructions](https://support.apple.com/en-us/HT201642).
-- _Cause_: An error happened in the Unity Environment preventing communication.
-  _Solution_: Look into the [log files](https://docs.unity3d.com/Manual/LogFiles.html) generated by the Unity Environment to figure what error happened.
-- _Cause_: You have assigned `HTTP_PROXY` and `HTTPS_PROXY` values in your
-  environment variables. _Solution_: Remove these values and try again.
-- _Cause_: You are running in a headless environment (e.g. remotely connected
-  to a server). _Solution_: Pass `--no-graphics` to `mlagents-learn`, or
-  `no_graphics=True` to `RemoteRegistryEntry.make()` or the `UnityEnvironment`
-  initializer. If you need graphics for visual observations, you will need to set up `xvfb` (or equivalent).
+- _Cause_: On OSX, the firewall may be preventing communication with the environment. _Solution_: Add the built environment binary to the list of exceptions on the firewall by following [instructions](https://support.apple.com/en-us/HT201642).
+- _Cause_: An error happened in the Unity Environment preventing communication. _Solution_: Look into the [log files](https://docs.unity3d.com/Manual/LogFiles.html) generated by the Unity Environment to figure what error happened.
+- _Cause_: You have assigned `HTTP_PROXY` and `HTTPS_PROXY` values in your environment variables. _Solution_: Remove these values and try again.
+- _Cause_: You are running in a headless environment (e.g. remotely connected to a server). _Solution_: Pass `--no-graphics` to `mlagents-learn`, or `no_graphics=True` to `RemoteRegistryEntry.make()` or the `UnityEnvironment` initializer. If you need graphics for visual observations, you will need to set up `xvfb` (or equivalent).
 
 ## Communication port {} still in use
 
-If you receive an exception
-`"Couldn't launch new environment because communication port {} is still in use. "`,
-you can change the worker number in the Python script when calling
+If you receive an exception `"Couldn't launch new environment because communication port {} is still in use. "`, you can change the worker number in the Python script when calling
 
 ```python
 UnityEnvironment(file_name=filename, worker_id=X)
@@ -58,4 +50,4 @@ If you receive a message `Mean reward : nan` when attempting to train a model us
 
 ## "File name" cannot be opened because the developer cannot be verified.
 
-If you have downloaded the repository using the github website on macOS 10.15 (Catalina) or later, you may see this error when attempting to play scenes in the Unity project. Workarounds include installing the package using the Unity Package Manager (this is the officially supported approach - see [here](Installation.md)), or following the instructions [here](https://support.apple.com/en-us/HT202491) to verify the relevant files on your machine on a file-by-file basis.
+If you have downloaded the repository using the github website on macOS 10.15 (Catalina) or later, you may see this error when attempting to play scenes in the Unity project. Workarounds include installing the package using the Unity Package Manager (this is the officially supported approach - see [here](Installation.md)), or following the instructions [here](https://support.apple.com/en-us/HT202491) to verify the relevant files on your machine on a file-by-file basis.
````
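One way to avoid the port collision the FAQ hunk above describes is to probe for an unused worker number before constructing the environment. A sketch, assuming the default base port of 5005 with `worker_id` added as an offset; `find_free_worker_id` is a hypothetical helper, not part of `mlagents_envs`:

```python
import socket

BASE_PORT = 5005  # assumed mlagents default base port; check your version


def find_free_worker_id(max_workers: int = 16) -> int:
    """Return the first worker_id whose communication port is not in use."""
    for worker_id in range(max_workers):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", BASE_PORT + worker_id))
                return worker_id  # port was free; socket closes on exit
            except OSError:
                continue  # port busy, try the next worker_id
    raise RuntimeError("no free communication port found")
```

You would then call `UnityEnvironment(file_name=filename, worker_id=find_free_worker_id())`. Note this is best-effort: another process could still claim the port between the probe and the environment launch.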

com.unity.ml-agents/Documentation~/Get-Started.md

Lines changed: 4 additions & 8 deletions

```diff
@@ -1,15 +1,11 @@
 # Get started
 The ML-Agents Toolkit contains several main components:
 
-- Unity package `com.unity.ml-agents` contains the
-  Unity C# SDK that will be integrated into your Unity project.
+- Unity package `com.unity.ml-agents` contains the Unity C# SDK that will be integrated into your Unity project.
 - Two Python packages:
-  - `mlagents` contains the machine learning algorithms that
-    enables you to train behaviors in your Unity scene. Most users of ML-Agents will only need to directly install `mlagents`.
-  - `mlagents_envs` contains a set of Python APIs to interact with
-    a Unity scene. It is a foundational layer that facilitates data messaging between Unity scene and the Python machine learning algorithms. Consequently, `mlagents` depends on `mlagents_envs`.
-- Unity [Project](https://github.com/Unity-Technologies/ml-agents/tree/main/Project/Assets/ML-Agents/Examples) that contains several
-  [example environments](Learning-Environment-Examples.md) that highlight the various features of the toolkit to help you get started.
+  - `mlagents` contains the machine learning algorithms that enables you to train behaviors in your Unity scene. Most users of ML-Agents will only need to directly install `mlagents`.
+  - `mlagents_envs` contains a set of Python APIs to interact with a Unity scene. It is a foundational layer that facilitates data messaging between Unity scene and the Python machine learning algorithms. Consequently, `mlagents` depends on `mlagents_envs`.
+- Unity [Project](https://github.com/Unity-Technologies/ml-agents/tree/main/Project/Assets/ML-Agents/Examples) that contains several [example environments](Learning-Environment-Examples.md) that highlight the various features of the toolkit to help you get started.
 
 Use the following topics to get started with ML-Agents.
```

com.unity.ml-agents/Documentation~/Inference-Engine.md

Lines changed: 4 additions & 8 deletions

```diff
@@ -6,19 +6,15 @@ The ML-Agents Toolkit allows you to use pre-trained neural network models inside
 Inference Engine supports [all Unity runtime platforms](https://docs.unity3d.com/Manual/PlatformSpecific.html).
 
-Scripting Backends : Inference Engine is generally faster with
-**IL2CPP** than with **Mono** for Standalone builds. In the Editor, It is not
-possible to use Inference Engine with GPU device selected when Editor Graphics Emulation is set to **OpenGL(ES) 3.0 or 2.0 emulation**. Also there might be non-fatal build time errors when target platform includes Graphics API that does not support **Unity Compute Shaders**.
+Scripting Backends : Inference Engine is generally faster with **IL2CPP** than with **Mono** for Standalone builds. In the Editor, It is not possible to use Inference Engine with GPU device selected when Editor Graphics Emulation is set to **OpenGL(ES) 3.0 or 2.0 emulation**. Also there might be non-fatal build time errors when target platform includes Graphics API that does not support **Unity Compute Shaders**.
 
 In cases when it is not possible to use compute shaders on the target platform, inference can be performed using **CPU** or **GPUPixel** Inference Engine backends.
 
 ## Using Inference Engine
 
-When using a model, drag the model file into the **Model** field in the Inspector of the Agent. Select the **Inference Device**: **Compute Shader**, **Burst** or
-**Pixel Shader** you want to use for inference.
+When using a model, drag the model file into the **Model** field in the Inspector of the Agent. Select the **Inference Device**: **Compute Shader**, **Burst** or **Pixel Shader** you want to use for inference.
 
-**Note:** For most of the models generated with the ML-Agents Toolkit, CPU inference (**Burst**) will
-be faster than GPU inference (**Compute Shader** or **Pixel Shader**). You should use GPU inference only if you use the ResNet visual encoder or have a large number of agents with visual observations.
+**Note:** For most of the models generated with the ML-Agents Toolkit, CPU inference (**Burst**) will be faster than GPU inference (**Compute Shader** or **Pixel Shader**). You should use GPU inference only if you use the ResNet visual encoder or have a large number of agents with visual observations.
 
 # Unsupported use cases
 ## Externally trained models
@@ -27,4 +23,4 @@ The ML-Agents Toolkit only supports the models created with our trainers. Model
 If you wish to run inference on an externally trained model, you should use Inference Engine directly, instead of trying to run it through ML-Agents.
 
 ## Model inference outside of Unity
-We do not provide support for inference anywhere outside of Unity. The `.onnx` files produced by training use the open format ONNX; if you wish to convert a `.onnx` file to another format or run inference with them, refer to their documentation.
+We do not provide support for inference anywhere outside of Unity. The `.onnx` files produced by training use the open format ONNX; if you wish to convert a `.onnx` file to another format or run inference with them, refer to their documentation.
```

com.unity.ml-agents/Documentation~/InputSystem-Integration.md

Lines changed: 4 additions & 5 deletions

```diff
@@ -7,9 +7,9 @@ Take a look at how we have implemented the C# code in the example Input Integrat
 ## Getting Started with Input System Integration
 1. Add the `com.unity.inputsystem` version 1.1.0-preview.3 or later to your project via the Package Manager window.
 2. If you have already setup an InputActionAsset skip to Step 3, otherwise follow these sub steps:
-1. Create an InputActionAsset to allow your Agent to be controlled by the Input System.
-2. Handle the events from the Input System where you normally would (i.e. a script external to your Agent class).
-3. Add the InputSystemActuatorComponent to the GameObject that has the `PlayerInput` and `Agent` components attached.
+3. Create an InputActionAsset to allow your Agent to be controlled by the Input System.
+4. Handle the events from the Input System where you normally would (i.e. a script external to your Agent class).
+5. Add the InputSystemActuatorComponent to the GameObject that has the `PlayerInput` and `Agent` components attached.
 
 Additionally, see below for additional technical specifications on the C# code for the InputActuatorComponent.
 ## Technical Specifications
@@ -22,8 +22,7 @@ that if multiple `Components` on your `GameObject` need to access an `InputActio
 
 ### `InputActuatorComponent` Class
 The `InputActuatorComponent` is the bridge between ML-Agents and the Input System. It allows ML-Agents to:
-* create an `ActionSpec` for your Agent based on an `InputActionAsset` that comes from an
-  `IInputActionAssetProvider`.
+* create an `ActionSpec` for your Agent based on an `InputActionAsset` that comes from an `IInputActionAssetProvider`.
 * send simulated input from a training process or a neural network
 * let developers keep their input handling code in one place
```
