
Commit 906c372

Author: Marwan Mattar
Replaced message printed in Python and in documentation. (#881)
1 parent 2520a79 commit 906c372

7 files changed, +7 -7 lines changed

docs/Basic-Guide.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ object.
 Where:
 - `<run-identifier>` is a string used to separate the results of different training runs
 - And the `--train` tells learn.py to run a training session (rather than inference)
-5. When the message _"Ready to connect with the Editor"_ is displayed on the screen, you can press the :arrow_forward: button in Unity to start training in the Editor.
+5. When the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen, you can press the :arrow_forward: button in Unity to start training in the Editor.
 
 **Note**: Alternatively, you can use an executable rather than the Editor to perform training. Please refer to [this page](Learning-Environment-Executable.md) for instructions on how to build and use an executable.
 
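The flags this hunk documents (`--run-id` and `--train`) can be sketched with a small, self-contained argument parser. This is a hypothetical illustration only; the real learn.py may handle its command line differently.

```python
# Hypothetical sketch of the flags described above; not the actual learn.py parser.
import argparse

def parse_training_args(argv):
    parser = argparse.ArgumentParser(description="Sketch of learn.py options")
    parser.add_argument("--run-id", dest="run_id", default="ppo",
                        help="string used to separate the results of different training runs")
    parser.add_argument("--train", action="store_true",
                        help="run a training session rather than inference")
    return parser.parse_args(argv)

args = parse_training_args(["--run-id=firstRun", "--train"])
print(args.run_id, args.train)  # → firstRun True
```

Without `--train`, the sketch falls back to inference mode (`args.train` is `False`), mirroring the distinction the guide draws.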

docs/Getting-Started-with-Balance-Ball.md

Lines changed: 1 addition & 1 deletion
@@ -213,7 +213,7 @@ To summarize, go to your command line, enter the `ml-agents/python` directory an
 python3 learn.py --run-id=<run-identifier> --train
 ```
 
-When the message _"Ready to connect with the Editor"_ is displayed on the screen, you can press the :arrow_forward: button in Unity to start training in the Editor.
+When the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen, you can press the :arrow_forward: button in Unity to start training in the Editor.
 
 **Note**: If you're using Anaconda, don't forget to activate the ml-agents environment first.
 

docs/Python-API.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ env = UnityEnvironment(file_name="3DBall", worker_id=0, seed=1)
 * `worker_id` indicates which port to use for communication with the environment. For use in parallel training regimes such as A3C.
 * `seed` indicates the seed to use when generating random numbers during the training process. In environments which do not involve physics calculations, setting the seed enables reproducible experimentation by ensuring that the environment and trainers utilize the same random seed.
 
-If you want to directly interact with the Editor, you need to use `file_name=None`, then press the :arrow_forward: button in the Editor when the message _"Ready to connect with the Editor"_ is displayed on the screen
+If you want to directly interact with the Editor, you need to use `file_name=None`, then press the :arrow_forward: button in the Editor when the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen
 
 ## Interacting with a Unity Environment
 
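The `worker_id` and `seed` parameters documented in this hunk can be illustrated with a minimal stand-alone sketch. The base port of 5005 and the helper names here are assumptions for illustration, not the library's actual API.

```python
import random

BASE_PORT = 5005  # assumed base port for illustration; the real value is library-defined

def communication_port(worker_id):
    # Each parallel worker gets its own port, so several environments
    # (e.g. in A3C-style training) can run side by side without colliding.
    return BASE_PORT + worker_id

def reproducible_draws(seed, n=3):
    # With a fixed seed, the same pseudo-random sequence is produced on every run,
    # which is what makes seeded experiments repeatable.
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(n)]

print(communication_port(0), communication_port(1))        # → 5005 5006
print(reproducible_draws(1) == reproducible_draws(1))       # → True
```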

docs/Training-Imitation-Learning.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ There are a variety of possible imitation learning algorithms which can be used,
 3. Set the "Student" brain to External mode.
 4. Link the brains to the desired agents (one agent as the teacher and at least one agent as a student).
 5. In `trainer_config.yaml`, add an entry for the "Student" brain. Set the `trainer` parameter of this entry to `imitation`, and the `brain_to_imitate` parameter to the name of the teacher brain: "Teacher". Additionally, set `batches_per_epoch`, which controls how much training to do each moment. Increase the `max_steps` option if you'd like to keep training the agents for a longer period of time.
-6. Launch the training process with `python3 python/learn.py --train --slow`, and press the :arrow_forward: button in Unity when the message _"Ready to connect with the Editor"_ is displayed on the screen
+6. Launch the training process with `python3 python/learn.py --train --slow`, and press the :arrow_forward: button in Unity when the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen
 7. From the Unity window, control the agent with the Teacher brain by providing "teacher demonstrations" of the behavior you would like to see.
 8. Watch as the agent(s) with the student brain attached begin to behave similarly to the demonstrations.
 9. Once the Student agents are exhibiting the desired behavior, end the training process with `CTL+C` from the command line.

docs/Training-ML-Agents.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The basic command for training is:
 python3 learn.py <env_name> --run-id=<run-identifier> --train
 
 where
-* `<env_name>`__(Optional)__ is the name (including path) of your Unity executable containing the agents to be trained. If `<env_name>` is not passed, the training will happen in the Editor. Press the :arrow_forward: button in Unity when the message _"Ready to connect with the Editor"_ is displayed on the screen.
+* `<env_name>`__(Optional)__ is the name (including path) of your Unity executable containing the agents to be trained. If `<env_name>` is not passed, the training will happen in the Editor. Press the :arrow_forward: button in Unity when the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen.
 * `<run-identifier>` is an optional identifier you can use to identify the results of individual training runs.
 
 For example, suppose you have a project in Unity named "CatsOnBicycles" which contains agents ready to train. To perform the training:

docs/Using-Docker.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ docker run --name <container-name> \
 Notes on argument values:
 - `<container-name>` is used to identify the container (in case you want to interrupt and terminate it). This is optional and Docker will generate a random name if this is not set. _Note that this must be unique for every run of a Docker image._
 - `<image-name>` references the image name used when building the container.
-- `<environemnt-name>` __(Optional)__: If you are training with a linux executable, this is the name of the executable. If you are training in the Editor, do not pass a `<environemnt-name>` argument and press the :arrow_forward: button in Unity when the message _"Ready to connect with the Editor"_ is displayed on the screen.
+- `<environemnt-name>` __(Optional)__: If you are training with a linux executable, this is the name of the executable. If you are training in the Editor, do not pass a `<environemnt-name>` argument and press the :arrow_forward: button in Unity when the message _"Start training by pressing the Play button in the Unity Editor"_ is displayed on the screen.
 - `source`: Reference to the path in your host OS where you will store the Unity executable.
 - `target`: Tells Docker to mount the `source` path as a disk with this name.
 - `docker-target-name`: Tells the ML-Agents Python package what the name of the disk where it can read the Unity executable and store the graph. **This should therefore be identical to `target`.**

python/unityagents/environment.py

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ def __init__(self, file_name=None, worker_id=0,
         if file_name is not None:
             self.executable_launcher(file_name, docker_training, no_graphics)
         else:
-            logger.info("Ready to connect with the Editor.")
+            logger.info("Start training by pressing the Play button in the Unity Editor.")
         self._loaded = True
 
         rl_init_parameters_in = UnityRLInitializationInput(
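The branch changed in `environment.py` can be mirrored in a minimal self-contained sketch. The `connect` function and its `launch` callback are hypothetical stand-ins for the real `__init__`/`executable_launcher` pair, shown only to make the control flow of the commit concrete.

```python
import logging

logger = logging.getLogger("unityagents")

def connect(file_name=None, launch=None):
    # Mirrors the branch touched by this commit: when no executable is given,
    # log the updated prompt and wait for the Editor's Play button instead.
    if file_name is not None:
        launch(file_name)
        return "launched " + file_name
    message = "Start training by pressing the Play button in the Unity Editor."
    logger.info(message)
    return message

print(connect())  # no executable → the new Editor prompt is logged
```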
