Commit fb1bb27

Convert relative path to github url
1 parent 6190d41 commit fb1bb27

15 files changed: +55 -22 lines changed

com.unity.ml-agents/Documentation~/CODE_OF_CONDUCT.md

Lines changed: 0 additions & 1 deletion
This file was deleted.
Lines changed: 36 additions & 1 deletion

@@ -1 +1,36 @@
-{!../../CONTRIBUTING.md!}
# How to Contribute to ML-Agents

## 1. Fork the repository

Fork the ML-Agents repository by clicking the "Fork" button in the top right corner of the GitHub page. This creates a copy of the repository under your GitHub account.

## 2. Set up your development environment

Clone the forked repository to your local machine using Git. Install the necessary dependencies and follow the instructions provided in the project's documentation to set up your development environment properly.
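The clone step might look like the following sketch, assuming your fork lives under YOUR-USERNAME (replace it with your own GitHub account before running); naming the original repository `upstream` is a common convention, not a requirement:

```shell
# Clone your fork (replace YOUR-USERNAME with your GitHub account)
git clone https://github.com/YOUR-USERNAME/ml-agents.git
cd ml-agents

# Optional but handy: track the original repository so you can sync later
git remote add upstream https://github.com/Unity-Technologies/ml-agents.git
git remote -v
```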
## 3. Choose an issue or feature

Browse the project's issue tracker or discussions to find an open issue or feature that you would like to contribute to. Read the guidelines and comments associated with the issue to understand the requirements and constraints.
## 4. Make your changes

Create a new branch for your changes based on the main branch of the ML-Agents repository. Implement your code changes or add new features as necessary. Ensure that your code follows the project's coding style and conventions.

* Example: Let's say you want to add support for a new type of reward function in the ML-Agents framework. You can create a new branch named `feature/reward-function` to implement this feature.
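The branch-creation step can be sketched as follows; to keep the example runnable anywhere it sets up a throwaway repository, but in practice you would run only the `git checkout -b` command inside your ml-agents clone (the branch name comes from the example above):

```shell
# Set up a disposable repository so the example is self-contained
workdir=$(mktemp -d)
cd "$workdir"
git init --quiet .
git -c user.name=example -c user.email=example@example.com \
    commit --quiet --allow-empty -m "initial commit"

# Create and switch to the feature branch off the current branch
git checkout --quiet -b feature/reward-function
git branch --show-current   # prints: feature/reward-function
```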

## 5. Test your changes

Run the appropriate tests to ensure your changes work as intended. If necessary, add new tests to cover your code and verify that it doesn't introduce regressions.

* Example: For the reward function feature, you would write tests to check different scenarios and expected outcomes of the new reward function.
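As a concrete sketch of this step, minimal unit tests for the hypothetical reward function might look like this (`distance_reward` is invented for illustration and is not part of the real ML-Agents API):

```python
def distance_reward(distance: float, max_distance: float = 10.0) -> float:
    """Illustrative reward: 1.0 at the target, decaying linearly to 0.0."""
    return max(0.0, 1.0 - distance / max_distance)


def test_reward_at_target():
    assert distance_reward(0.0) == 1.0


def test_reward_halfway():
    assert distance_reward(5.0) == 0.5


def test_reward_beyond_range():
    # Rewards never go negative, even far beyond the working range
    assert distance_reward(20.0) == 0.0


# In the real project a test runner would collect these;
# calling them directly keeps the sketch self-contained.
test_reward_at_target()
test_reward_halfway()
test_reward_beyond_range()
```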
## 6. Submit a pull request

Push your branch to your forked repository and submit a pull request (PR) to the ML-Agents main repository. Provide a clear and concise description of your changes, explaining the problem you solved or the feature you added.

* Example: In the pull request description, you would explain how the new reward function works, its benefits, and any relevant implementation details.
## 7. Respond to feedback

Be responsive to any feedback or comments provided by the project maintainers. Address the feedback by making necessary revisions to your code and continue the discussion if required.
## 8. Continuous integration and code review

The ML-Agents project utilizes automated continuous integration (CI) systems to run tests on pull requests. Address any issues flagged by the CI system and actively participate in the code review process by addressing comments from reviewers.
## 9. Merge your changes

Once your pull request has been approved and meets all the project's requirements, a project maintainer will merge your changes into the main repository. Congratulations, your contribution has been successfully integrated!
**Remember to always adhere to the project's code of conduct, be respectful, and follow any specific contribution guidelines provided by the ML-Agents project. Happy contributing!**

com.unity.ml-agents/Documentation~/Installation.md

Lines changed: 2 additions & 2 deletions
@@ -151,8 +151,8 @@ conda install "grpcio=1.48.2" -c conda-forge
 Note, you need to have the matching version of
 the Unity packages with the particular release of the python packages. You can find the release history [here](https://github.com/Unity-Technologies/ml-agents/releases)

-By installing the `mlagents` package, the dependencies listed in the
-[setup.py file](../../ml-agents/setup.py) are also installed. These include
+When you install the `mlagents` package, the dependencies listed in the
+[setup.py file](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents/setup.py) are also installed. These include
 [PyTorch](Background-PyTorch.md).

 If you intend to make modifications to `mlagents` or `mlagents_envs`, you should

com.unity.ml-agents/Documentation~/LICENSE.md

Lines changed: 0 additions & 1 deletion
This file was deleted.

com.unity.ml-agents/Documentation~/Learning-Environment-Design.md

Lines changed: 2 additions & 2 deletions
@@ -118,7 +118,7 @@ for curriculum learning and environment parameter randomization for details.
 We recommend modifying the environment from the Agent's `OnEpisodeBegin()`
 function by leveraging `Academy.Instance.EnvironmentParameters`. See the
 WallJump example environment for a sample usage (specifically,
-[WallJumpAgent.cs](../../Project/Assets/ML-Agents/Examples/WallJump/Scripts/WallJumpAgent.cs)
+[WallJumpAgent.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_22/Project/Assets/ML-Agents/Examples/WallJump/Scripts/WallJumpAgent.cs)
 ).

 ## Agent

@@ -158,5 +158,5 @@ environments. These statistics are aggregated and generated during the training
 process. To record statistics, see the `StatsRecorder` C# class.

 See the FoodCollector example environment for a sample usage (specifically,
-[FoodCollectorSettings.cs](../../Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorSettings.cs)
+[FoodCollectorSettings.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_22/Project/Assets/ML-Agents/Examples/FoodCollector/Scripts/FoodCollectorSettings.cs)
 ).

com.unity.ml-agents/Documentation~/Limitations.md

Lines changed: 2 additions & 2 deletions
@@ -3,5 +3,5 @@
 See the package-specific Limitations pages:

 - [`com.unity.mlagents` Unity package](Package-Limitations.md)
-- [`mlagents` Python package](../../ml-agents/README.md#limitations)
-- [`mlagents_envs` Python package](../../ml-agents-envs/README.md#limitations)
+- [`mlagents` Python package](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents/README.md#limitations)
+- [`mlagents_envs` Python package](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents-envs/README.md#limitations)

com.unity.ml-agents/Documentation~/Python-Custom-Trainer-Plugin.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ information on how python plugin system works see [Plugin interfaces](Training-P
 Model-free RL algorithms generally fall into two broad categories: on-policy and off-policy. On-policy algorithms perform updates based on data gathered from the current policy. Off-policy algorithms learn a Q function from a buffer of previous data, then use this Q function to make decisions. Off-policy algorithms have two key benefits in the context of ML-Agents: they tend to use fewer samples than on-policy algorithms, because they can pull and re-use data from the buffer many times, and they allow player demonstrations to be inserted in-line with RL data into the buffer, enabling new ways of doing imitation learning by streaming player data.

 To add new custom trainers to ML-agents, you would need to create a new python package.
-To give you an idea of how to structure your package, we have created a [mlagents_trainer_plugin](../ml-agents-trainer-plugin) package ourselves as an
+To give you an idea of how to structure your package, we have created a [mlagents_trainer_plugin](https://github.com/Unity-Technologies/ml-agents/tree/release_22/ml-agents-trainer-plugin) package ourselves as an
 example, with implementation of `A2c` and `DQN` algorithms. You would need a `setup.py` file to list extra requirements and
 register the new RL algorithm in ml-agents ecosystem and be able to call `mlagents-learn` CLI with your customized
 configuration.
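The off-policy properties described in this file can be sketched with a minimal replay buffer (the names here are illustrative, not the actual ML-Agents trainer internals): transitions are stored once and sampled many times, and demonstration data can be added through the same path as RL data.

```python
import random
from collections import deque


class ReplayBuffer:
    """Toy replay buffer illustrating why off-policy methods re-use samples."""

    def __init__(self, capacity: int = 10_000):
        self._storage = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        # Player demonstrations can be inserted here, in-line with RL data
        self._storage.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        # Each stored transition can appear in many sampled batches,
        # which is what makes off-policy learning sample-efficient
        return random.sample(list(self._storage),
                             min(batch_size, len(self._storage)))


buffer = ReplayBuffer()
for t in range(100):
    buffer.add(t, 0, 1.0, t + 1)  # (state, action, reward, next_state)

batch = buffer.sample(32)
```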

com.unity.ml-agents/Documentation~/Python-Gym-API.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ Unity environment via Python.
 ## Installation

 The gym wrapper is part of the `mlagents_envs` package. Please refer to the
-[mlagents_envs installation instructions](../../ml-agents-envs/README.md).
+[mlagents_envs installation instructions](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents-envs/README.md).


 ## Using the Gym Wrapper

com.unity.ml-agents/Documentation~/Python-LLAPI.md

Lines changed: 2 additions & 2 deletions
@@ -39,7 +39,7 @@ The key objects in the Python API include:
   DecisionSteps and TerminalSteps as well as the expected action shapes.

 These classes are all defined in the
-[base_env](../../ml-agents-envs/mlagents_envs/base_env.py) script.
+[base_env](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents-envs/mlagents_envs/base_env.py) script.

 An Agent "Behavior" is a group of Agents identified by a `BehaviorName` that
 share the same observations and action types (described in their

@@ -59,7 +59,7 @@ release._
 ## Loading a Unity Environment

 Python-side communication happens through `UnityEnvironment` which is located in
-[`environment.py`](../ml-agents-envs/mlagents_envs/environment.py). To load a
+[`environment.py`](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents-envs/mlagents_envs/environment.py). To load a
 Unity environment from a built binary file, put the file in the same directory
 as `envs`. For example, if the filename of your Unity environment is `3DBall`,
 in python, run:
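The hunk ends just before the code sample; for orientation, loading a built `3DBall` binary through the low-level API looks roughly like this (it requires the `mlagents_envs` package and the built environment binary, so treat it as a sketch rather than something runnable as-is):

```python
from mlagents_envs.environment import UnityEnvironment

# file_name omits the platform-specific extension of the built binary
env = UnityEnvironment(file_name="3DBall", seed=1, side_channels=[])
env.reset()

# Inspect the first registered behavior and fetch its current steps
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)
env.close()
```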

com.unity.ml-agents/Documentation~/Python-PettingZoo-API.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ interfacing with a Unity environment via Python.
 ## Installation and Examples

 The PettingZoo wrapper is part of the `mlagents_envs` package. Please refer to the
-[mlagents_envs installation instructions](../../ml-agents-envs/README.md).
+[mlagents_envs installation instructions](https://github.com/Unity-Technologies/ml-agents/blob/release_22/ml-agents-envs/README.md).

 [[Colab] PettingZoo Wrapper Example](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/develop-python-api-ga/ml-agents-envs/colabs/Colab_PettingZoo.ipynb)
