Changes from 13 commits
2 changes: 1 addition & 1 deletion colab/Colab_UnityEnvironment_1_Run.ipynb
@@ -32,7 +32,7 @@
},
"source": [
"# ML-Agents Open a UnityEnvironment\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
Expand Down
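The notebook above opens a UnityEnvironment from Python; as a minimal sketch of that flow (assuming `mlagents_envs` is installed and the default registry can download the GridWorld build, as the Colab does):

```python
# Minimal sketch: open a UnityEnvironment via the default registry.
# Assumes mlagents_envs is installed and the "GridWorld" entry can be
# downloaded in the current environment.
from mlagents_envs.registry import default_registry

env = default_registry["GridWorld"].make()  # downloads and launches the build
env.reset()

behavior_name = list(env.behavior_specs)[0]
print(f"Connected to behavior: {behavior_name}")

env.close()  # always close the environment to free the Unity process
```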
4 changes: 2 additions & 2 deletions colab/Colab_UnityEnvironment_2_Train.ipynb
@@ -22,7 +22,7 @@
},
"source": [
"# ML-Agents Q-Learning with GridWorld\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -190,7 +190,7 @@
"id": "pZhVRfdoyPmv"
},
"source": [
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"\n",
"The observation is an image obtained by a camera on top of the grid.\n",
"\n",
Expand Down
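The hunk above describes GridWorld's camera observation; a rough sketch of inspecting that observation through the Python API (assuming the environment is created from the default registry, as in the notebook):

```python
import matplotlib.pyplot as plt  # only needed to display the frame
from mlagents_envs.registry import default_registry

env = default_registry["GridWorld"].make()
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# GridWorld exposes a visual observation; the exact shape depends on the build
# (the Colab build uses a small square camera image with 3 channels).
print([obs_spec.shape for obs_spec in spec.observation_specs])

decision_steps, _ = env.get_steps(behavior_name)
frame = decision_steps.obs[0][0]  # first observation of the first agent
plt.imshow(frame)
plt.show()

env.close()
```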
10 changes: 5 additions & 5 deletions colab/Colab_UnityEnvironment_3_SideChannel.ipynb
@@ -23,7 +23,7 @@
},
"source": [
"# ML-Agents Use SideChannels\n",
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_22_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -176,7 +176,7 @@
"## Side Channel\n",
"\n",
"SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Python-LLAPI.md#communicating-additional-information-with-the-environment)\n",
"\n",
"\n",
"\n"
@@ -189,7 +189,7 @@
},
"source": [
"### Engine Configuration SideChannel\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Python-LLAPI.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
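A minimal sketch of the EngineConfigurationChannel in use; the parameter values are illustrative only:

```python
from mlagents_envs.registry import default_registry
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = default_registry["GridWorld"].make(side_channels=[channel])

# Speed up the simulation and shrink the rendering window.
channel.set_configuration_parameters(time_scale=20.0, width=84, height=84)

env.reset()
env.close()
```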
@@ -282,7 +282,7 @@
},
"source": [
"### Environment Parameters Channel\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Python-LLAPI.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
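A sketch of setting a float parameter during the simulation; the parameter name here is hypothetical, since GridWorld only reads the keys its C# code asks for:

```python
from mlagents_envs.registry import default_registry
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

channel = EnvironmentParametersChannel()
env = default_registry["GridWorld"].make(side_channels=[channel])
env.reset()

# Keys are arbitrary strings; the C# side reads them through
# Academy.Instance.EnvironmentParameters.GetWithDefault(...).
channel.set_float_parameter("num_goals", 2.0)  # hypothetical parameter name

env.close()
```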
@@ -419,7 +419,7 @@
},
"source": [
"### Creating your own Side Channels\n",
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
]
},
{
Expand Down
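A minimal Python-side sketch of a custom channel, loosely following the linked Custom-SideChannels guide; the UUID is a placeholder and must match the UUID registered by the C# channel:

```python
import uuid

from mlagents_envs.side_channel.side_channel import (
    IncomingMessage,
    OutgoingMessage,
    SideChannel,
)


class StringLogChannel(SideChannel):
    """Example channel that exchanges plain strings with the Unity side."""

    def __init__(self) -> None:
        # Placeholder UUID; it must match the UUID used by the C# channel.
        super().__init__(uuid.UUID("621f0a70-4f87-11ea-a6bf-784f4387d1f7"))

    def on_message_received(self, msg: IncomingMessage) -> None:
        # Called whenever the Unity side sends data on this channel.
        print("From Unity:", msg.read_string())

    def send_string(self, data: str) -> None:
        msg = OutgoingMessage()
        msg.write_string(data)
        # queue_message_to_send is provided by the SideChannel base class.
        super().queue_message_to_send(msg)
```

An instance of this class would then be passed in the `side_channels` list exactly like the built-in channels above.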
2 changes: 1 addition & 1 deletion colab/Colab_UnityEnvironment_4_SB3VectorEnv.ipynb
@@ -7,7 +7,7 @@
},
"source": [
"# ML-Agents run with Stable Baselines 3\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
8 changes: 4 additions & 4 deletions com.unity.ml-agents/Runtime/Academy.cs
@@ -19,8 +19,8 @@
* manages the communication between the learning environment and the Python
* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
* https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/
* Unity scene, please browse our documentation pages:
* https://docs.unity3d.com/Packages/com.unity.ml-agents@latest
*/

namespace Unity.MLAgents
@@ -61,8 +61,8 @@ void FixedUpdate()
/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/" +
"docs/Learning-Environment-Design.md")]
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/" +
"Documentation~/Learning-Environment-Design.md")]
public class Academy : IDisposable
{
/// <summary>
2 changes: 1 addition & 1 deletion com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs
Original file line number Diff line number Diff line change
Expand Up @@ -184,7 +184,7 @@ public interface IActionReceiver
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);
@@ -16,7 +16,7 @@ public interface IDiscreteActionMask
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action.</param>
28 changes: 14 additions & 14 deletions com.unity.ml-agents/Runtime/Agent.cs
Original file line number Diff line number Diff line change
@@ -192,14 +192,14 @@ public override BuiltInActuatorType GetBuiltInActuatorType()
/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design.md
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Readme.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/index.md
///
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/" +
"Documentation~/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]
[DefaultExecutionOrder(-50)]
@@ -728,8 +728,8 @@ public int CompletedEpisodes
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)
@@ -756,8 +756,8 @@ public void SetReward(float reward)
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)
@@ -945,8 +945,8 @@ public virtual void Initialize() { }
/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>
@@ -1203,7 +1203,7 @@ void ResetSensors()
/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{
@@ -1245,7 +1245,7 @@ public ReadOnlyCollection<float> GetStackedObservations()
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
@@ -1312,7 +1312,7 @@ public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
///
/// For more information about implementing agent actions see [Agents - Actions].
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#actions
/// </para>
/// </remarks>
/// <param name="actions">
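The Agent doc comments above cover observations, actions, and rewards on the C# side; for orientation only, a sketch of the Python half of that exchange using the low-level API with random actions (not the trainer):

```python
from mlagents_envs.registry import default_registry

env = default_registry["GridWorld"].make()
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Observations collected by Agent.CollectObservations / its sensors:
    obs = decision_steps.obs
    # Rewards set with Agent.SetReward / AddReward since the last step:
    rewards = decision_steps.reward
    # Actions are delivered to Agent.OnActionReceived on the Unity side.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```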
@@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/develop/com.unity.ml-agents/Documentation~/Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]