Release 22 mm (#6157)
* adding wrench

* correct build path

* release branch and 6.0 target

* XmlDoc update

* addressing xml docs

* more docs

* updating the release

* test xmldoc fixes

* more xml doc fixes

* Uncompress the 3DBall sample

* Fix API documentation

* more xml doc fixes

* Revert "Uncompress the 3DBall sample"

This reverts commit d67dc94.

* reformat MaxStep xml

* more xml doc fixes

* fix more xml doc issues

* fix summary tag

* Updated changelog for missing PRs.

* Removed tabs from .tests.json.

* Updated changelog.

* Removed tabs from CHANGELOG.

* Fix failing ci post upgrade (#6141) (#6145)

* Update PerformancProject and DevProject.

* Removed mac perf tests.

* Removing standalone tests dep from wrench packaging.

* Fixed package works issues. Updated com.unity.ml-agents.md.

* Updated com.unity.ml-agents.md.

* Updated package version in Academy.cs

* Adding back in package pack deps.

* Updated package pack testing deps.

* Regenerated wrench ymls.

* License update.

* Extensions License update.

* Another license tweak.

* Another license tweak.

* Upgraded to sentis 2.1.0.

* Updated standalone yamato build test to use the new ml-agents ubuntu ci bokken image.

* Bumped python and extensions package versions.

* Changed ci image for pytest gpu yamato test.

* Changed default cuda dtype to torch.float32.

* Updated version validation and extensions version.

* Fixed failing GPU test.

* Fixed failing GPU test.

* Updated readme table and make_readme_table.py

* Updated publish to pypi gha.

---------

Co-authored-by: alexandre-ribard <[email protected]>
Co-authored-by: Aurimas Petrovas <>
miguelalonsojr and AlexRibard authored Oct 5, 2024
1 parent 8760552 commit ac576f9
Showing 25 changed files with 64 additions and 63 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/publish_pypi.yaml
@@ -35,7 +35,7 @@ jobs:
python setup.py bdist_wheel
- name: Publish distribution 📦 to Test PyPI
if: startsWith(github.ref, 'refs/tags') && contains(github.ref, 'test')
uses: pypa/gh-action-pypi-publish@master
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.TEST_PYPI_PASSWORD }}
repository_url: https://test.pypi.org/legacy/
2 changes: 1 addition & 1 deletion .yamato/pytest-gpu.yml
@@ -2,7 +2,7 @@ pytest_gpu:
name: Pytest GPU
agent:
type: Unity::VM::GPU
image: ml-agents/ml-agents-ubuntu-18.04:latest
image: ml-agents/ubuntu-ci:v1.0.0
flavor: b1.large
commands:
- |
4 changes: 2 additions & 2 deletions colab/Colab_UnityEnvironment_1_Run.ipynb
@@ -32,7 +32,7 @@
},
"source": [
"# ML-Agents Open a UnityEnvironment\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -149,7 +149,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 1,
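(For reference, the loop this notebook builds is roughly the following sketch, not part of the diff itself; `file_name=None` — connecting to an editor in Play mode — the step count, and the random-action policy are illustrative assumptions.)

```python
# Minimal sketch of driving a UnityEnvironment from mlagents_envs (illustrative, not from this commit).
from mlagents_envs.environment import UnityEnvironment

# file_name=None connects to a Unity editor in Play mode; pass a built player path otherwise.
env = UnityEnvironment(file_name=None, seed=1, side_channels=[])
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Sample a random action for every agent that requested a decision this step.
    env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
    env.step()

env.close()
```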
6 changes: 3 additions & 3 deletions colab/Colab_UnityEnvironment_2_Train.ipynb
@@ -22,7 +22,7 @@
},
"source": [
"# ML-Agents Q-Learning with GridWorld\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -152,7 +152,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@@ -190,7 +190,7 @@
"id": "pZhVRfdoyPmv"
},
"source": [
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"\n",
"The observation is an image obtained by a camera on top of the grid.\n",
"\n",
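(Illustrative sketch of what the GridWorld notebook sets up: the environment is fetched through `default_registry`, and the camera observation arrives as the first entry of `decision_steps.obs`; the step count and prints are assumptions, not part of this diff.)

```python
# Sketch: load the prebuilt GridWorld sample via the mlagents_envs registry and step it randomly.
from mlagents_envs.registry import default_registry

env = default_registry["GridWorld"].make()  # downloads a prebuilt binary on first use
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]
print("Observation shapes:", [obs.shape for obs in spec.observation_specs])

for _ in range(5):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    frame_batch = decision_steps.obs[0]  # camera observations for all agents awaiting a decision
    env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
    env.step()

env.close()
```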
12 changes: 6 additions & 6 deletions colab/Colab_UnityEnvironment_3_SideChannel.ipynb
@@ -23,7 +23,7 @@
},
"source": [
"# ML-Agents Use SideChannels\n",
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_21_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_22_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -153,7 +153,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@@ -176,7 +176,7 @@
"## Side Channel\n",
"\n",
"SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"\n",
"\n",
"\n"
@@ -189,7 +189,7 @@
},
"source": [
"### Engine Configuration SideChannel\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@@ -282,7 +282,7 @@
},
"source": [
"### Environment Parameters Channel\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@@ -419,7 +419,7 @@
},
"source": [
"### Creating your own Side Channels\n",
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
]
},
{
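(Hedged sketch of the side-channel wiring described in that notebook; the `"my_parameter"` key and the specific engine settings are illustrative assumptions, not taken from this commit.)

```python
# Sketch: configure the engine and push environment parameters through side channels.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

engine_channel = EngineConfigurationChannel()
params_channel = EnvironmentParametersChannel()

# Side channels must be registered when the environment is constructed.
env = UnityEnvironment(file_name=None, side_channels=[engine_channel, params_channel])

# Run faster than real time in a small window.
engine_channel.set_configuration_parameters(time_scale=20.0, width=84, height=84)
# The C# side can read this via Academy.Instance.EnvironmentParameters ("my_parameter" is a made-up key).
params_channel.set_float_parameter("my_parameter", 1.0)

env.reset()
env.close()
```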
4 changes: 2 additions & 2 deletions colab/Colab_UnityEnvironment_4_SB3VectorEnv.ipynb
@@ -7,7 +7,7 @@
},
"source": [
"# ML-Agents run with Stable Baselines 3\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -127,7 +127,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
]
},
@@ -28,24 +28,24 @@ The ML-Agents Extensions package is not currently available in the Package Manag
recommended ways to install the package:

### Local Installation
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#advanced-local-installation-for-development-1)
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#advanced-local-installation-for-development-1)
directions (substituting `com.unity.ml-agents.extensions` for the package name).

### Github via Package Manager
In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".

![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/unity_package_manager_git_url.png)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/unity_package_manager_git_url.png)

In the dialog that appears, enter
```
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22
```

You can also edit your project's `manifest.json` directly and add the following line to the `dependencies`
section:
```
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21",
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.
@@ -67,4 +67,4 @@ If using the `InputActuatorComponent`
- No way to customize the action space of the `InputActuatorComponent`

## Need Help?
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/README.md) contains links for contacting the team or getting support.
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/README.md) contains links for contacting the team or getting support.
4 changes: 2 additions & 2 deletions com.unity.ml-agents/Runtime/Academy.cs
@@ -20,7 +20,7 @@
* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
* https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/
* https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/
*/

namespace Unity.MLAgents
@@ -61,7 +61,7 @@ void FixedUpdate()
/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{
2 changes: 1 addition & 1 deletion com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs
@@ -184,7 +184,7 @@ public interface IActionReceiver
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);
@@ -16,7 +16,7 @@ public interface IDiscreteActionMask
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action.</param>
26 changes: 13 additions & 13 deletions com.unity.ml-agents/Runtime/Agent.cs
@@ -192,13 +192,13 @@ public override BuiltInActuatorType GetBuiltInActuatorType()
/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design.md
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Readme.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Readme.md
///
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]
@@ -728,8 +728,8 @@ public int CompletedEpisodes
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)
@@ -756,8 +756,8 @@ public void SetReward(float reward)
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)
@@ -945,8 +945,8 @@ public virtual void Initialize() { }
/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>
@@ -1203,7 +1203,7 @@ void ResetSensors()
/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{
@@ -1245,7 +1245,7 @@ public ReadOnlyCollection<float> GetStackedObservations()
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
@@ -1312,7 +1312,7 @@ public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
///
/// For more information about implementing agent actions see [Agents - Actions].
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </para>
/// </remarks>
/// <param name="actions">
@@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]
8 changes: 4 additions & 4 deletions docs/Installation-Anaconda-Windows.md
@@ -123,10 +123,10 @@ commands in an Anaconda Prompt _(if you open a new prompt, be sure to activate
the ml-agents Conda environment by typing `activate ml-agents`)_:

```sh
git clone --branch release_21 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_22 https://github.com/Unity-Technologies/ml-agents.git
```

The `--branch release_21` option will switch to the tag of the latest stable
The `--branch release_22` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.

@@ -151,7 +151,7 @@ config files in this directory when running `mlagents-learn`. Make sure you are
connected to the Internet and then type in the Anaconda Prompt:

```console
python -m pip install mlagents==1.0.0
python -m pip install mlagents==1.1.0
```

This will complete the installation of all the required Python packages to run
@@ -162,7 +162,7 @@ pip will get stuck when trying to read the cache of the package. If you see
this, you can try:

```console
python -m pip install mlagents==1.0.0 --no-cache-dir
python -m pip install mlagents==1.1.0 --no-cache-dir
```

This `--no-cache-dir` tells the pip to disable the cache.
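A quick way to confirm the install resolved correctly is the same try/except import check the colab notebooks use (illustrative sketch):

```python
# Post-install sanity check (illustrative).
try:
    import mlagents       # trainers package providing mlagents-learn
    import mlagents_envs  # Python API used to talk to Unity
    print("ml-agents is installed")
except ImportError:
    print("ml-agents is NOT installed; try: python -m pip install mlagents==1.1.0")
```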