Commit f68808f

Author: Ruo-Ping (Rachel) Dong

Update changelog for release 8 (#4548)

* update changelog before release 8

Parent: 47d4690


com.unity.ml-agents/CHANGELOG.md

Lines changed: 26 additions & 5 deletions
@@ -7,32 +7,53 @@ and this project adheres to
 [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
+
+### Major Changes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+### Minor Changes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+### Bug Fixes
+#### com.unity.ml-agents (C#)
+#### ml-agents / ml-agents-envs / gym-unity (Python)
+
+
+## [1.5.0-preview] - 2020-10-14
 ### Major Changes
 #### com.unity.ml-agents (C#)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
 - Added the Random Network Distillation (RND) intrinsic reward signal to the Pytorch
   trainers. To use RND, add a `rnd` section to the `reward_signals` section of your
-  yaml configuration file. [More information here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Configuration-File.md#rnd-intrinsic-reward)
+  yaml configuration file. [More information here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Configuration-File.md#rnd-intrinsic-reward) (#4473)
 ### Minor Changes
 #### com.unity.ml-agents (C#)
-- Stacking for compressed observations is now supported. An addtional setting
+- Stacking for compressed observations is now supported. An additional setting
   option `Observation Stacks` is added in editor to sensor components that support
   compressed observations. A new class `ISparseChannelSensor` with an
   additional method `GetCompressedChannelMapping()` is added to generate a mapping
   of the channels in compressed data to the actual channel after decompression,
   for the python side to decompress correctly. (#4476)
-- Added new visual 3DBall environment. (#4513)
+- Added a new visual 3DBall environment. (#4513)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
 - The Communication API was changed to 1.2.0 to indicate support for stacked
   compressed observation. A new entry `compressed_channel_mapping` is added to the
   proto to handle decompression correctly. Newer versions of the package that wish to
   make use of this will also need a compatible version of the Python trainers. (#4476)
-- In `VisualFoodCollector` scene, a vector flag representing the frozen state of
+- In the `VisualFoodCollector` scene, a vector flag representing the frozen state of
   the agent is added to the input observations in addition to the original first-person
   camera frame. The scene is able to train with the provided default config file. (#4511)
+- Added conversion to string for sampler classes to increase the verbosity of
+  the curriculum lesson changes. The lesson updates would now output the sampler
+  stats in addition to the lesson and parameter name to the console. (#4484)
+- Localized documentation in Russian is added. Thanks to @SergeyMatrosov for
+  the contribution. (#4529)
 ### Bug Fixes
 #### com.unity.ml-agents (C#)
-- Fixed a bug where accessing the Academy outside of play mode would cause the Academy to get stepped multiple times when in play mode. (#4532)
+- Fixed a bug where accessing the Academy outside of play mode would cause the
+  Academy to get stepped multiple times when in play mode. (#4532)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
 
 