# Description
Currently, the actions from the policy are applied directly to the environment and are often also fed back to the policy as a last-action observation.
This can lead to instability during training, since applying a large action can trigger a destabilizing feedback loop.
More specifically, applying a very large action produces a large `last_action` observation, which often results in a large error in the critic, which in turn can lead to even larger actions being sampled in the future.
This PR addresses this for the RSL-RL library by clipping the actions to (large) hard limits before applying them to the environment. This prevents the actions from growing without bound and greatly improves training stability.
Fixes #984, #1732, #1999
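As an illustration of the idea (not the exact code added by this PR), a minimal sketch of clipping actions to symmetric hard limits before the environment step; the `clip_actions` helper and the bound of `100.0` are hypothetical placeholders:

```python
import torch


def clip_actions(actions: torch.Tensor, clip_value: float = 100.0) -> torch.Tensor:
    """Clip policy actions to large symmetric hard limits.

    The bound is intentionally loose: it should not interfere with normal
    actions, only prevent runaway magnitudes from being applied to the
    environment and fed back as last-action observations.
    """
    return torch.clamp(actions, -clip_value, clip_value)


# Example usage inside a rollout loop (hypothetical env/policy objects):
# actions = policy(obs)
# obs, reward, done, info = env.step(clip_actions(actions))
```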
## Type of change
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
## Checklist
- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./isaaclab.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [x] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [x] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there