This work re-implements and adapts the core Agile But Safe (ABS) strategies and RA value training pipeline from the ABS project, migrating from IsaacGym to IsaacLab.
The system follows the IsaacLab Manager-Based architecture, with logic and evaluation protocols aligned with the ABS methodology, but adjusted to match IsaacLab conventions (e.g., event/reset configuration, sensor pipeline).
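To illustrate the event/reset conventions mentioned above, a manager-based event term in IsaacLab is declared roughly as follows. This is a hedged sketch against the IsaacLab 2.1 API; the term name and sampling ranges are illustrative, not this repo's actual configuration.

```python
# Sketch of an IsaacLab manager-based event configuration (illustrative
# values, not this repo's settings). Requires an IsaacLab installation.
from isaaclab.managers import EventTermCfg as EventTerm
from isaaclab.utils import configclass
import isaaclab.envs.mdp as mdp


@configclass
class EventCfg:
    """Events applied on reset, replacing IsaacGym-style custom reset code."""

    # Re-sample the robot's root pose and velocity on every episode reset.
    reset_base = EventTerm(
        func=mdp.reset_root_state_uniform,
        mode="reset",
        params={
            "pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
            "velocity_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5)},
        },
    )
```

Randomization and resets are thus declared as configuration terms rather than hand-written reset code, which is the main adjustment relative to the original IsaacGym implementation.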
| | ABS Policy (safer, with recovery) | Agile Policy (fast and aggressive) |
|---|---|---|
| Flat Terrain | ![]() | ![]() |
| Rough Terrain | ![]() | ![]() |
| Low Obstacles | ![]() | ![]() |
The following comparison between ABS and Agile policies is based on over 50k test episodes on flat terrain.
| ABS Policy | Agile Policy |
|---|---|
| ![]() | ![]() |
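A batch comparison like the one above reduces to aggregating per-episode outcomes into rates. The sketch below uses toy data and hypothetical outcome labels; it is not the repo's evaluation script.

```python
# Toy aggregation of per-episode outcomes into summary rates.
# The labels and counts here are illustrative, not measured results.
from collections import Counter


def summarize(outcomes):
    """Return the fraction of episodes carrying each outcome label."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: n / total for label, n in counts.items()}


# Hypothetical results from a batch of rollouts.
episodes = ["success"] * 90 + ["collision"] * 6 + ["timeout"] * 4
print(summarize(episodes))
# → {'success': 0.9, 'collision': 0.06, 'timeout': 0.04}
```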
The following visualization shows RA values over 2D position grids under different commanded velocities. Warmer colors (red) indicate higher risk, and cooler colors (green) indicate safety.
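A minimal sketch of how such a grid can be produced. The `ra_value` function here is a toy stand-in for the trained RA network; nothing in this block is the repo's actual code.

```python
# Evaluate a toy RA value on a 2D position grid and render it with the
# warm-risky / cool-safe color convention described above.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt


def ra_value(x, y, obstacle=(1.0, 0.0), radius=0.5):
    """Toy stand-in for the learned RA value: positive (risky) near the
    obstacle, negative (safe) away from it."""
    dist = np.hypot(x - obstacle[0], y - obstacle[1])
    return np.clip(radius - dist, -1.0, 1.0)


# Sample positions on a 2D grid, as in the visualizations above.
xs = np.linspace(-2.0, 2.0, 100)
X, Y = np.meshgrid(xs, xs)
V = ra_value(X, Y)

# Reversed red-green colormap: red (warm) = high RA value = risky.
plt.imshow(V, origin="lower", extent=(-2, 2, -2, 2), cmap="RdYlGn_r")
plt.colorbar(label="RA value")
plt.savefig("ra_values.png")
```

In the real pipeline the grid values would come from querying the trained RA network at each position under a fixed commanded velocity.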
This project is built on the IsaacLab framework: https://isaac-sim.github.io/IsaacLab/v2.1.0/

Training and testing were done in the IsaacLab Docker container, version 2.1.0: https://isaac-sim.github.io/IsaacLab/main/source/deployment/docker.html
Dependency versions:
- Python: 3.10.15
- NumPy: 1.26.4
- PyTorch: 2.5.1+cu118
Train the position-tracking (agile) policy and the recovery policy:

```shell
./isaaclab.sh -p ./scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Velocity-Flat-Pos-Unitree-Go1-v0 --headless --max_iterations=800
./isaaclab.sh -p ./scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Velocity-Flat-Rec-Unitree-Go1-v0 --headless --max_iterations=800
```

Note: the corresponding bool flag needs to be set in `play.py`.
Play the trained policy:

```shell
./isaaclab.sh -p ./scripts/reinforcement_learning/rsl_rl/play.py --task=Isaac-Velocity-Flat-Pos-Unitree-Go1-Play-v0 --headless --num_envs=1 --video --enable_cameras --video_length=5000
```

Passing `--video` (together with `--enable_cameras`) records a video of the rollout.
This repository is built upon the support and contributions of the following open-source projects. Special thanks to: