This repository contains the training code for the paper "Learning Point-to-Point Bipedal Walking Without Global Navigation", built with Isaac Lab.
- Install Isaac Lab by following its installation guide. Please use Isaac Lab v2.1.0 with Isaac Sim 4.5.0.
- Using a Python interpreter that has Isaac Lab installed, install the library:

```bash
cd Bipedal-p2p-walk
python -m pip install -e source/AzureLoong
```
Train and play an agent with RSL-RL on the bipedal robot AzureLoong:

```bash
python scripts/rsl_rl/train.py --task=walk_p2p_s1 --headless
python scripts/rsl_rl/play.py --task=walk_p2p_s1 --num_envs 5
```
The project is mainly organized as follows:
```
Bipedal-p2p-walk/
├── cmd.txt
├── scripts/
│   └── rsl_rl/
│       ├── cli_args.py
│       ├── export.py
│       ├── play.py
│       └── train.py
└── source/
    └── AzureLoong/
        ├── AzureLoong/
        ├── assets/
        │   ├── AzureLoong.py
        │   ├── __init__.py
        │   └── Robots/
        │       ├── AzureLoong_shortFeet.usd
        │       └── configuration/
        └── tasks/
            └── flat_walk/
                ├── agents/
                │   └── rsl_rl_ppo_cfg.py
                ├── base_scripts/
                │   ├── cfg_base.py
                │   └── env_base.py
                ├── cfg_p2p_s1.py
                ├── cfg_p2p_s2.py
                └── env_p2p_s1.py
```
The robot asset is stored as AzureLoong_shortFeet.usd.
Joint parameters such as stiffness and damping are configured in AzureLoong.py.
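For orientation, here is a minimal sketch of how joint stiffness and damping are typically declared in an Isaac Lab articulation config. The joint-name patterns, gain values, initial state, and asset path below are illustrative placeholders, not the values used in AzureLoong.py:

```python
# Sketch of an Isaac Lab articulation config with per-group PD gains.
# All names and numbers are illustrative, not taken from this repository.
import isaaclab.sim as sim_utils
from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg

AZURELOONG_EXAMPLE_CFG = ArticulationCfg(
    # Path to the USD asset (see Robots/AzureLoong_shortFeet.usd in the tree above).
    spawn=sim_utils.UsdFileCfg(usd_path="<path-to>/AzureLoong_shortFeet.usd"),
    # Default pose and joint positions used when the robot is reset.
    init_state=ArticulationCfg.InitialStateCfg(pos=(0.0, 0.0, 1.0), joint_pos={".*": 0.0}),
    # Actuator groups: regex expressions select joints by name, each group
    # gets its own stiffness (P gain) and damping (D gain).
    actuators={
        "legs": ImplicitActuatorCfg(
            joint_names_expr=[".*_hip_.*", ".*_knee_.*", ".*_ankle_.*"],
            stiffness=150.0,  # illustrative value
            damping=5.0,      # illustrative value
        ),
    },
)
```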
Environment settings, reward functions, and their scales are defined in env_p2p_s1.py and cfg_p2p_s1.py; cfg_p2p_s2.py is the second-stage training config with additional domain randomization. A sketch of how scaled reward terms combine is shown below.
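The sketch below shows one common way to combine scaled reward terms for a point-to-point task. The term names, scale values, and formulas are illustrative only, not the ones defined in env_p2p_s1.py / cfg_p2p_s1.py:

```python
# Illustrative reward assembly: each term is weighted by a scale and summed
# per environment (batch dimension first). Not the repository's actual rewards.
import torch

EXAMPLE_SCALES = {
    "tracking_pos": 2.0,    # reward for reducing distance to the target point
    "alive": 0.5,           # bonus for staying upright
    "action_rate": -0.01,   # penalty on rapid action changes
}

def compute_reward(robot_pos: torch.Tensor, target_pos: torch.Tensor,
                   actions: torch.Tensor, prev_actions: torch.Tensor) -> torch.Tensor:
    """Combine scaled reward terms for a batch of environments."""
    pos_error = torch.norm(target_pos - robot_pos, dim=-1)
    reward = EXAMPLE_SCALES["tracking_pos"] * torch.exp(-pos_error)
    reward = reward + EXAMPLE_SCALES["alive"]
    reward = reward + EXAMPLE_SCALES["action_rate"] * torch.sum(
        torch.square(actions - prev_actions), dim=-1
    )
    return reward
```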
PPO parameters are configured in rsl_rl_ppo_cfg.py.
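For orientation, a minimal sketch of what an RSL-RL PPO runner config in Isaac Lab typically looks like, assuming the standard Isaac Lab RSL-RL wrapper classes. The class name and all hyperparameter values below are illustrative, not the settings used in rsl_rl_ppo_cfg.py:

```python
# Sketch of an RSL-RL PPO runner config (illustrative values only).
from isaaclab.utils import configclass
from isaaclab_rl.rsl_rl import (
    RslRlOnPolicyRunnerCfg,
    RslRlPpoActorCriticCfg,
    RslRlPpoAlgorithmCfg,
)

@configclass
class WalkP2PExamplePPORunnerCfg(RslRlOnPolicyRunnerCfg):
    num_steps_per_env = 24            # rollout length per environment
    max_iterations = 3000             # number of training iterations
    save_interval = 100               # checkpoint frequency (iterations)
    experiment_name = "walk_p2p_s1"   # log/checkpoint folder name
    empirical_normalization = False
    # Actor-critic network architecture.
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[512, 256, 128],
        critic_hidden_dims=[512, 256, 128],
        activation="elu",
    )
    # PPO algorithm hyperparameters.
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.005,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-3,
        schedule="adaptive",
        gamma=0.99,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )
```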
Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer. https://github.com/roboterax/humanoid-gym