
nvidia-isaac/WBC-AGILE


AGILE: A Generic Isaac-Lab-based Engine for humanoid loco-manipulation learning

Overview

AGILE provides a comprehensive reinforcement learning framework for training whole-body control policies with validated sim-to-real transfer capabilities. Built on NVIDIA Isaac Lab, this toolkit enables researchers and practitioners to develop loco-manipulation behaviors for humanoid robots.

Paper

Documentation

Demo Videos

Each task below has paired Sim and Real recordings:

  • Booster T1 – Stand-Up
  • Booster T1 – Velocity Tracking
  • Unitree G1 – Velocity-Height Tracking
  • Unitree G1 – Sit-Down / Stand-Up
  • Unitree G1 – Teleoperation
  • Unitree G1 – Dancing

Key Features

  • Multi-Robot Support: Validated on Booster T1 and Unitree G1 with sim-to-real transfer
  • Teacher-Student Distillation: Train with privileged observations, distill to deployable student policies
  • Self-Contained Tasks: Each task config is a single file; MDP term functions are shared via a common library
  • Evaluation Framework: Random rollouts, deterministic scenarios, motion metrics, HTML reports, W&B integration
  • Sim-to-MuJoCo Transfer: Generic framework for cross-simulator policy validation
  • Remote Training: OSMO workflow support for cluster-based training, evaluation, and sweeps
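The teacher-student distillation idea above can be sketched as follows. This is a minimal illustration, not AGILE's actual implementation: the dimensions, the linear policies, and the plain behavior-cloning loss are all assumptions chosen for brevity. The teacher observes privileged state that the deployable student never sees; the student is regressed onto the teacher's actions.

```python
import numpy as np

# Illustrative dimensions (assumptions, not AGILE's actual config).
OBS_DIM, PRIV_DIM, ACT_DIM = 8, 4, 3

rng = np.random.default_rng(0)

# The teacher sees regular + privileged observations; the student sees
# only the regular ones. Both are linear policies for brevity.
W_teacher = rng.normal(size=(ACT_DIM, OBS_DIM + PRIV_DIM))
W_student = np.zeros((ACT_DIM, OBS_DIM))

def distill_step(obs, priv, lr=0.1):
    """One behavior-cloning step: regress student actions onto teacher actions."""
    global W_student
    target = W_teacher @ np.concatenate([obs, priv])  # teacher action (label)
    pred = W_student @ obs                            # student action
    err = pred - target
    # Gradient of 0.5 * ||err||^2 w.r.t. W_student is the outer product err ⊗ obs.
    W_student -= lr * np.outer(err, obs)
    return 0.5 * float(err @ err)

losses = [distill_step(rng.normal(size=OBS_DIM), rng.normal(size=PRIV_DIM))
          for _ in range(200)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The residual loss never reaches zero: the part of the teacher's action driven by privileged observations is, by construction, unrecoverable from the student's inputs, which is exactly why the student must be trained on deployable observations from the start.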

Quick Start

Prerequisites: Isaac Lab v2.3.2 with Isaac Sim 5.1.

# Install AGILE
export ISAACLAB_PATH=/path/to/IsaacLab
./scripts/setup/install_deps_local.sh

# Train a velocity tracking policy
python scripts/train.py --task Velocity-T1-v0 --num_envs 2048 --headless

# Evaluate the trained policy
python scripts/eval.py --task Velocity-T1-v0 --num_envs 32 --checkpoint <path>
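To make the evaluation step concrete, the sketch below aggregates a motion metric over random rollouts in the spirit of the command above. Everything here is hypothetical: the metric name, the array shapes, and the synthetic rollouts are illustrative assumptions, not AGILE's evaluation code.

```python
import numpy as np

def velocity_tracking_error(cmd_vel, actual_vel):
    """Mean Euclidean error between commanded and achieved planar base velocity.

    cmd_vel, actual_vel: (num_steps, 2) arrays of (vx, vy) velocities.
    Hypothetical metric for illustration only.
    """
    return float(np.linalg.norm(cmd_vel - actual_vel, axis=1).mean())

rng = np.random.default_rng(7)
errors = []
for _ in range(32):  # mirror the 32 parallel rollouts from --num_envs 32
    cmd = rng.uniform(-1.0, 1.0, size=(500, 2))            # fake velocity commands
    actual = cmd + rng.normal(scale=0.05, size=cmd.shape)  # fake tracking noise
    errors.append(velocity_tracking_error(cmd, actual))

print(f"mean tracking error over 32 rollouts: {np.mean(errors):.4f}")
```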

See the full documentation for installation details, training guides, task descriptions, and deployment instructions.

Office Hour and FAQ

We hosted a robotics livestream office hour providing an in-depth walkthrough of the AGILE framework.

Contributing

Please see CONTRIBUTING.md for detailed information on how to contribute to this project.

License


This repository contains code under two different open-source licenses:

BSD 3-Clause License

The reinforcement learning algorithm library located in agile/algorithms/rsl_rl/ is licensed under the BSD 3-Clause License.

  • Copyright holders: ETH Zurich, NVIDIA CORPORATION & AFFILIATES
  • This portion is based on the RSL_RL library developed at ETH Zurich

Apache License 2.0

All other portions of this repository are licensed under the Apache License 2.0.

  • Copyright holder: NVIDIA CORPORATION & AFFILIATES

For complete license terms, see the LICENSE file.

Core Contributors

Huihua Zhao, Rafael Cathomen, Lionel Gulich, Efe Arda Ongan, Michael Lin, Shalin Jain, Wei Liu, Xinghao Zhu, Vishal Kulkarni, Soha Pouya, Yan Chang

Acknowledgments

We would like to acknowledge the following projects from which parts of the code in this repo are derived:

Citation

If you use AGILE in your research, please cite:

@misc{zhao2026agilecomprehensiveworkflowhumanoid,
      title={AGILE: A Comprehensive Workflow for Humanoid Loco-Manipulation Learning},
      author={Huihua Zhao* and Rafael Cathomen* and Lionel Gulich and Wei Liu and Efe Arda Ongan and Michael Lin and Shalin Jain and Soha Pouya and Yan Chang},
      year={2026},
      eprint={2603.20147},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2603.20147},
}
