
BeyondMimic Motion Tracking Code


[Website] [arXiv] [Video]

Overview

BeyondMimic is a versatile humanoid control framework that delivers highly dynamic motion tracking with state-of-the-art motion quality in real-world deployment, along with steerable test-time control via guided diffusion-based controllers.

This repository covers the motion-tracking training stage of BeyondMimic. It should be able to train any motion in the LAFAN1 dataset to a sim-to-real-ready policy without any parameter tuning.

For sim-to-sim and sim-to-real deployment, please refer to the motion_tracking_controller.

Installation

  • Install Isaac Lab v2.1.0 by following the installation guide. We recommend using the conda installation as it simplifies calling Python scripts from the terminal.

  • Clone this repository separately from the Isaac Lab installation (i.e., outside the IsaacLab directory):

# Option 1: SSH
git clone git@github.com:xiaohu-art/whole_body_tracking.git

# Option 2: HTTPS
git clone https://github.com/xiaohu-art/whole_body_tracking.git
  • Pull the robot description files from GCS
# Enter the repository
cd whole_body_tracking
curl -L -o unitree_description.tar.gz https://storage.googleapis.com/qiayuanl_robot_descriptions/unitree_description.tar.gz && \
tar -xzf unitree_description.tar.gz -C source/whole_body_tracking/whole_body_tracking/assets/ && \
rm unitree_description.tar.gz
  • Using the Python interpreter that has Isaac Lab installed, install this library in editable mode:
python -m pip install -e source/whole_body_tracking

Motion Tracking

Motion Preprocessing

Note: The reference motion should be retargeted and use generalized coordinates only.

  • Gather the reference motion datasets (please follow the original licenses). We use the same .csv convention as Unitree's dataset:

    • Unitree-retargeted LAFAN1 Dataset is available on HuggingFace
      hf download lvhaidong/LAFAN1_Retargeting_Dataset --repo-type dataset --local-dir {local_path}
    • Sidekicks are from KungfuBot
    • Cristiano Ronaldo celebration is from ASAP.
    • Balance motions are from HuB
  • Convert the retargeted motions to include maximal-coordinate information (body pose, body velocity, and body acceleration), computed via forward kinematics:

# example
python scripts/csv_to_npz.py --input_file LAFAN1/g1/dance1_subject1.csv --input_fps 30 --output_dir LAFAN1/g1/output --output_name dance1_subject1 --headless

python scripts/replay_npz.py --motion_file LAFAN1/g1/output/dance1_subject1.npz
  • Convert PHC-retargeted .pkl motions to .npz:
python scripts/pkl_to_npz.py --input_file {phc retargeted motion}.pkl --input_fps 30 --output_dir phc-retarget/output --output_name {output_name} --headless

python scripts/replay_npz.py --motion_file phc-retarget/output/{output_name}.npz
  • Process multiple motions from a .pkl file and save the collected simulation data back into the original .pkl file:
python scripts/pkl_to_pkl.py --input_file motions.pkl --input_fps 30 --output_fps 50 --headless

# Replay specific motion keys from the processed pkl file
python scripts/replay_pkl.py --input_file motions.pkl --motion_key dance1_subject1
python scripts/replay_pkl.py --input_file motions.pkl --motion_key dance1_subject1 --loop
python scripts/replay_pkl.py --input_file motions.pkl --list_motions
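The conversion step above augments each motion with body velocities and accelerations. As a rough illustration of how such derivatives can be recovered from a pose sequence by finite differencing (a hedged sketch only; the repository's scripts compute these quantities through the simulator's forward kinematics, and the array layout here is hypothetical):

```python
import numpy as np

def finite_difference_derivatives(body_pos, fps):
    """Approximate per-body velocities and accelerations from a sequence
    of body positions using central finite differences.

    body_pos: (T, B, 3) array of positions for B bodies over T frames.
    Returns (vel, acc), each with the same shape as body_pos.
    """
    dt = 1.0 / fps
    vel = np.gradient(body_pos, dt, axis=0)  # central differences over time
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc

# Toy check: one body moving at a constant 1 m/s along x.
T, fps = 10, 50
pos = np.zeros((T, 1, 3))
pos[:, 0, 0] = np.arange(T) / fps
vel, acc = finite_difference_derivatives(pos, fps)
print(np.allclose(vel[:, 0, 0], 1.0))  # → True
print(np.allclose(acc, 0.0))           # → True
```

Central differences keep the velocity and acceleration estimates time-aligned with the poses, which matters when these quantities are later used as tracking targets.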

Policy Training

  • Train a policy with the following command:
python scripts/rsl_rl/train.py --task=Tracking-Flat-G1-v0 \
--motion_file LAFAN1/g1/output/dance1_subject1.npz \
--headless --logger wandb --log_project_name {project_name} --run_name {run_name}

Policy Evaluation

  • Play the trained policy with the following command:
python scripts/rsl_rl/play.py --task=Tracking-Flat-G1-v0 --num_envs=2 --motion_file LAFAN1/g1/output/dance1_subject1.npz

Code Structure

Below is an overview of the code structure for this repository:

  • source/whole_body_tracking/whole_body_tracking/tasks/tracking/mdp This directory contains the atomic functions that define the MDP for BeyondMimic. Below is a breakdown of the functions:

    • commands.py Command library to compute relevant variables from the reference motion, current robot state, and error computations. This includes pose and velocity error calculation, initial state randomization, and adaptive sampling.

    • rewards.py Implements the DeepMimic reward functions and smoothing terms.

    • events.py Implements domain randomization terms.

    • observations.py Implements observation terms for motion tracking and data collection.

    • terminations.py Implements early terminations and timeouts.

  • source/whole_body_tracking/whole_body_tracking/tasks/tracking/tracking_env_cfg.py Contains the environment (MDP) hyperparameters configuration for the tracking task.

  • source/whole_body_tracking/whole_body_tracking/tasks/tracking/config/g1/agents/rsl_rl_ppo_cfg.py Contains the PPO hyperparameters for the tracking task.

  • source/whole_body_tracking/whole_body_tracking/robots Contains robot-specific settings, including armature parameters, joint stiffness/damping calculation, and action scale calculation.

  • scripts Includes utility scripts for preprocessing motion data, training policies, and evaluating trained policies.

This structure is designed to ensure modularity and ease of navigation for developers expanding the project.
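For intuition, DeepMimic-style tracking rewards such as those in rewards.py follow the canonical exponential-kernel form exp(-||e||² / σ²). A minimal, hypothetical sketch (the actual reward terms, weights, and σ values are defined in this repository's configuration, not here):

```python
import numpy as np

def exp_kernel(error, sigma):
    """DeepMimic-style tracking reward: exp(-||e||^2 / sigma^2).
    Returns 1.0 for perfect tracking and decays smoothly toward 0
    as the tracking error grows; sigma sets the tolerance."""
    return float(np.exp(-np.sum(np.square(error)) / sigma**2))

# Toy example with a hypothetical 29-DoF joint-position error vector.
perfect = exp_kernel(np.zeros(29), sigma=0.5)
noisy = exp_kernel(np.full(29, 0.1), sigma=0.5)
print(perfect)           # → 1.0
print(noisy < perfect)   # → True
```

The kernel's bounded (0, 1] range makes it easy to combine several tracking terms multiplicatively or as a weighted sum without any one term dominating.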
