This is the official implementation of GentleHumanoid. For more details, please check our project page.
This repo provides the codebase for deploying GentleHumanoid policies in both simulation and on real robots. For the training code and data, please check here.
GentleHumanoid learns a universal whole-body motion control policy with upper-body compliance and adjustable force limits, allowing smooth, stable, and safe interactions with humans and objects.
Key Features:
- Compliance: Coordinated responses across the shoulder, elbow, and wrist for adaptive motion.
- Unified interaction modeling: Handles both resistive and human-guided contact forces.
- Safety-aware control: Supports tunable force thresholds to ensure safe human–robot interaction.
- Universal and robust: Demonstrated in simulation and on the Unitree G1, generalizing across diverse motions and interactions.
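The adjustable force limit can be pictured as a simple clamp on the estimated interaction force. This is a minimal illustrative sketch only, not the repo's actual implementation; the function name, the 10 N default, and where the clamp sits in the pipeline are assumptions based on the description above:

```python
import numpy as np

def clamp_interaction_force(force_xyz, force_threshold=10.0):
    """Cap the estimated interaction force at a tunable threshold (newtons),
    preserving its direction. Hypothetical sketch -- the real policy's force
    handling may differ."""
    force_xyz = np.asarray(force_xyz, dtype=float)
    magnitude = np.linalg.norm(force_xyz)
    if magnitude <= force_threshold:
        return force_xyz
    # Keep the direction, cap the magnitude at the threshold.
    return force_xyz * (force_threshold / magnitude)

# Example: a 30 N push along x is capped to 10 N along the same direction.
capped = clamp_interaction_force([30.0, 0.0, 0.0], force_threshold=10.0)
```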
Try Our Online Demo
online_demo.mp4
- Release sim2sim, sim2real code
- Release pretrained model that can deploy customized motions to G1
- Release training code and data
- Release full pipeline from RGB video to G1 deployment
- Release locomotion and motion-tracking switching module
- More upon request
Clone the repo:
git clone https://github.com/Axellwppr/gentle-humanoid
cd gentle-humanoid
- Create conda environment:
conda create -n gentle python=3.10
conda activate gentle
- Install the Unitree SDK2 Python bindings in the virtual environment (follow the official Unitree guide)
- Install Python deps:
pip install -r requirements.txt
Tested on Ubuntu 22.04 with Python 3.10
- Start the simulator (state publisher + keyboard bridge):
python3 src/sim2sim.py --xml_path assets/g1/g1.xml
Leave this terminal focused so the keyboard mapping works.
- In another terminal launch the high-level controller:
python3 src/deploy.py --net lo --sim2sim
- Flow:
  - Controller waits in zero-torque mode until it receives the simulated state.
  - Press `s` in the sim terminal to move the robot to the default pose.
  - Press `a` in the sim terminal to start the tracking policy.
  - See Motion Switching to replay different motions.
  - Use `u` and `d` to increase/decrease the force threshold (default 10 N).
  - Press `x` to exit gracefully.
You can double-click a link in the simulation window and Ctrl + right-drag to apply an external force to that link.
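The keyboard flow above can be sketched as a small state machine. This is a hypothetical illustration of the control flow, not the repo's actual code; the state names and 1 N step size are assumptions:

```python
class Sim2SimStateMachine:
    """Minimal sketch of the sim2sim keyboard flow described above."""

    def __init__(self, force_threshold=10.0):
        self.state = "zero_torque"              # waits here for the simulated state
        self.force_threshold = force_threshold  # newtons (default 10)

    def on_key(self, key):
        if key == "s" and self.state == "zero_torque":
            self.state = "default_pose"          # move to the default pose
        elif key == "a" and self.state == "default_pose":
            self.state = "tracking"              # start the tracking policy
        elif key == "u":
            self.force_threshold += 1.0          # raise the force threshold
        elif key == "d":
            self.force_threshold = max(0.0, self.force_threshold - 1.0)
        elif key == "x":
            self.state = "exit"                  # exit gracefully
        return self.state

sm = Sim2SimStateMachine()
sm.on_key("s")  # zero torque -> default pose
sm.on_key("a")  # default pose -> tracking
sm.on_key("u")  # force threshold 10 -> 11
```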
sim2sim.mp4
Please read the official documentation before working on the G1.
- Power on G1 and connect to your PC.
- Turn on the remote controller and press L2+R2 to enter debugging mode.
- Run Sim2Real:
python3 src/deploy.py --net <robot_iface> --real
- The state machine matches Sim2Sim, but with physical remote controller input:
  - Zero torque
  - Press `start` → move to default pose
  - Place the robot on the ground
  - Press `A` → run the active policy
  - See Motion Switching to replay different motions.
  - Use the `up` and `down` buttons to increase/decrease the force threshold (default 10 N).
  - Press `select` → exit gracefully
- The tracking policy accepts motion change commands while it is active.
- Open a terminal and run the motion selector:
python3 src/motion_select.py
- Usage tips:
  - Type the motion name or its index (`list` prints the menu).
  - Press Enter on an empty line to resend the previous choice.
  - `r` reloads the YAML file if you edit it; `q` exits the selector.
- Selection rules:
  - The policy only starts a new motion when the current clip has finished and the robot is in the `default` clip (or you explicitly request `default`).
  - Sending `default` always fades back to the idle pose.
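The selection rules above can be condensed into a short predicate. This is hypothetical logic for illustration only; the function name, arguments, and the `"wave"` clip name are assumptions:

```python
def can_start_motion(requested, current_clip, clip_finished):
    """Sketch of the motion-selection rule described above: 'default' is
    always accepted; any other clip only starts once the current clip has
    finished and the robot is back in the 'default' clip."""
    if requested == "default":
        return True  # sending default always fades back to the idle pose
    return current_clip == "default" and clip_finished

# A new clip is rejected mid-motion, but accepted from the idle default clip.
mid_motion = can_start_motion("wave", current_clip="wave", clip_finished=False)
from_idle = can_start_motion("wave", current_clip="default", clip_finished=True)
```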
selection.mp4
- Visualization is built upon MuJoCo and MuJoCo WASM.
- RL framework from active-adaptation.
- Sim2Real implementation is based on Stanford-TML/real_g1.
- Motion retargeting uses GMR.
- SMPL-X motion estimation from video uses PromptHMR.
- Training datasets include AMASS, InterX, and LAFAN.