This repository documents a PI0.5 (pi05) fine-tuning workflow built on top of the Physical-Intelligence OpenPI framework.
It keeps the original OpenPI installation + PyTorch support steps, and adds the end-to-end fine-tuning & inference commands
used for the pick_and_feed_headmove220 dataset.
Note: This repo focuses on code and configuration. Model checkpoints are not included.
When cloning, make sure to update submodules:
git clone --recurse-submodules git@github.com:yuyangtu/openpi.git
OpenPI uses uv to manage Python dependencies. After installing uv, run:
GIT_LFS_SKIP_SMUDGE=1 uv sync
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e .

NOTE: GIT_LFS_SKIP_SMUDGE=1 is needed to pull LeRobot as a dependency.
If you run into system dependency issues, consider using Docker to simplify setup (see upstream OpenPI “Docker Setup”).
OpenPI provides PyTorch implementations of π₀ and π₀.₅ alongside the original JAX versions.
- Make sure dependencies are up to date:
  uv sync
- Double-check that transformers==4.53.2 is installed:
  uv pip show transformers
- Apply the transformers patches:
  cp -r ./src/openpi/models_pytorch/transformers_replace/* .venv/lib/python3.11/site-packages/transformers/

These patches overwrite several files in transformers to enable:
- AdaRMS support
- correct activation precision control
- using the KV cache without updating it
WARNING (important): With the default uv link mode (hardlink), this may permanently affect the transformers library in your uv cache.
To fully undo: uv cache clean transformers.
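The cp -r step above overlays the patched files onto the installed package: files present in transformers_replace replace their counterparts, and everything else is left alone. A small stdlib sketch of that overlay behavior (all paths here are placeholders, not the real repo layout):

```python
import shutil
import tempfile
from pathlib import Path

# Build a toy "installed package" and a toy "patch" tree (placeholder paths).
root = Path(tempfile.mkdtemp())
pkg = root / "site-packages" / "transformers"
patch = root / "transformers_replace"
pkg.mkdir(parents=True)
patch.mkdir(parents=True)

(pkg / "modeling.py").write_text("original")
(pkg / "untouched.py").write_text("keep me")
(patch / "modeling.py").write_text("patched")

# dirs_exist_ok=True merges the trees, like `cp -r src/* dst/`:
# patched files overwrite their counterparts, other files survive.
shutil.copytree(patch, pkg, dirs_exist_ok=True)

print((pkg / "modeling.py").read_text())   # patched
print((pkg / "untouched.py").read_text())  # keep me
```

This also illustrates why the hardlink warning matters: the overwrite happens in place, at whatever the destination files point to.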
To finetune in PyTorch, you typically convert a JAX base model checkpoint to PyTorch format first.
uv run examples/convert_jax_model_to_pytorch.py \
--config_name <config name> \
--checkpoint_dir /path/to/jax/base/model \
--output_path /path/to/pytorch/base/model

Then, specify the converted PyTorch model path in your config via pytorch_weight_path.

The same script can also convert other JAX checkpoints (for example, a fine-tuned one):
uv run examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /path/to/jax/checkpoint \
--config_name <config name> \
--output_path /path/to/converted/pytorch/checkpoint

Assume your dataset directory is named:
pick_and_feed_headmove220
This step depends on your machine setup and filesystem layout.
Example (copying from one machine to another):
(base) tu@tams98:~/.cache/huggingface/lerobot$ \
scp -r pick_and_feed_headmove220/ \
tu@tamsgpu6:~/.cache/huggingface/lerobot

Expected location after copying:
~/.cache/huggingface/lerobot/pick_and_feed_headmove220
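The next step computes normalization statistics over the dataset, which are later used to normalize model inputs during training. Conceptually these are per-dimension statistics such as mean and standard deviation — a sketch with made-up values, not the script's actual code:

```python
import statistics

# Made-up action values for one dimension; a real dataset yields one
# such column per state/action dimension.
values = [0.1, 0.4, -0.2, 0.3, 0.0, 0.5, -0.1, 0.2]

mean = statistics.fmean(values)
std = statistics.pstdev(values)

# Training then normalizes inputs with these stats, giving each
# dimension roughly zero mean and unit variance.
normalized = [(v - mean) / std for v in values]
```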
uv run scripts/compute_norm_stats.py \
--config-name pi05_pick_and_feed_headmove220

Example training command using 2 GPUs:
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True,max_split_size_mb:512 \
CUDA_VISIBLE_DEVICES=0,1 \
uv run torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node=2 \
scripts/train_pytorch.py \
pi05_pick_and_feed_headmove220 \
--exp-name=pick_and_feed \
--overwrite

Notes:
- pi05_pick_and_feed_headmove220 is the training config name
- --exp-name controls the experiment subdirectory
- Outputs/checkpoints are saved under checkpoints/
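torchrun starts one worker process per GPU (--nproc_per_node=2 above) and tells each worker who it is via environment variables. A training script typically reads them like this; the defaults cover a plain single-process run:

```python
import os

# Set by torchrun for every worker process.
rank = int(os.environ.get("RANK", 0))              # global worker index
world_size = int(os.environ.get("WORLD_SIZE", 1))  # total number of workers
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # worker index on this machine

# Common pattern: only rank 0 writes logs and checkpoints.
is_main_process = rank == 0
print(rank, world_size, local_rank, is_main_process)
```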
CUDA_VISIBLE_DEVICES=2 uv run scripts/serve_policy.py policy:checkpoint \
--policy.config=pi05_pick_and_feed_headmove220 \
--policy.dir=checkpoints/pi05_pick_and_feed_headmove220/pick_and_feed/12000

Model checkpoints are NOT included in this repository. Train your own, or provide your own converted base weights.
Expected directory structure (example):
checkpoints/
└── pi05_pick_and_feed_headmove220/
└── pick_and_feed/
└── 12000/
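The path passed to --policy.dir follows the pattern checkpoints/<config-name>/<exp-name>/<step>, matching the tree above. A trivial sketch of assembling it (12000 is just the example step from the serving command):

```python
from pathlib import Path

# Components of the checkpoint path used in the serving command above.
config_name = "pi05_pick_and_feed_headmove220"
exp_name = "pick_and_feed"
step = 12000

ckpt_dir = Path("checkpoints") / config_name / exp_name / str(step)
print(ckpt_dir.as_posix())
# checkpoints/pi05_pick_and_feed_headmove220/pick_and_feed/12000
```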
Please refer to the upstream OpenPI repository for license information.