Zero-Shot Vision-Language-Action Model Evaluation in MuJoCo (OpenVLA).
Requires Python 3.10.12.
sudo apt-get install -y python3-venv
git clone https://github.com/ethancpwoo/mujocoarm_VLA.git
cd mujocoarm_VLA
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
Evaluates the zero-shot performance of VLA models on MuJoCo arm environments.
Benchmarks several manipulation tasks, such as picking up a cube, turning, and pushing.
Quantitatively evaluates task success and writes corresponding logs.
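The evaluation described above can be sketched as a rollout loop that queries the policy per step, checks task success, and logs per-task success rates. This is an illustrative sketch only: the `evaluate`, `StubEnv`, and policy interfaces here are hypothetical stand-ins, not the repository's actual API.

```python
# Illustrative zero-shot evaluation loop. All interfaces here (evaluate,
# StubEnv, the policy callable) are hypothetical stand-ins for the repo's
# actual environment and OpenVLA wrappers.
import logging
import random

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vla_eval")

def evaluate(policy, env_factory, tasks, episodes=10, max_steps=200):
    """Run `episodes` rollouts per task and return per-task success rates."""
    results = {}
    for task in tasks:
        successes = 0
        for _ in range(episodes):
            env = env_factory(task)
            obs = env.reset()
            for _ in range(max_steps):
                action = policy(obs, task)       # zero-shot: no fine-tuning
                obs, done, success = env.step(action)
                if done:
                    break
            successes += int(success)
        results[task] = successes / episodes
        log.info("task=%s success_rate=%.2f", task, results[task])
    return results

# Minimal stub environment so the loop runs end to end.
class StubEnv:
    def __init__(self, task):
        self.task, self.t = task, 0
    def reset(self):
        self.t = 0
        return {"image": None}
    def step(self, action):
        self.t += 1
        done = self.t >= 5
        return {"image": None}, done, done and random.random() < 0.5

if __name__ == "__main__":
    # 7-element dummy action, roughly a 7-DoF arm command.
    evaluate(lambda obs, task: [0.0] * 7,
             StubEnv, ["pick_cube", "turn", "push"], episodes=4)
```

The success-rate dictionary this returns is the kind of quantitative summary the logs would capture.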
Fine-tuning VLAs on new data requires heavy compute resources. Exploring parameter-efficient methods such as LoRA could be interesting in the future, given access to A100 GPUs.
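To make the LoRA appeal concrete: instead of updating a full weight matrix W, LoRA trains a low-rank correction B @ A with rank r much smaller than the matrix dimensions, which is why it fits on far less hardware. The NumPy sketch below illustrates the idea only; it is not tied to OpenVLA, and all dimensions and names are made up for illustration.

```python
# Toy illustration of the LoRA idea: freeze W (d_out x d_in) and train only a
# low-rank update B @ A with rank r << min(d_out, d_in). Pure NumPy; the
# dimensions and scaling are illustrative, not OpenVLA's actual shapes.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (init 0)

def lora_forward(x):
    # Base path plus scaled low-rank correction; because B starts at zero,
    # the adapted layer initially matches the frozen one exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # no change before training

full, lora = W.size, A.size + B.size
print(f"trainable params: {lora} vs full fine-tune: {full} "
      f"({100 * lora / full:.1f}%)")
```

Only A and B would receive gradients, so the trainable parameter count drops to a few percent of a full fine-tune, which is the property that makes LoRA attractive when large GPUs are scarce.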
Future work could add different robots and objects, and make the evaluation more customizable.