RL with RoboCasa Benchmark
==========================

.. |huggingface| image:: /_static/svg/hf-logo.svg
   :width: 16px
   :height: 16px
   :class: inline-icon

This document provides a comprehensive guide to reinforcement learning training with the RoboCasa benchmark in the RLinf framework.
RoboCasa is a large-scale robot learning simulation framework focused on manipulation in kitchen environments, featuring diverse kitchen layouts, objects, and manipulation tasks.

RoboCasa combines realistic kitchen environments with diverse manipulation challenges, making it an ideal benchmark for developing generalizable robotic policies.
The main goal is to train vision-language-action models capable of:

1. **Visual Understanding**: Processing RGB images from multiple camera viewpoints.
2. **Language Understanding**: Interpreting natural language task instructions.
3. **Manipulation Skills**: Executing complex kitchen tasks such as pick-and-place, opening/closing doors, and appliance control.
Environment Overview
--------------------

**RoboCasa Simulation Platform**

- **Environment**: RoboCasa Kitchen simulation environment (built on robosuite)
- **Robot**: Panda manipulator on a mobile base (PandaOmron), equipped with a parallel gripper
- **Tasks**: 24 atomic kitchen tasks covering multiple categories (excluding the NavigateKitchen task, which requires moving the base)
- **Observation**: Multi-view RGB images (robot view + wrist camera) plus proprioceptive state
- **Action Space**: 12-dimensional continuous actions

  - 3D arm position delta
  - 3D arm rotation delta
  - 1D gripper control (open/close)
  - 4D base control
  - 1D mode selection (switch between base and arm control)

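As a concrete illustration, a 12-dimensional action vector can be assembled following the ordering above. The exact index layout and sign conventions here are assumptions for illustration; verify them against the RLinf environment wrapper:

.. code-block:: python

   import numpy as np

   # Assemble a 12-D action following the ordering listed above.
   # Index layout and sign conventions are illustrative assumptions,
   # not the authoritative wrapper definition.
   action = np.zeros(12, dtype=np.float32)
   action[0:3] = [0.05, 0.0, -0.02]  # 3D arm position delta (x, y, z)
   action[3:6] = [0.0, 0.0, 0.10]    # 3D arm rotation delta
   action[6] = 1.0                   # gripper control (assumed: 1.0 = close)
   action[7:11] = 0.0                # 4D base control (idle while arm is active)
   action[11] = -1.0                 # mode selection (assumed: -1.0 = arm mode)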
**Task Categories**

RoboCasa provides diverse atomic tasks organized into multiple categories:

*Door Manipulation Tasks*:

- ``OpenSingleDoor``: Open cabinet or microwave door
- ``CloseSingleDoor``: Close cabinet or microwave door
- ``OpenDoubleDoor``: Open double cabinet doors
- ``CloseDoubleDoor``: Close double cabinet doors
- ``OpenDrawer``: Open drawer
- ``CloseDrawer``: Close drawer

*Pick and Place Tasks*:

- ``PnPCounterToCab``: Pick from counter and place into cabinet
- ``PnPCabToCounter``: Pick from cabinet and place on counter
- ``PnPCounterToSink``: Pick from counter and place in sink
- ``PnPSinkToCounter``: Pick from sink and place on counter
- ``PnPCounterToStove``: Pick from counter and place on stove
- ``PnPStoveToCounter``: Pick from stove and place on counter
- ``PnPCounterToMicrowave``: Pick from counter and place in microwave
- ``PnPMicrowaveToCounter``: Pick from microwave and place on counter

*Appliance Control Tasks*:

- ``TurnOnMicrowave``: Turn on microwave
- ``TurnOffMicrowave``: Turn off microwave
- ``TurnOnSinkFaucet``: Turn on sink faucet
- ``TurnOffSinkFaucet``: Turn off sink faucet
- ``TurnSinkSpout``: Turn sink spout
- ``TurnOnStove``: Turn on stove
- ``TurnOffStove``: Turn off stove

*Coffee Making Tasks*:

- ``CoffeeSetupMug``: Set up coffee mug
- ``CoffeeServeMug``: Serve coffee into mug
- ``CoffeePressButton``: Press coffee machine button

**Observation Structure**

- **Base Camera Image** (``base_image``): Robot left-view camera (128×128 RGB)
- **Wrist Camera Image** (``wrist_image``): End-effector view camera (128×128 RGB)
- **Proprioceptive State** (``state``): 16-dimensional vector containing:

  - ``[0:2]`` Robot base position (x, y)
  - ``[2:5]`` Padding zeros
  - ``[5:9]`` End-effector quaternion relative to the base
  - ``[9:12]`` End-effector position relative to the base
  - ``[12:14]`` Gripper joint velocities
  - ``[14:16]`` Gripper joint positions

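The state layout above can be unpacked directly by slicing. A minimal sketch, with variable names that are illustrative rather than framework API:

.. code-block:: python

   import numpy as np

   # Unpack the 16-D proprioceptive state using the slice layout above.
   # Variable names are illustrative, not part of the RLinf API.
   state = np.zeros(16, dtype=np.float32)  # placeholder observation

   base_xy      = state[0:2]    # robot base position (x, y)
   _padding     = state[2:5]    # padding zeros
   ee_quat      = state[5:9]    # end-effector quaternion relative to base
   ee_pos       = state[9:12]   # end-effector position relative to base
   gripper_qvel = state[12:14]  # gripper joint velocities
   gripper_qpos = state[14:16]  # gripper joint positions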
**Data Structure**

- **Images**: Base camera RGB tensor ``[batch_size, 3, 128, 128]`` and wrist camera RGB tensor ``[batch_size, 3, 128, 128]``
- **State**: Proprioceptive state tensor ``[batch_size, 16]``
- **Task Description**: Natural language instructions
- **Actions**: 7-dimensional continuous actions (3D position delta, 3D rotation delta, 1D gripper)
- **Reward**: Sparse reward based on task completion

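For reference, the expected batch shapes can be written out as below; NumPy arrays stand in for the framework's tensors here, and ``batch_size`` is illustrative:

.. code-block:: python

   import numpy as np

   # Expected per-batch shapes, mirroring the documented tensor layout.
   # NumPy arrays stand in for the framework's torch tensors.
   batch_size = 8
   base_image  = np.zeros((batch_size, 3, 128, 128), dtype=np.float32)
   wrist_image = np.zeros((batch_size, 3, 128, 128), dtype=np.float32)
   state       = np.zeros((batch_size, 16), dtype=np.float32)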
Algorithm
---------

**Core Algorithm Components**

1. **PPO (Proximal Policy Optimization)**

   - Advantage estimation with GAE (Generalized Advantage Estimation)
   - Policy clipping with ratio limits
   - Value function clipping
   - Entropy regularization

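The GAE advantage computation can be sketched as follows; the ``gamma`` and ``lam`` values are common illustrative defaults, not necessarily the framework's configuration:

.. code-block:: python

   import numpy as np

   # Generalized Advantage Estimation over one trajectory.
   # gamma (discount) and lam (GAE lambda) are illustrative defaults.
   def compute_gae(rewards, values, last_value, gamma=0.99, lam=0.95):
       advantages = np.zeros_like(rewards)
       gae = 0.0
       values = np.append(values, last_value)  # bootstrap with final value
       for t in reversed(range(len(rewards))):
           delta = rewards[t] + gamma * values[t + 1] - values[t]
           gae = delta + gamma * lam * gae
           advantages[t] = gae
       return advantages

   rewards = np.array([0.0, 0.0, 1.0], dtype=np.float32)  # sparse success reward
   values  = np.array([0.2, 0.4, 0.6], dtype=np.float32)  # critic estimates
   adv = compute_gae(rewards, values, last_value=0.0)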
2. **GRPO (Group Relative Policy Optimization)**

   - For every state/prompt, the policy generates *G* independent actions.
   - The advantage of each action is computed by subtracting the group's mean reward.

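The group-relative advantage amounts to a per-group mean baseline; a minimal sketch, with illustrative success-style rewards:

.. code-block:: python

   import numpy as np

   # GRPO advantage: sample G actions per state, use the group's mean
   # reward as the baseline. Reward values here are illustrative.
   G = 4
   group_rewards = np.array([1.0, 0.0, 1.0, 0.0], dtype=np.float32)
   assert group_rewards.shape == (G,)
   advantages = group_rewards - group_rewards.mean()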
Dependency Installation
-----------------------

**Option 1: Docker Image**

Use the Docker image ``rlinf/rlinf:agentic-rlinf0.1-robocasa`` for the experiment.

**Option 2: Custom Environment**

Install the dependencies directly in your environment by running the following commands:

.. code:: bash

   pip install uv
   bash requirements/install.sh embodied --model openpi --env robocasa
   source .venv/bin/activate

Dataset Download
----------------

.. code:: bash

   python -m robocasa.scripts.download_kitchen_assets  # Caution: assets are around 5 GB

Model Download
--------------

.. code-block:: bash

   # Download the model (choose either method)
   # Method 1: Using git clone
   git lfs install
   git clone https://huggingface.co/RLinf/RLinf-Pi0-RoboCasa

   # Method 2: Using huggingface-hub
   pip install huggingface-hub
   hf download RLinf/RLinf-Pi0-RoboCasa