# VR Teleoperation World Frame to Base Frame Transformation

## Overview

This document describes the implementation of automatic coordinate transformation for VR controller inputs (Quest motion controllers) in the G1 WBC Pink embodiment.

## Problem

VR controllers (like Meta Quest) provide end-effector poses in **world coordinates**. However, the robot's IK controller expects poses relative to the **robot's base frame**. Without this transformation:
- When the robot moves, the hands try to stay at their original world position
- The hands drift away from the robot's body
- Teleoperation becomes unusable

## Solution

We implemented a new action term that automatically transforms wrist poses from world frame to robot base frame before passing them to the IK controller.

## Implementation

### 1. New Action Term: `G1DecoupledWBCPinkWorldFrameAction`

**File**: `isaaclab_arena_g1/g1_env/mdp/actions/g1_decoupled_wbc_pink_world_frame_action.py`

This action term extends `G1DecoupledWBCPinkAction` and adds coordinate transformation in the `process_actions()` method.

**Key method**: `_transform_wrist_poses_to_base_frame()`
- Transforms wrist positions: `wrist_pos_base = R^(-1) * (wrist_pos_world - base_pos)`
- Transforms wrist orientations: `wrist_quat_base = base_quat^(-1) * wrist_quat_world`
- Processes both wrists in batch for efficiency
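The position half of this method can be sketched in plain numpy. The actual implementation operates on batched torch tensors via Isaac Lab utilities; the function names and shapes below are illustrative, showing how stacking both wrists lets one matrix product handle them together:

```python
import numpy as np

def quat_to_rot_matrix(q):
    """Convert a unit (w, x, y, z) quaternion to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def transform_wrist_positions(wrist_pos_world, base_pos, base_quat):
    """Map a (2, 3) stack of [left, right] wrist positions into the base frame.

    Implements wrist_pos_base = R^(-1) * (wrist_pos_world - base_pos) for
    both wrists in a single vectorized expression.
    """
    R = quat_to_rot_matrix(base_quat)
    # Row-wise (p - t) @ R equals R.T @ (p - t), i.e. the inverse rotation.
    return (wrist_pos_world - base_pos) @ R
```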

### 2. Configuration: `G1DecoupledWBCPinkWorldFrameActionCfg`

**File**: `isaaclab_arena_g1/g1_env/mdp/actions/g1_decoupled_wbc_pink_world_frame_action_cfg.py`

Adds a new config option:
```python
transform_to_base_frame: bool = True
```
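As a rough sketch of where this flag lives, a plain dataclass stand-in is shown below; the real class uses Isaac Lab's `@configclass` and inherits its remaining fields from `G1DecoupledWBCPinkActionCfg`, so everything except the field names taken from this document is an assumption:

```python
from dataclasses import dataclass

@dataclass
class G1DecoupledWBCPinkWorldFrameActionCfg:
    # Illustrative stand-in; the real class derives from the base action cfg.
    asset_name: str = "robot"
    transform_to_base_frame: bool = True  # toggle the world->base transform
```

Setting `transform_to_base_frame=False` would restore the original pass-through behavior.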

### 3. Wrapper Action Configuration

**File**: `isaaclab_arena/embodiments/g1/g1.py`

Created a `G1WBCPinkWorldFrameActionCfg` class that wraps `G1DecoupledWBCPinkWorldFrameActionCfg` with proper initialization:
```python
@configclass
class G1WBCPinkWorldFrameActionCfg:
    g1_action: ActionTermCfg = G1DecoupledWBCPinkWorldFrameActionCfg(
        asset_name="robot",
        joint_names=[".*"],
    )
```

This ensures the action term has all required fields (`asset_name` and `joint_names`) set.

### 4. G1 Embodiment Update

**File**: `isaaclab_arena/embodiments/g1/g1.py`

Updated `G1WBCPinkEmbodiment.__init__()` to accept:
```python
use_world_frame_actions: bool = False
```

When `True`, the embodiment uses `G1WBCPinkWorldFrameActionCfg` instead of the standard `G1WBCPinkActionCfg`.

### 5. Environment Auto-Detection

**File**: `isaaclab_arena_environments/galileo_g1_locomanip_pick_and_place_environment.py`

World frame actions are enabled automatically when:
- the teleop device is `motion_controllers` or `openxr`
- AND the embodiment is `g1_wbc_pink`

```python
use_world_frame_actions = (
    args_cli.teleop_device in ["motion_controllers", "openxr"]
    and args_cli.embodiment == "g1_wbc_pink"
)
```

### 6. Dummy Torso Retargeter

**File**: `isaaclab_arena/assets/retargeter_library.py`

Added `DummyTorsoRetargeter`, which returns 3 zeros for the torso orientation (roll, pitch, yaw).

Updated `G1WbcPinkMotionControllersRetargeter` to return:
- Upper body retargeter: 16 dims `[gripper(2), left_wrist(7), right_wrist(7)]`
- Lower body retargeter: 4 dims `[nav_cmd(3), hip_height(1)]`
- Dummy torso retargeter: 3 dims `[torso_rpy(3)]`
- **Total**: 23 dims (matches the G1 WBC Pink action space)
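The assembly of these three outputs can be sketched as a toy concatenation. Only the dimensions above come from this document; the class internals, the `retarget` method name, and `assemble_action` are illustrative stand-ins:

```python
import numpy as np

class DummyTorsoRetargeter:
    """Returns a fixed zero torso orientation (roll, pitch, yaw)."""
    def retarget(self):
        return np.zeros(3)

def assemble_action(upper_body, lower_body, torso_rpy):
    """Concatenate [gripper(2)+wrists(14)] + [nav_cmd(3)+hip_height(1)] + rpy(3)."""
    assert upper_body.shape == (16,) and lower_body.shape == (4,) and torso_rpy.shape == (3,)
    return np.concatenate([upper_body, lower_body, torso_rpy])

# Example with placeholder retargeter outputs.
action = assemble_action(np.ones(16), np.full(4, 2.0), DummyTorsoRetargeter().retarget())
```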

## Usage

### Command Line
```bash
python isaaclab_arena/scripts/imitation_learning/teleop.py \
    --xr \
    --num_env 1 \
    galileo_g1_locomanip_pick_and_place \
    --teleop_device motion_controllers \
    --embodiment g1_wbc_pink
```

The world frame transformation is automatically enabled!

### Programmatic
```python
from isaaclab_arena.embodiments.g1 import G1WBCPinkEmbodiment

# For VR/motion controllers
embodiment = G1WBCPinkEmbodiment(
    use_world_frame_actions=True
)

# For other teleop devices
embodiment = G1WBCPinkEmbodiment(
    use_world_frame_actions=False  # default
)
```

## Technical Details

### Action Layout (23 dimensions)
```
[0:1]   left_hand_state        (0=open, 1=close)
[1:2]   right_hand_state       (0=open, 1=close)
[2:5]   left_wrist_pos         (x, y, z)
[5:9]   left_wrist_quat        (w, x, y, z)
[9:12]  right_wrist_pos        (x, y, z)
[12:16] right_wrist_quat       (w, x, y, z)
[16:19] navigate_cmd           (x, y, angular_z)
[19:20] base_height
[20:23] torso_orientation_rpy
```
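Downstream code that indexes this vector is easy to get wrong by one; a small helper that names the slices can help. The bounds below are copied from the layout, while the dictionary and function names are illustrative:

```python
import numpy as np

# Slice bounds copied from the 23-dim action layout above.
LAYOUT = {
    "left_hand_state": slice(0, 1),
    "right_hand_state": slice(1, 2),
    "left_wrist_pos": slice(2, 5),
    "left_wrist_quat": slice(5, 9),
    "right_wrist_pos": slice(9, 12),
    "right_wrist_quat": slice(12, 16),
    "navigate_cmd": slice(16, 19),
    "base_height": slice(19, 20),
    "torso_orientation_rpy": slice(20, 23),
}

def split_action(action):
    """Split a 23-dim G1 WBC Pink action vector into named fields."""
    assert action.shape[-1] == 23
    return {name: action[..., s] for name, s in LAYOUT.items()}
```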

### Transformation Math

**Position transformation**:
```
wrist_pos_translated = wrist_pos_world - robot_base_pos
wrist_pos_base = quat_apply_inverse(robot_base_quat, wrist_pos_translated)
```

**Orientation transformation**:
```
robot_base_quat_inv = quat_inv(robot_base_quat)
wrist_quat_base = quat_mul(robot_base_quat_inv, wrist_quat_world)
```

Both wrists are processed in batch for efficiency.
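The two steps above can be realized with standard quaternion operations. A self-contained numpy sketch follows (the real implementation uses Isaac Lab's torch-based `quat_apply_inverse`, `quat_inv`, and `quat_mul` utilities; `world_to_base` is an illustrative name):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_apply_inverse(q, v):
    """Rotate vector v by the inverse of unit quaternion q: q^-1 * v * q."""
    qv = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(quat_inv(q), qv), q)[1:]

def world_to_base(wrist_pos_w, wrist_quat_w, base_pos, base_quat):
    """Transform one wrist pose from world frame to the robot base frame."""
    pos_b = quat_apply_inverse(base_quat, wrist_pos_w - base_pos)
    quat_b = quat_mul(quat_inv(base_quat), wrist_quat_w)
    return pos_b, quat_b
```

A sanity check: if the base is rotated 90 degrees about z, a point one meter ahead of the base in world x ends up at negative y in the base frame, and a wrist orientation equal to the base orientation maps to the identity quaternion.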

## VR Camera Viewport Configuration

The Quest VR headset viewport automatically follows the robot's first-person view through proper XR configuration.

### G1 Embodiment XR Config

**File**: `isaaclab_arena/embodiments/g1/g1.py`

The G1 embodiment's XR configuration is set up as follows:
```python
self.xr: XrCfg = XrCfg(
    anchor_pos=(0.0, 0.0, -1.0),
    anchor_rot=(0.70711, 0.0, 0.0, -0.70711),
    anchor_prim_path="/World/envs/env_0/Robot/pelvis",  # Track the robot's pelvis
    fixed_anchor_height=True,  # Keep height fixed for comfort
)
```

### Motion Controllers Device Config

**File**: `isaaclab_arena/assets/device_library.py`

The motion controllers device automatically:
1. Retrieves the XR config from the embodiment
2. Sets the anchor rotation mode to `FOLLOW_PRIM_SMOOTHED` for smooth camera following
3. Passes it to `OpenXRDeviceCfg`

```python
def get_device_cfg(self, retargeters, embodiment) -> OpenXRDeviceCfg:
    xr_cfg = embodiment.get_xr_cfg()
    xr_cfg.anchor_rotation_mode = XrAnchorRotationMode.FOLLOW_PRIM_SMOOTHED
    return OpenXRDeviceCfg(
        retargeters=retargeters,
        sim_device=self.sim_device,
        xr_cfg=xr_cfg,  # Camera follows the robot
    )
```

**Result**: The VR headset viewport now tracks the robot's pelvis, rotating smoothly as the robot moves and turns, providing a natural first-person view.

## Benefits

1. **Automatic**: No manual coordinate transformation needed
2. **Clean architecture**: A proper action term instead of ad-hoc padding
3. **Efficient**: Batch processing of both wrists
4. **Configurable**: Can be toggled on/off via config
5. **Auto-detection**: The environment automatically enables it for VR devices
6. **First-person view**: The VR camera follows the robot's pelvis for immersive teleoperation

## Files Changed

1. `isaaclab_arena_g1/g1_env/mdp/actions/g1_decoupled_wbc_pink_world_frame_action.py` (new)
2. `isaaclab_arena_g1/g1_env/mdp/actions/g1_decoupled_wbc_pink_world_frame_action_cfg.py` (new)
3. `isaaclab_arena_g1/g1_env/mdp/actions/__init__.py` (updated exports)
4. `isaaclab_arena/embodiments/g1/g1.py` (added `G1WBCPinkWorldFrameActionCfg` wrapper and `use_world_frame_actions` parameter)
5. `isaaclab_arena_environments/galileo_g1_locomanip_pick_and_place_environment.py` (auto-detection logic)
6. `isaaclab_arena/assets/retargeter_library.py` (added `DummyTorsoRetargeter`)
7. `isaaclab_arena/scripts/imitation_learning/teleop.py` (removed dimension padding hack)