
Commit ceda6f4

Support Quest teleop for G1 locomanipulation example (#350)

Support Quest 3 for loco-manipulation teleop and record/replay.

- What was the reason for the change? Support Quest 3 teleoperation for the g1_pink loco-manipulation example. This allows proper data recording for locomanipulation.
- What has been changed?

  - Adds OpenXR + g1_pink teleop support.
  - Adds documentation for locomanipulation data recording with Quest.

1 parent 8563365 commit ceda6f4

File tree

16 files changed

+745
-60
lines changed


docker/run_docker.sh

Lines changed: 3 additions & 0 deletions

@@ -7,6 +7,9 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 
 WORKDIR="/workspaces/isaaclab_arena"
 
+# Default OpenXR directory shared with CloudXR runtime (lives in IsaacLab submodule)
+OPENXR_HOST_DIR="./submodules/IsaacLab/openxr"
+
 # Default mount directory on the host machine for the datasets
 DATASETS_HOST_MOUNT_DIRECTORY="$HOME/datasets"
 # Default mount directory on the host machine for the models
(Two more files with 3 additions & 0 deletions each; their diffs are not expanded in this view.)

docs/pages/example_workflows/locomanipulation/index.rst

Lines changed: 10 additions & 8 deletions

@@ -1,7 +1,7 @@
 G1 Loco-Manipulation Box Pick and Place Task
 ============================================
 
-This example demonstrates the complete workflow for the **G1 loco-manipulation box pick and place task** in Isaac Lab - Arena, covering environment setup and validation, data generation, policy post-training, and closed-loop evaluation.
+This example demonstrates the complete workflow for the **G1 loco-manipulation box pick and place task** in Isaac Lab - Arena, covering environment setup and validation, teleoperation data collection (OpenXR with Meta Quest 3), data generation, policy post-training, and closed-loop evaluation.
 
 .. image:: ../../../images/g1_galileo_arena_box_pnp_locomanip.gif
    :align: center
@@ -56,7 +56,7 @@ including lower body locomotion, squatting, and bimanual manipulation.
 Workflow
 --------
 
-This tutorial covers the pipeline between creating an environment, generating training data,
+This tutorial covers the pipeline between creating an environment, collecting teleoperation demonstrations, generating training data,
 fine-tuning a policy (GR00T N1.6), and evaluating the policy in closed-loop.
 A user can follow the whole pipeline, or can start at any intermediate step
 by downloading the pre-generated output of the preceding step(s), which we provide
@@ -90,16 +90,18 @@ Workflow Steps
 Follow the following steps to complete the workflow:
 
 - :doc:`step_1_environment_setup`
--  :doc:`step_2_data_generation`
--  :doc:`step_3_policy_training`
--  :doc:`step_4_evaluation`
+- :doc:`step_2_teleoperation`
+- :doc:`step_3_data_generation`
+- :doc:`step_4_policy_training`
+- :doc:`step_5_evaluation`
 
 
 .. toctree::
    :maxdepth: 1
    :hidden:
 
    step_1_environment_setup
-   step_2_data_generation
-   step_3_policy_training
-   step_4_evaluation
+   step_2_teleoperation
+   step_3_data_generation
+   step_4_policy_training
+   step_5_evaluation
New file

Lines changed: 176 additions & 0 deletions

Teleoperation Data Collection
-----------------------------

This workflow covers collecting demonstrations for the G1 loco-manipulation task using a **Meta Quest 3** headset, streamed via **NVIDIA CloudXR**.

This workflow requires several components to run:

* **NVIDIA CloudXR Runtime**: Runs in a Docker container on your workstation and streams the Isaac Lab simulation to a compatible XR device. See the `CloudXR Runtime documentation <https://docs.nvidia.com/cloudxr-sdk/latest/usr_guide/cloudxr_runtime/index.html>`_.
* **Arena Docker container**: Runs the Isaac Lab simulation and recording.
* **CloudXR.js WebServer**: Meta Quest 3 and Pico 4 Ultra connect to Isaac Lab via the CloudXR.js WebXR client. See `CloudXR.js (Early Access) <https://docs.nvidia.com/cloudxr-sdk/latest/usr_guide/cloudxr_js/index.html>`_.

.. note::

   You must join the **NVIDIA CloudXR Early Access Program** to obtain the CloudXR runtime and client:

   * **CloudXR Early Access**: `Join the NVIDIA CloudXR SDK Early Access Program <https://developer.nvidia.com/cloudxr-sdk-early-access-program/join>`_

   Follow the steps in the confirmation email to get access to the CloudXR runtime container and client resources.


Step 1: Start the CloudXR Runtime Container
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Download the **CloudXR Runtime Container** from NVIDIA NGC. Version **6.0.1** has been tested.

   .. code-block:: bash

      docker login nvcr.io
      docker pull nvcr.io/nvidia/cloudxr-runtime-early-access:6.0.1-webrtc

#. In a new terminal, start the CloudXR runtime container:

   .. code-block:: bash

      cd submodules/IsaacLab
      mkdir -p openxr

      docker run -dit --rm --name cloudxr-runtime \
          --user $(id -u):$(id -g) \
          --gpus=all \
          -e "ACCEPT_EULA=Y" \
          --mount type=bind,src=$(pwd)/openxr,dst=/openxr \
          --network host \
          nvcr.io/nvidia/cloudxr-runtime-early-access:6.0.1-webrtc


Step 2: Start Arena Teleop
^^^^^^^^^^^^^^^^^^^^^^^^^^

In another terminal, start the Arena Docker container and launch the teleop session to verify the pipeline:

:docker_run_default:

.. code-block:: bash

   python isaaclab_arena/scripts/imitation_learning/teleop.py \
       --enable_pinocchio \
       galileo_g1_locomanip_pick_and_place \
       --teleop_device openxr

Start the AR/XR session from the **AR** tab in the application window.

.. figure:: ../../../images/locomanip_arena_server.png
   :width: 100%
   :alt: Arena teleop with XR running (stereoscopic view and OpenXR settings)
   :align: center

   Arena teleop session with XR running. Stereoscopic view (left) and OpenXR settings in the AR tab (right).


Step 3: Build and Run the CloudXR.js WebServer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Download the `CloudXR.js with samples <https://catalog.ngc.nvidia.com/orgs/nvidia/resources/cloudxr-js-early-access?version=6.0.1-beta>`_, unzip it, and follow the included guide.

#. Start the CloudXR.js WebServer:

   .. code-block:: bash

      cd cloudxr-js-early-access_6.0.1-beta/release
      docker build -t cloudxr-isaac-sample --build-arg EXAMPLE_NAME=isaac .
      docker run -d --name cloudxr-isaac-sample -p 8080:80 -p 8443:443 cloudxr-isaac-sample

   You can test from a local browser at ``http://localhost:8080/`` before connecting the Quest.

.. figure:: ../../../images/locomanip_cloudxr_js.png
   :width: 100%
   :alt: CloudXR.js Isaac Lab Teleop Client (connection and debug settings)
   :align: center

   CloudXR.js Isaac Lab Teleop Client. Configure server IP and port, then press **Connect**. Adjust stream resolution and reference space in Debug Settings if needed.


Step 4: Set Up and Connect from Meta Quest 3
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. On the host machine, update the firewall to allow traffic on these ports:

   .. code-block:: bash

      sudo ufw allow 49100/tcp
      sudo ufw allow 47998/udp

#. **Network**: Use a router with Wi-Fi 6 (5 GHz band). Connect the server via Ethernet and the Quest to the same router's Wi-Fi. See the `CloudXR Network Setup <https://docs.nvidia.com/cloudxr-sdk/latest/requirement/network_setup.html>`_ guide.

#. **Quest configuration**: On the Quest headset, configure insecure origins for HTTP mode (one-time setup):

   * Open the Meta Quest 3 browser and go to ``chrome://flags``.
   * Search for ``insecure``, find ``unsafely-treat-insecure-origin-as-secure``, and set it to **Enabled**.
   * In the text field, enter your Arena host URL: ``http://<server-ip>:8080``.
   * Tap outside the text field; a **Relaunch** button appears. Tap **Relaunch** to apply.
   * After relaunch, return to ``chrome://flags`` and confirm the flag is still enabled and the URL is saved.

#. **Connect**: On the Quest, open the browser and go to ``http://<server-ip>:8080``. In Settings, enter the server IP, then press **Connect**. You should see the simulation and be able to teleoperate.

   The browser will prompt for WebXR permissions the first time. Select **Allow**; the immersive session starts after permission is granted.

#. **Teleoperation Controls**:

   * **Left joystick**: Move the body forward/backward/left/right.
   * **Right joystick**: Squat (push down) and rotate the torso (left/right).
   * **Controllers**: Move the end-effector (EE) targets for the arms.


Step 5: Record with Quest 3
^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. **Recording**: When ready to collect data, run the recording script from the Arena container:

   .. code-block:: bash

      export DATASET_DIR=/datasets/isaaclab_arena/locomanipulation_tutorial
      mkdir -p $DATASET_DIR

      # Record demonstrations with OpenXR teleop
      python isaaclab_arena/scripts/imitation_learning/record_demos.py \
          --device cpu \
          --enable_pinocchio \
          --dataset_file $DATASET_DIR/arena_g1_locomanipulation_dataset_recorded.hdf5 \
          --num_demos 10 \
          --num_success_steps 2 \
          galileo_g1_locomanip_pick_and_place \
          --teleop_device openxr

#. Complete the task for each demo and reset between demos. The script saves successful runs to the HDF5 file above.

.. hint::

   Suggested sequence for the task:

   #. Align your body with the robot.
   #. Walk forward (left joystick forward).
   #. Grab the box (controllers).
   #. Walk backward (left joystick back).
   #. Turn toward the bin (right joystick).
   #. Walk forward to the bin.
   #. Squat (right joystick down).
   #. Place the box in the bin (controllers).

.. image:: ../../../images/g1_galileo_arena_box_pnp_locomanip.gif
   :align: center
   :height: 400px


Step 6: Replay Recorded Demos (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To replay the recorded demos:

.. code-block:: bash

   # Replay from the recorded HDF5 dataset
   python isaaclab_arena/scripts/imitation_learning/replay_demos.py \
       --device cpu \
       --dataset_file $DATASET_DIR/arena_g1_locomanipulation_dataset_recorded.hdf5 \
       --enable_pinocchio \
       galileo_g1_locomanip_pick_and_place
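Before moving on to annotation, it can be useful to sanity-check the recorded file offline. A minimal sketch with ``h5py``, assuming the usual Isaac Lab Mimic layout of one ``demo_*`` group per episode under ``data``; both the helper name and the layout are assumptions, not part of this commit:

```python
# Hypothetical helper: count recorded demos in an HDF5 dataset.
# Assumes the Isaac Lab Mimic layout: one "demo_*" group per episode under "data".
import h5py


def count_demos(dataset_path: str) -> int:
    """Return the number of demo_* groups stored under /data."""
    with h5py.File(dataset_path, "r") as f:
        if "data" not in f:
            return 0
        return sum(1 for key in f["data"] if key.startswith("demo_"))
```

If the count does not match the number of successful runs you completed, re-check that each demo actually reached the success criterion before resetting.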
New file

Lines changed: 124 additions & 0 deletions

Data Generation
---------------

This workflow covers annotating and generating the demonstration dataset using
`Isaac Lab Mimic <https://isaac-sim.github.io/IsaacLab/main/source/overview/imitation-learning/teleop_imitation.html>`_.

**Docker Container**: Base (see :doc:`../../quickstart/docker_containers` for more details)

:docker_run_default:


Step 1: Annotate Demonstrations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This step describes how to annotate the demonstrations recorded in the preceding step
so they can be used by Isaac Lab Mimic. For more details on Mimic annotation, see the
`Isaac Lab Mimic documentation <https://isaac-sim.github.io/IsaacLab/main/source/overview/imitation-learning/teleop_imitation.html#annotate-the-demonstrations>`_.

To skip this step, you can download the pre-annotated dataset from Hugging Face as described below.

.. dropdown:: Download Pre-annotated Dataset (skip annotation step)
   :animate: fade-in

   These commands download the pre-annotated dataset so that the annotation step can be skipped:

   .. code-block:: bash

      hf download \
          nvidia/Arena-G1-Loco-Manipulation-Task \
          arena_g1_loco_manipulation_dataset_annotated.hdf5 \
          --repo-type dataset \
          --local-dir $DATASET_DIR

To start the annotation process, run the following command:

.. code-block:: bash

   python isaaclab_arena/scripts/imitation_learning/annotate_demos.py \
       --device cpu \
       --input_file $DATASET_DIR/arena_g1_locomanipulation_dataset_recorded.hdf5 \
       --output_file $DATASET_DIR/arena_g1_locomanipulation_dataset_annotated.hdf5 \
       --enable_pinocchio \
       --mimic \
       galileo_g1_locomanip_pick_and_place

Follow the instructions shown on the CLI to complete the annotation.


Step 2: Generate Augmented Dataset
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Isaac Lab Mimic generates additional demonstrations from the annotated demonstrations
by applying object and trajectory transformations to introduce data variations.
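To give some intuition for what such a transformation does, here is an illustrative toy (not the actual Isaac Lab Mimic implementation): a recorded sequence of planar waypoints is re-anchored to a new object pose by a rigid 2D transform, so one demonstration yields many pose-varied ones.

```python
# Toy illustration of trajectory re-anchoring (NOT the actual Mimic code).
# Each (x, y) waypoint is rotated by dtheta about the origin, then
# translated by (dx, dy), matching a shifted/rotated object pose.
import math


def transform_waypoints(waypoints, dx, dy, dtheta):
    """Apply a rigid SE(2) transform to a list of (x, y) waypoints."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in waypoints]


# Original demo approached the object along the x-axis...
demo = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
# ...now the object is shifted 0.2 m in y and rotated 90 degrees.
new_demo = transform_waypoints(demo, 0.0, 0.2, math.pi / 2)
```

The real pipeline additionally retimes and stitches subtask segments, which is why annotation (marking subtask boundaries) is a prerequisite.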
This step can be skipped by downloading the pre-generated dataset from Hugging Face as described below.

.. dropdown:: Download Pre-generated Dataset (skip data generation step)
   :animate: fade-in

   These commands download the pre-generated dataset so that the data generation step can be skipped:

   .. code-block:: bash

      hf download \
          nvidia/Arena-G1-Loco-Manipulation-Task \
          arena_g1_loco_manipulation_dataset_generated.hdf5 \
          --repo-type dataset \
          --local-dir $DATASET_DIR

Generate the dataset:

.. code-block:: bash

   # Generate 100 demonstrations
   python isaaclab_arena/scripts/imitation_learning/generate_dataset.py \
       --headless \
       --enable_cameras \
       --mimic \
       --input_file $DATASET_DIR/arena_g1_loco_manipulation_dataset_annotated.hdf5 \
       --output_file $DATASET_DIR/arena_g1_loco_manipulation_dataset_generated.hdf5 \
       --generation_num_trials 100 \
       --device cpu \
       galileo_g1_locomanip_pick_and_place \
       --object brown_box \
       --embodiment g1_wbc_pink

Data generation takes 1-4 hours depending on your CPU/GPU.
You can remove ``--headless`` to visualize the process during data generation.


Step 3: Validate Generated Dataset (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To visualize the generated data, replay the dataset with the following command:

.. code-block:: bash

   python isaaclab_arena/scripts/imitation_learning/replay_demos.py \
       --device cpu \
       --enable_cameras \
       --dataset_file $DATASET_DIR/arena_g1_loco_manipulation_dataset_generated.hdf5 \
       galileo_g1_locomanip_pick_and_place \
       --object brown_box \
       --embodiment g1_wbc_pink

You should see the robot successfully perform the task.

.. figure:: ../../../images/g1_locomanip_pick_and_place_task_view.png
   :width: 100%
   :alt: G1 Locomanip Pick and Place Task View
   :align: center

   Isaac Lab Arena G1 Locomanip Pick and Place Task View

.. note::

   The dataset was generated with CPU physics, so the replay uses ``--device cpu`` to ensure reproducibility.

docs/pages/example_workflows/locomanipulation/step_3_policy_training.rst renamed to docs/pages/example_workflows/locomanipulation/step_4_policy_training.rst

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ Once inside the container, set the dataset and models directories.
 export MODELS_DIR=/models/isaaclab_arena/locomanipulation_tutorial
 
 Note that this tutorial assumes that you've completed the
-:doc:`preceding step (Data Generation) <step_2_data_generation>` or downloaded the pre-generated dataset.
+:doc:`preceding step (Data Generation) <step_3_data_generation>` or downloaded the pre-generated dataset.
 
 .. dropdown:: Download Pre-generated Dataset (skip preceding steps)
    :animate: fade-in

docs/pages/example_workflows/locomanipulation/step_4_evaluation.rst renamed to docs/pages/example_workflows/locomanipulation/step_5_evaluation.rst

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ Once inside the container, set the dataset and models directories.
 export MODELS_DIR=/models/isaaclab_arena/locomanipulation_tutorial
 
 Note that this tutorial assumes that you've completed the
-:doc:`preceding step (Policy Training) <step_3_policy_training>` or downloaded the
+:doc:`preceding step (Policy Training) <step_4_policy_training>` or downloaded the
 pre-trained model checkpoint below:
 
 .. dropdown:: Download Pre-trained Model (skip preceding steps)

isaaclab_arena/assets/asset_registry.py

Lines changed: 7 additions & 1 deletion

@@ -133,7 +133,13 @@ def get_teleop_device_cfg(self, device: type["TeleopDeviceBase"], embodiment: ob
         retargeter_key_str = retargeter_registry.convert_tuple_to_str(retargeter_key)
         retargeter = retargeter_registry.get_component_by_name(retargeter_key_str)()
         retargeter_cfg = retargeter.get_retargeter_cfg(embodiment, sim_device=device.sim_device)
-        retargeters = [retargeter_cfg] if retargeter_cfg is not None else []
+        # Handle both a single retargeter and a list of retargeters
+        if isinstance(retargeter_cfg, list):
+            retargeters = retargeter_cfg
+        elif retargeter_cfg is not None:
+            retargeters = [retargeter_cfg]
+        else:
+            retargeters = []
         device_cfg = device.get_device_cfg(retargeters=retargeters, embodiment=embodiment)
         return DevicesCfg(
             devices={
