This document describes robot-side and human-side data collection workflows in this repository.
Plug the G1 camera USB into the G1 host, then SSH to unitree@192.168.123.164 (password: 123).
Copy `deploy_real/server_realsense_zmq_pub.py` to `~` on the G1, then create a dedicated `realsense` conda environment and install its dependencies manually:
```bash
# on local workstation (repo root)
scp deploy_real/server_realsense_zmq_pub.py unitree@192.168.123.164:~/
```

```bash
# on G1 after SSH login
conda create -y -n realsense python=3.10
conda activate realsense
python -m pip install --upgrade pip
python -m pip install pyrealsense2 pyzmq numpy opencv-python rich zmq
```

After the environment is ready, start the camera publisher with:
```bash
# from local workstation
bash scripts/realsense_zmq_pub_g1.sh
```

Start teleoperation first, following teleop.md.
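The camera publisher streams frames over ZeroMQ PUB/SUB. The pattern can be sketched with a self-contained example; note that the loopback address, port 5555, and raw-JPEG message format below are illustrative assumptions, not the publisher's actual wire format:

```python
import zmq

ctx = zmq.Context()

# Publisher side: on the G1 this role is played by server_realsense_zmq_pub.py
# (port 5555 is an assumption for this sketch)
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5555")

# Subscriber side: the workstation process consuming camera frames
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt(zmq.SUBSCRIBE, b"")  # receive every message

frame = b"\xff\xd8fake-jpeg-bytes\xff\xd9"  # stand-in for one JPEG frame

# PUB/SUB has a "slow joiner" race: resend until the subscription is live
poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
received = None
for _ in range(50):
    pub.send(frame)
    if poller.poll(100):  # wait up to 100 ms for a frame
        received = sub.recv()
        break

print(received == frame)  # True once the subscription is established

pub.close(0)
sub.close(0)
ctx.term()
```

The retry loop matters because a PUB socket silently drops messages sent before any subscriber has finished connecting.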
Use the teleop recorder:

```bash
bash scripts/data_record.sh
# sonic channel
bash scripts/data_record.sh --channel sonic
```

Plug the RealSense USB into the workstation, then wear the RealSense.
Start teleoperation first, following teleop.md.
Use the human recorder:

```bash
bash scripts/data_record_human.sh
# sonic channel
bash scripts/data_record_human.sh --channel sonic
```

Both recorders use the same basic controls:
- `r`: start/stop one episode
- `q`: quit recorder
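The `r` key toggles between starting and saving an episode. That toggle can be illustrated with a minimal state sketch (the class and method names here are hypothetical, not the recorder's actual implementation):

```python
class EpisodeToggle:
    """Illustrative sketch of the r-to-start/stop recording state."""

    def __init__(self):
        self.recording = False
        self.episode_idx = 0

    def on_key(self, key: str) -> str:
        if key == "r":
            self.recording = not self.recording
            if self.recording:
                # first press opens a new numbered episode
                self.episode_idx += 1
                return f"started episode_{self.episode_idx:04d}"
            # second press closes and saves it
            return f"saved episode_{self.episode_idx:04d}"
        if key == "q":
            return "quit"
        return "ignored"


t = EpisodeToggle()
print(t.on_key("r"))  # started episode_0001
print(t.on_key("r"))  # saved episode_0001
print(t.on_key("q"))  # quit
```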
By default, both recorders save under:

```
deploy_real/humdex_demonstration/<task_name>/
```

where `<task_name>` is generated as `YYYYMMDD_HHMM_<channel>`.
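The naming pattern can be reproduced in a few lines; the helper name below is hypothetical, shown only to make the timestamp format concrete:

```python
from datetime import datetime


def task_name(channel: str) -> str:
    """Build a YYYYMMDD_HHMM_<channel> task name (illustrative helper)."""
    return f"{datetime.now().strftime('%Y%m%d_%H%M')}_{channel}"


name = task_name("sonic")
print(name)  # e.g. 20250101_1430_sonic
```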
Each episode is saved as:
```
episode_0001/
├── rgb/        # JPEG frames, e.g. 000000.jpg
└── data.json   # per-frame metadata and states/actions
```
Typical per-frame fields in data.json include:
- `idx`, `rgb`, `t_img`, `t_record_ms`
- `state_body`, `action_body`
- `hand_tracking_left/right`
- `action_wuji_qpos_target_left/right`, `state_wuji_hand_left/right`
- `t_action`, `t_state`, `t_action_wuji_hand_left/right`, `t_state_wuji_hand_left/right`
- `body_zmq`, `body_zmq_decoded` (when channel is `sonic`)
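A recorded episode can be read back with a short loader. The sketch below builds a tiny fake episode first so it is runnable anywhere; it assumes `data.json` holds a list of per-frame dicts with `idx` and `rgb` keys (the real schema carries the additional fields listed above and may wrap the list differently):

```python
import json
import os
import tempfile

# Build a tiny fake episode mirroring the recorder's on-disk layout
root = tempfile.mkdtemp()
ep = os.path.join(root, "episode_0001")
os.makedirs(os.path.join(ep, "rgb"))
frames = [
    {"idx": 0, "rgb": "rgb/000000.jpg", "t_img": 0.00},
    {"idx": 1, "rgb": "rgb/000001.jpg", "t_img": 0.05},
]
with open(os.path.join(ep, "data.json"), "w") as f:
    json.dump(frames, f)

# Loader sketch: pair each metadata entry with its image path
with open(os.path.join(ep, "data.json")) as f:
    data = json.load(f)
for frame in data:
    img_path = os.path.join(ep, frame["rgb"])
    print(frame["idx"], os.path.basename(img_path))
# prints:
# 0 000000.jpg
# 1 000001.jpg
```

For a real episode, point `ep` at a directory under `deploy_real/humdex_demonstration/<task_name>/` and decode each JPEG (e.g. with OpenCV) instead of printing its name.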