This repository contains the code and dataset for the manuscript *Real-Time Hand Pose Estimation Using FMCW Radar on Resource-Limited Edge Devices*.
Index Terms— Cross-Modal Supervision, Deep Learning, Edge Computing, FMCW Radar, Hand Pose Estimation, TensorRT
| (a) Poses estimated from FMCW radar signals | (b) Poses estimated from camera signals |
|---|---|
| ![]() | ![]() |

Visual comparison between hand keypoints estimated by the proposed RadarNet model and MediaPipe Hands, including dark scenes. Blue dots are keypoints estimated by the visual model; red dots are from RadarNet.
```shell
git clone https://github.com/thetuantrinh/UWB-Radar-Hand-Pose-Estimation.git
cd UWB-Radar-Hand-Pose-Estimation
```

The original project was developed with Python 3.9.0. We encourage you to use the same Python version for reproducibility by creating a Python 3.9 environment with conda:
```shell
conda create --name HPE python=3.9
conda activate HPE
```

Then install all required libraries:
```shell
pip3 install -r requirements.txt
```

Next, set the dataset path in `scripts/train_hpc.sh` to the absolute path of your dataset.
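As an illustration of the dataset-path setting, the snippet below shows the kind of line to edit inside `scripts/train_hpc.sh`. The variable name `DATA_DIR` is a hypothetical placeholder, not necessarily what the script uses; check the script itself for the actual variable.

```shell
# Inside scripts/train_hpc.sh -- DATA_DIR is a hypothetical variable name
# used here for illustration; use the variable the script actually defines.
DATA_DIR=/absolute/path/to/your/dataset
```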
Before running training scripts, first structure the project by executing:
```shell
bash scripts/structure_project.sh
```

You can modify training parameters directly in `scripts/train_hpc.sh` (they are passed to `train.py`), or simply start training with:

```shell
sbatch scripts/train_hpc.sh
```

After training, you can evaluate the RadarNet model by running:
```shell
bash scripts/eval.sh
```

Wandb is a great tool for experiment tracking and visualization.
Install with pip:
```shell
pip install wandb
```

If you're training on a machine without an internet connection (e.g., an HPC compute node), Wandb will not work online.
To fix this, run in offline mode:
```shell
wandb offline
```

After training, sync all locally saved Wandb logs to the cloud:

```shell
wandb sync your-local-wandb-log-folder/offline-run*
```

👉 Remember to replace `your-local-wandb-log-folder` with the path to your actual Wandb logs directory.
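Alternatively, offline mode can be selected per job via the `WANDB_MODE` environment variable (a standard Wandb setting honored by `wandb.init()`), which is convenient inside batch scripts because it does not change the global `wandb` configuration:

```shell
# Select W&B offline mode for this shell session only; WANDB_MODE is a
# standard wandb environment variable honored by wandb.init().
export WANDB_MODE=offline
# Then launch training as usual, e.g.: sbatch scripts/train_hpc.sh
echo "WANDB_MODE=$WANDB_MODE"
```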
- Training & Testing: AlmaLinux 8.5 (Arctic Sphynx) (NVIDIA DGX A100-SXM4-40GB)
- Inference Deployment: NVIDIA Jetson Nano (Tegra X1, Quad-core ARM Cortex-A57 CPU, 128-core Maxwell GPU)



