This is the codebase for the paper: Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling.
For more information, please also check:
1.) Project website
2.) Video demos
Mobile robots navigating in crowds trained using reinforcement learning are known to suffer performance degradation when faced with out-of-distribution scenarios. We propose that by properly accounting for the uncertainties of pedestrians, a robot can learn safe navigation policies that are robust to distribution shifts. Our method augments agent observations with prediction uncertainty estimates generated by adaptive conformal inference, and it uses these estimates to guide the agent’s behavior through constrained reinforcement learning. The system helps regulate the agent’s actions and enables it to adapt to distribution shifts. In the in-distribution setting, our approach achieves a 96.93% success rate, which is over 8.80% higher than the previous state-of-the-art baselines with over 3.72 times fewer collisions and 2.43 times fewer intrusions into ground-truth human future trajectories. In three out-of-distribution scenarios, our method shows much stronger robustness when facing distribution shifts in velocity variations, policy changes, and transitions from individual to group dynamics. We deploy our method on a real robot, and experiments show that the robot makes safe and robust decisions when interacting with both sparse and dense crowds.
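To make the uncertainty-handling idea above concrete, here is a minimal sketch of the adaptive conformal inference update that this family of methods builds on. This is an illustration only, not the paper's DtACI implementation: the function name, the step size `gamma`, and the toy miscoverage stream are all hypothetical.

```python
# Illustrative adaptive conformal inference (ACI) step for calibrating
# pedestrian-prediction uncertainty. NOT the paper's DtACI code; names
# and the step size `gamma` are hypothetical, for exposition only.

def aci_update(alpha_t, err_t, target_alpha=0.1, gamma=0.05):
    """One ACI step on the miscoverage level alpha.

    err_t = 1 if the true pedestrian position fell outside the conformal
    region at time t (a miscoverage event), else 0. The update nudges the
    long-run miscoverage rate toward `target_alpha`: a miss shrinks alpha
    (widening future regions), coverage grows it (tightening them).
    """
    return alpha_t + gamma * (target_alpha - err_t)

# Toy usage: adapt alpha over a short stream of miscoverage indicators.
alpha = 0.1
for err in [0, 0, 1, 0, 1, 0]:
    alpha = aci_update(alpha, err)
```

The resulting alpha (and the region width it implies) is what would be fed to the agent as an uncertainty estimate.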
08/2025: Training script release.
02/2025: Test & visualization code release.
The code for the ROS2 system demonstrated in our experiments will be made publicly available after proper preparation. Thanks for your attention.
After cloning the project, please:
1.) Install Docker on your host machine:
sudo apt install docker.io
sudo apt-get install -y nvidia-docker2
sudo apt-get install nvidia-container-runtime
2.) Pull the base image
docker pull pytorch/pytorch:2.3.1-cuda12.1-cudnn8-devel
3.) Restart the docker service
sudo systemctl restart docker
4.) Go to your current project folder, and build the docker image:
docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t gen_safe_py10:latest .
5.) Run the docker image by:
docker run --runtime=nvidia -it -p 12345:8888 -v /home/docker_share:/home/ -v $(pwd):/workspace gen_safe_py10:latest /bin/bash
6.) Test the pretrained models with python test.py
7.) Visualize the results generated by the pretrained models with python visualize.py
8.) Tune the parameters in arguments.py and configs/config.py, and train your own model with python train.py
Note: if you face the problem of "Failed to initialize NVML: Unknown Error" inside the container, you can refer to this thread.
1.) baselines: Common tools.
2.) crowd_nav: Configurations for new training and policy behaviors.
3.) crowd_sim: Environments for CrowdNav, implemented hierarchically: CrowdSim → CrowdSimVarNum → CrowdSimPred → CrowdSimPredRealGST. Includes different agent implementations.
4.) dt_aci: Python implementations of DtACI.
5.) gst_updated: Learning-based prediction model GST.
6.) Python-RVO2: ORCA package for collision avoidance.
7.) rl: Networks and algorithms for PPO/PPO-Lagrangian.
8.) trained_models: Pretrained models.
In episodes with the same initialization, the four policies included in trained_models generate very different movements.
1.) Ours:
2.) CrowdNav++:
3.) ORCA:
4.) Social Force:
You can also generate other visualizations yourself by running python visualize.py!
If you find our work useful, please consider citing our paper:
@inproceedings{yao2025towards,
title={Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling},
author={Yao, Jianpeng and Zhang, Xiaopan and Xia, Yu and Roy-Chowdhury, Amit K and Li, Jiachen},
booktitle={Conference on Robot Learning (CoRL)},
year={2025}
}
We sincerely thank the researchers and developers of CrowdNav, CrowdNav++, Gumbel Social Transformer, DtACI, and OmniSafe for their amazing work.