GenSafeNav

This is the codebase for the paper: Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling.

For more information, please also check:

1.) Project website

2.) Video demos

Abstract

Mobile robots that navigate crowds with policies trained by reinforcement learning are known to suffer performance degradation when faced with out-of-distribution scenarios. We propose that by properly accounting for the uncertainties of pedestrians, a robot can learn safe navigation policies that are robust to distribution shifts. Our method augments agent observations with prediction uncertainty estimates generated by adaptive conformal inference, and it uses these estimates to guide the agent's behavior through constrained reinforcement learning. The system helps regulate the agent's actions and enables it to adapt to distribution shifts. In the in-distribution setting, our approach achieves a 96.93% success rate, over 8.80% higher than the previous state-of-the-art baselines, with over 3.72 times fewer collisions and 2.43 times fewer intrusions into ground-truth human future trajectories. In three out-of-distribution scenarios, our method shows much stronger robustness when facing distribution shifts in velocity variations, policy changes, and transitions from individual to group dynamics. We deploy our method on a real robot, and experiments show that it makes safe and robust decisions when interacting with both sparse and dense crowds.
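For intuition, here is a minimal sketch of the online update behind adaptive conformal inference, which produces the per-pedestrian uncertainty radii that augment the observations. The class name, the residual-based nonconformity score, and the hyperparameters are illustrative assumptions, not the exact dt_aci implementation.

import numpy as np

class AdaptiveConformal:
    # Illustrative adaptive conformal inference (in the spirit of Gibbs & Candes).
    # It tracks a miscoverage level alpha_t online and returns a radius equal to
    # the (1 - alpha_t) empirical quantile of past prediction errors.

    def __init__(self, target_alpha=0.1, gamma=0.05):
        self.target_alpha = target_alpha  # desired long-run miscoverage rate
        self.gamma = gamma                # step size of the online update
        self.alpha_t = target_alpha       # current, adapted miscoverage level
        self.scores = []                  # history of nonconformity scores

    def radius(self):
        # Uncertainty radius to attach to the current prediction.
        if not self.scores:
            return float("inf")           # no data yet: be maximally conservative
        q = min(max(1.0 - self.alpha_t, 0.0), 1.0)
        return float(np.quantile(self.scores, q))

    def update(self, pred_xy, true_xy):
        # Observe the realized pedestrian position and adapt alpha_t.
        score = float(np.linalg.norm(np.asarray(true_xy) - np.asarray(pred_xy)))
        miss = 0.0 if score <= self.radius() else 1.0
        # Miscoverage feedback: a miss lowers alpha_t (wider future radii),
        # a hit raises it (tighter radii), keeping long-run coverage near target.
        self.alpha_t += self.gamma * (self.target_alpha - miss)
        self.scores.append(score)

In this spirit, a radius like radius() would be attached to each pedestrian's predicted future position before the robot acts, and update() would be called once the true position is observed.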

Timeline

08/2025: Training script release.

02/2025: Test & visualization code release.

The code for the ROS2 system demonstrated in our experiments will be released once it has been prepared for public use. Thank you for your interest.

Quick Start

After cloning the project, please:

1.) Install Docker on your host machine:

sudo apt install docker.io
sudo apt-get install -y nvidia-docker2
sudo apt-get install nvidia-container-runtime

2.) Pull the base image

docker pull pytorch/pytorch:2.3.1-cuda12.1-cudnn8-devel

3.) Restart the docker service

sudo systemctl restart docker

4.) Go to the project folder and build the Docker image:

docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t gen_safe_py10:latest .

5.) Start a container from the image:

docker run --runtime=nvidia -it -p 12345:8888 -v /home/docker_share:/home/ -v $(pwd):/workspace gen_safe_py10:latest /bin/bash

6.) Test the pretrained models with python test.py

7.) Visualize the results generated by the pretrained models with python visualize.py

8.) Tune the parameters in arguments.py and configs/config.py and train your own model with python train.py

Note: if you encounter "Failed to initialize NVML: Unknown Error" inside the container, you can refer to this thread.
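Before testing or training, it can help to confirm that the container actually sees the GPU (the NVML error mentioned above usually surfaces here first). This quick check only uses standard PyTorch calls:

import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # A tiny allocation confirms the runtime can actually touch the GPU.
    x = torch.ones(1, device="cuda")
    print("Test tensor on:", x.device)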

Components

1.) baselines: Common tools.

2.) crowd_nav: Configurations for new training and policy behaviors.

3.) crowd_sim: Environments for CrowdNav, implemented hierarchically:
CrowdSim → CrowdSimVarNum → CrowdSimPred → CrowdSimPredRealGST.
Includes different agent implementations.

4.) dt_aci: Python implementations of DtACI.

5.) gst_updated: The learning-based trajectory prediction model GST (Gumbel Social Transformer).

6.) Python-RVO2: ORCA package for collision avoidance.

7.) rl: Networks and algorithms for PPO/PPO Lagrangian (a rough sketch of the Lagrangian update follows this list).

8.) trained_models: Pretrained models.
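As a rough illustration of how a PPO Lagrangian component constrains behavior, the sketch below shows a generic dual update of a Lagrange multiplier on an episodic cost. The names (cost_limit, lambda_lr) and the plain gradient-based update are assumptions for illustration and may differ from the actual implementation, which builds on OmniSafe.

import torch

class LagrangeMultiplier:
    # Illustrative Lagrange multiplier for a constraint "mean episode cost <= cost_limit".
    # The policy loss would become: L = L_ppo + lambda * L_cost, while lambda itself
    # is updated by gradient ascent on the observed constraint violation.

    def __init__(self, cost_limit=0.5, lambda_lr=0.01):
        self.cost_limit = cost_limit
        # Parameterize lambda through softplus so it always stays non-negative.
        self.raw = torch.nn.Parameter(torch.zeros(()))
        self.optim = torch.optim.Adam([self.raw], lr=lambda_lr)

    @property
    def value(self):
        return torch.nn.functional.softplus(self.raw)

    def update(self, mean_episode_cost):
        # Increase lambda when the observed cost exceeds the limit, decrease otherwise.
        violation = mean_episode_cost - self.cost_limit
        loss = -(self.value * violation)  # minimizing this is ascent on lambda * violation
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

During training, such a multiplier would weight a cost term inside the PPO objective, so the policy is pushed harder toward safety whenever the constraint is violated.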

Results

Starting from the same episode initialization, the four policies included in trained_models generate very different movements.

1.) Ours:

2.) CrowdNav++:

3.) ORCA:

4.) Social Force:

You can also generate other visualizations yourself by running python visualize.py!

Citation

If you find our work useful, please consider citing our paper:

@inproceedings{yao2025towards,
    title={Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling},
    author={Yao, Jianpeng and Zhang, Xiaopan and Xia, Yu and Roy-Chowdhury, Amit K and Li, Jiachen},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2025}
}

Acknowledgement

We sincerely thank the researchers and developers of CrowdNav, CrowdNav++, Gumbel Social Transformer, DtACI, and OmniSafe for their amazing work.
