TPT-Bench: A Large-Scale, Long-Term and Robot-Egocentric Dataset for Benchmarking Target Person Tracking
Hanjing Ye, Yu Zhan, Weixi Situ, Guangcheng Chen, Jingwen Yu, Ziqi Zhao, Kuanqi Cai, Arash Ajoudani and Hong Zhang
Under Review
arXiv site, video
- (20260126) Fixed incorrect archive naming in `panoramic_images` (affected: 0019.zip–0026.zip). Please re-download the corrected archives if you downloaded them previously. Thanks to Zou Jianan for reporting this issue.
- (20250710) Released the TPT-Bench dataset and development tools.
OneDrive:
- TPT-Bench, password: rcvtptbench
- rosbags (0000-0025), password: rpfrosbag123!
- rosbags (0026-0047), password: rpfrosbag123!
Baidu Yun:
⚠️ Note: `0002.bag` is missing LiDAR and ZED recordings due to hardware issues.
⭐ Recommended Rosbag Download:
`0015.bag` is the smallest bag, suitable for a quick trial.
TPT-Bench/
├── panoramic_images/
│ └── <seq_id>/
│ └── <timestamp>.jpg
├── GTs/
│ └── <seq_id>.json
├── rosbags/
│ └── <seq_id>.bag
├── descriptions/
│ └── <seq_id>.txt
├── quickview_videos/
│ └── <seq_id>.mp4
├── evaluation_results/
│ └── <seq_id>/
│ └── <baseline>.json
└── LICENSE.txt

Tested on Ubuntu 20.04 with Python 3.7.16.
git clone https://github.com/MedlarTea/TPT-BENCH-TOOLS
conda create -n tptbench python==3.7.16
conda activate tptbench
pip install -r requirements.txt
pip install -e .
⚠️ If you encounter a rospy import problem, you may need to install the ros-noetic packages and source the ROS environment.
- Enter the script directory:
cd tpt_bench_tools
- Evaluate baseline results:
python evaluate_baselines.py --dataset_dir /path/to/TPT-Bench
- Requires:
  - $dataset_dir/evaluation_results
  - $dataset_dir/GTs
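Benchmarks of this kind typically score a tracker by bounding-box overlap with the ground truth. Below is a minimal sketch of such scoring, assuming boxes in the `[u_top_left, v_top_left, width, height, confidence]` format used by the dataset; the function names and the IoU threshold are our assumptions, not necessarily what `evaluate_baselines.py` computes:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as [u_top_left, v_top_left, width, height]."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax1 + aw, bx1 + bw), min(ay1 + ah, by1 + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, iou_thresh=0.5):
    """Fraction of GT-visible frames (matched by timestamp) with IoU >= threshold.

    Frames where the GT marks the target absent ([0, 0, 0, 0, -1]) are skipped
    here; how the actual script handles them may differ.
    """
    hits, total = 0, 0
    for ts, gt_box in gts.items():
        if gt_box[4] == -1:  # target not present in this frame
            continue
        total += 1
        pred = preds.get(ts)
        if pred is not None and iou(pred[:4], gt_box[:4]) >= iou_thresh:
            hits += 1
    return hits / total if total else 0.0
```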
- Convert a spherical image (panoramic image) to a cylindrical image (similar to a pinhole-camera image)
python sphere_to_cylinder.py --dataset_dir /path/to/TPT-Bench \
--sequence_name 0015 \
--frame_index 20
- Requires:
  - $dataset_dir/panoramic_images/$sequence_name
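Conceptually, this conversion resamples the equirectangular panorama so that vertical image coordinates follow tan(latitude) instead of latitude. A sketch of building the per-pixel remap grid, assuming a focal-length parameter `f` in pixels (the actual parameterization in `sphere_to_cylinder.py` may differ):

```python
import numpy as np

def cylinder_remap_grid(h_in, w_in, h_out, w_out, f):
    """Source coordinates for warping an equirectangular (spherical)
    panorama of size (h_in, w_in) into an (h_out, w_out) cylindrical image.

    The returned maps can be fed to cv2.remap. `f` is the cylinder's focal
    length in pixels (an assumed parameter).
    """
    u = np.arange(w_out)
    v = np.arange(h_out)
    lon = (u / w_out) * 2.0 * np.pi - np.pi       # longitude per output column
    lat = np.arctan((h_out / 2.0 - v) / f)        # latitude per output row
    src_u = (lon / (2.0 * np.pi) + 0.5) * w_in    # equirectangular column
    src_v = (0.5 - lat / np.pi) * h_in            # equirectangular row
    # Broadcast the 1-D lookups to full (h_out, w_out) maps.
    map_u = np.broadcast_to(src_u, (h_out, w_out)).astype(np.float32)
    map_v = np.broadcast_to(src_v[:, None], (h_out, w_out)).astype(np.float32)
    return map_u, map_v
```

The cylinder's mid-height row samples the panorama's equator, and columns map linearly to longitude.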
- Extract rosbag data and store it under the dataset_dir
python write_bag_to_data.py --dataset_dir /path/to/TPT-Bench \
--sequence_name 0015 \
--zed_path_odom
- Requires:
  - bag file: $dataset_dir/rosbags/{$sequence_name}.bag
- Parameters (what data to save):
  - --zed_path_odom: saves /zed2/zed_node/path_odom to $dataset_dir/odometry/$sequence_name
  - --zed_rgb: saves zed_rgb_image to $dataset_dir/zed_rgb_images/$sequence_name
  - --lidar_points: saves ouster_points to $dataset_dir/lidar_points/$sequence_name
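The flag-to-output mapping above can be summarized in code. A small sketch (the abbreviated topic names for `--zed_rgb` and `--lidar_points` are treated as placeholders, and the helper is ours, not part of the tools):

```python
from pathlib import Path

# flag -> (recorded topic, output subdirectory), as listed above.
EXTRACT_FLAGS = {
    "zed_path_odom": ("/zed2/zed_node/path_odom", "odometry"),
    "zed_rgb":       ("zed_rgb_image",            "zed_rgb_images"),
    "lidar_points":  ("ouster_points",            "lidar_points"),
}

def output_dir(dataset_dir, sequence_name, flag):
    """Directory where extracted data for a given flag ends up."""
    return Path(dataset_dir) / EXTRACT_FLAGS[flag][1] / sequence_name
```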
- Project LiDAR points to the panoramic images or Zed RGB images
python visualize_depthmap.py --dataset_dir /path/to/TPT-Bench \
--sequence_name 0015 \
--frame_index 20 \
--camera_type theta_camera
- Requires:
  - LiDAR data: $dataset_dir/lidar_points/$sequence_name
  - Images: either $dataset_dir/panoramic_images/$sequence_name or $dataset_dir/zed_rgb_images/$sequence_name
- Parameters:
  - --camera_type: theta_camera or zed_camera
  - --frame_index: index of the frame to visualize
  - --save_dir: optionally save projected images to $dataset_dir/theta_projected_images/$sequence_name or $dataset_dir/zed_projected_images/$sequence_name
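For the panoramic case, projecting LiDAR points onto the image amounts to a spherical (equirectangular) camera model. A sketch under an assumed axis convention (x forward, y left, z up); the real LiDAR-to-camera extrinsics live inside the tools and are not applied here:

```python
import numpy as np

def project_to_equirect(points, width, height):
    """Project LiDAR points (N, 3), already in the camera frame, onto an
    equirectangular image of size (height, width).

    Returns pixel coordinates (N, 2) and per-point range, e.g. for
    colorizing a sparse depth map.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    lon = np.arctan2(y, x)                        # [-pi, pi], 0 = straight ahead
    lat = np.arcsin(np.clip(z / r, -1.0, 1.0))    # [-pi/2, pi/2]
    u = (0.5 - lon / (2.0 * np.pi)) * width       # left of center -> smaller u
    v = (0.5 - lat / np.pi) * height              # above horizon -> smaller v
    return np.stack([u, v], axis=1), r
```

A point straight ahead lands at the image center; a point straight up lands on the top row.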
- Tracking the target person on the ground plane
python tracking_on_ground_plane.py --dataset_dir /path/to/TPT-Bench \
--sequence_name 0015
- Requires:
  - $dataset_dir/panoramic_images/$sequence_name
  - $dataset_dir/lidar_points/$sequence_name
To evaluate your tracking results, please follow the JSON format provided in $dataset_dir/evaluation_results and use the evaluation script evaluate_baselines.py. A valid output JSON file should look like this:
{
"1727602424143298027": {
"target_info": [
881,
255,
155,
425,
0.9772515464890091
]
},
...
}
- 1727602424143298027 --- Timestamp of the panoramic frame.
- target_info --- A list in the format [u_top_left, v_top_left, width, height, target_confidence], representing the bounding box and confidence score of the target. Use [0, 0, 0, 0, -1] to indicate that the target is not present in the frame.
We sincerely thank the contributors of the following open-source tools for their excellent work:
Data citation:
@dataset{tpt2025ye,
author = {Ye, Hanjing},
title = {TPT-Bench: A Large-Scale, Long-Term and Robot-Egocentric Dataset for Benchmarking Target Person Tracking},
month = nov,
year = 2025,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.17718188},
url = {https://doi.org/10.5281/zenodo.17718188},
}
Paper citation:
@article{ye2025tpt,
title={TPT-Bench: A Large-Scale, Long-Term and Robot-Egocentric Dataset for Benchmarking Target Person Tracking},
  author={Ye, Hanjing and Zhan, Yu and Situ, Weixi and Chen, Guangcheng and Yu, Jingwen and Zhao, Ziqi and Cai, Kuanqi and Ajoudani, Arash and Zhang, Hong},
journal={arXiv preprint arXiv:2505.07446},
year={2025}
}
