Tools:

- summary table and plots: `python -m trajnetplusplustools.summarize <dataset_files>`
- plot sample trajectories: `python -m trajnetplusplustools.trajectories <dataset_file>`
- visualize interactions: `python -m trajnetplusplustools.visualize_type <dataset_file>`
- obtain distribution of trajectory types: `python -m trajnetplusplustools.dataset_stats <dataset_file>`
APIs:

- `trajnetplusplustools.Reader`: class to read the dataset_file
- `trajnetplusplustools.show`: module containing contexts for visualizing `rows` and `paths`
- `trajnetplusplustools.writers`: write a trajnet dataset file
- `trajnetplusplustools.metrics`: implementations of the unimodal metrics `average_l2()`, `final_l2()` and `collision()` and the multimodal metrics `topk()` and `nll()`
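As a rough illustration of what the two simplest unimodal metrics compute (a sketch of the standard ADE/FDE definitions, not the library's actual implementation), `average_l2()` corresponds to the mean Euclidean displacement over all timesteps and `final_l2()` to the displacement at the last timestep:

```python
import math

def average_l2(gt, pred):
    """Mean Euclidean distance between ground-truth and predicted
    (x, y) points, averaged over all timesteps (ADE)."""
    assert len(gt) == len(pred)
    return sum(math.dist(g, p) for g, p in zip(gt, pred)) / len(gt)

def final_l2(gt, pred):
    """Euclidean distance at the final timestep (FDE)."""
    return math.dist(gt[-1], pred[-1])

gt = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
pred = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(average_l2(gt, pred))  # (0 + 1 + 0) / 3 ≈ 0.333
print(final_l2(gt, pred))    # 0.0
```

The library's functions operate on its own track-row objects rather than bare tuples, so treat this only as a definition sketch.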
Datasets are split into train, val and test sets.
Every line is a self-contained JSON string (ndJSON).
Scene:

```json
{"scene": {"id": 266, "p": 254, "s": 10238, "e": 10358, "fps": 2.5, "tag": 2}}
```

Track:

```json
{"track": {"f": 10238, "p": 248, "x": 13.2, "y": 5.85}}
```

with:

- `id`: scene id
- `p`: pedestrian id
- `s`, `e`: start and end frame id
- `fps`: frame rate
- `tag`: trajectory type
- `f`: frame id
- `x`, `y`: x- and y-coordinates in meters
- `pred_number`: (optional) prediction number for multiple output predictions
- `scene_id`: (optional) corresponding scene id for multiple output predictions
Frame numbers are not recomputed. Rows are resampled to about 2.5 rows per second.
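Because every line is an independent JSON string, the format can be parsed with the standard library alone. A minimal sketch (the helper name and sample lines are illustrative, taken from the examples above) that separates scene headers from track rows:

```python
import json

def read_ndjson(lines):
    """Split ndJSON lines of a trajnet file into scene headers and track rows."""
    scenes, tracks = [], []
    for line in lines:
        row = json.loads(line)
        if 'scene' in row:
            scenes.append(row['scene'])
        elif 'track' in row:
            tracks.append(row['track'])
    return scenes, tracks

sample = [
    '{"scene": {"id": 266, "p": 254, "s": 10238, "e": 10358, "fps": 2.5, "tag": 2}}',
    '{"track": {"f": 10238, "p": 248, "x": 13.2, "y": 5.85}}',
]
scenes, tracks = read_ndjson(sample)
print(scenes[0]['id'], tracks[0]['x'])  # 266 13.2
```

For real work, `trajnetplusplustools.Reader` is the supported way to read these files; the sketch only shows why the line-per-record layout is convenient.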
Dev:

```sh
pylint trajnetplusplustools
python -m pytest
# optional: mypy trajnetplusplustools --disallow-untyped-defs
```

Dataset visualizations (plot images omitted from this text version): biwi_hotel, crowds_students001, crowds_students003, crowds_zara02, crowds_zara03, dukemtmc, syi, wildtrack.

Interaction visualizations (plot images omitted): leader_follower, collision_avoidance, group, others.
If you find this code useful in your research, please cite:

```bibtex
@inproceedings{Kothari2020HumanTF,
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective},
  author={Parth Kothari and Sven Kreiss and Alexandre Alahi},
  year={2020}
}
```