
STEP: Simultaneous Tracking and Estimation of Pose for Animals and Humans

Project Page

Requirements

Major requirements

  • pytorch=1.13.1
  • hydra
  • cv2
  • pandas

We exported all packages from the conda environment on which this code was tested with conda list -e > requirements.txt; the generated file is available at ./requirements.txt and can be used to recreate the environment (e.g., conda create --name step --file requirements.txt). Please note that it may contain libraries unrelated to STEP.
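
After installing, the following quick check (ours, not part of the repository) confirms that the major requirements import correctly:

    # Sanity check: confirm the major requirements import and report versions.
    import torch
    import hydra   # provided by the hydra-core package
    import cv2
    import pandas as pd

    print("pytorch:", torch.__version__)            # tested with 1.13.1
    print("opencv :", cv2.__version__)
    print("pandas :", pd.__version__)
    print("cuda available:", torch.cuda.is_available())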

Datasets

APT-36K Dataset

Download the APT-36K dataset and put it under the ./datasets/APT-36k folder. The full consolidated json apt36k_annotations.json used for training is available at Link. Organise the data in the following structure (a loading sketch follows the tree):

    ├── data_root
    │   ├── Sequences
    │   │   └── clips
    │   │       ├── im1.png, im1.json
    │   │       :
    │   │       └── imn.png, imn.json
    │   :
    │   └── Sequences
    │       └── clips
    │           ├── im1.png, im1.json
    │           :
    │           └── imn.png, imn.json
    ├── apt36k_annotations.json
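
The sketch below (ours, not repository code) shows one way to traverse this layout, pairing each frame image with its per-frame json; the annotation schema itself is dataset-defined, so treat the details here as assumptions.

    import json
    from pathlib import Path

    # Assumed to mirror the tree above; adjust to your actual location.
    root = Path("./datasets/APT-36k")

    # Consolidated annotations sit alongside data_root, as in the tree.
    with open(root / "apt36k_annotations.json") as f:
        annotations = json.load(f)

    # Walk data_root/<Sequence>/clips and pair each im*.png with its im*.json.
    for seq_dir in sorted((root / "data_root").glob("*/clips")):
        for img_path in sorted(seq_dir.glob("*.png")):
            meta_path = img_path.with_suffix(".json")
            if meta_path.exists():
                with open(meta_path) as f:
                    frame_meta = json.load(f)
                # ... hand (img_path, frame_meta) to your dataloader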

APT-10K Dataset

Download the APT-10K dataset and put it under the ./datasets/APT10k folder. The full consolidated json ap10k.json used for training is available at Link. Follow a similar setup to APT-36K, and place the .json in the root.

CrowdPose Dataset

Download the CrowdPose dataset and put it under the ./datasets/CrowdPose folder. The full consolidated json used for training is available at Link. Follow a similar setup to APT-36K, and place the .json in the root.

Training/Evaluation

We provide various settings in run.sh. Detailed configs for all datasets and ablation studies are available in the ./configs/ folder.

Usage of run.sh

  • python -W ignore main.py config=step_config.yaml "+run_title=$run_title" "pipeline.train=True" "data=aptmmpose_nokpts_kptsemb.yaml"
    • config: which config to use from the ./configs/ folder
    • pipeline.train: sets the mode to training (True) or inference (False)
    • data: the data config under config/data; edit it so its paths point to your training and validation datasets
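
These key=value arguments are standard Hydra command-line overrides (a leading + adds a key absent from the base config). Purely as an illustration of how such overrides reach the program, and not the repository's actual main.py, a minimal Hydra entry point looks like:

    import hydra
    from omegaconf import DictConfig, OmegaConf

    # Illustrative only: config_path/config_name are assumptions, and STEP's
    # main.py may resolve the config= argument itself rather than via the decorator.
    @hydra.main(version_base=None, config_path="configs", config_name="step_config")
    def main(cfg: DictConfig) -> None:
        print(OmegaConf.to_yaml(cfg))   # final config after CLI overrides
        if cfg.pipeline.train:
            pass  # launch training
        else:
            pass  # run inference/evaluation

    if __name__ == "__main__":
        main()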

Pre-trained Weights

Download the trained models and the consolidated jsons from Here

Place the downloaded models at the location pointed to by the key snaps.model_save_dir in the config, and the .json files as described in the Datasets section.
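
A minimal sketch of that wiring, assuming the config file name and a .pth checkpoint extension (neither is guaranteed by the repository):

    from pathlib import Path

    import torch
    from omegaconf import OmegaConf

    # Assumed config path; use the config you actually run with.
    cfg = OmegaConf.load("configs/step_config.yaml")
    ckpt_dir = Path(cfg.snaps.model_save_dir)

    # Assumed .pth extension; pick the downloaded checkpoint to load.
    ckpt_path = next(ckpt_dir.glob("*.pth"))
    state = torch.load(ckpt_path, map_location="cpu")
    print("loaded:", ckpt_path.name, "| top-level keys:", list(state)[:5])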

Running STEP on your Videos

Coming Soon

Acknowledgements

Code in this repository uses utilities and implementation style from MMPose and PyTracking. We thank the authors for their amazing work and for sharing the code.

Bib

@article{verma2025step,
  title={STEP: Simultaneous Tracking and Estimation of Pose for Animals and Humans},
  author={Verma, Shashikant and Katti, Harish and Debnath, Soumyaratna and Swamy, Yamuna and Raman, Shanmuganathan},
  journal={arXiv preprint arXiv:2503.13344},
  year={2025}
}
