PrimateFace contains data, models, and tutorials for analyzing facial behavior across primates (Parodi et al., 2025).
This codebase lets you use an off-the-shelf PrimateFace model to track facial movements, or quickly fine-tune a PrimateFace model on your own data.
Most of the PrimateFace modules require GPU access. If you don't have access to a GPU, you can still use PrimateFace in Google Colab (see tutorials).
- Try the Hugging Face demo to get a feel for PrimateFace's capabilities on your own data.
- Run through the Google Colab notebook tutorials to explore several applications of PrimateFace.
- Clone this repository, install the dependencies, and work through the individual modules (e.g., DINOv2, image and video demos, pseudo-labeling GUI) to fully utilize PrimateFace.
This repository contains the code for PrimateFace, an ecosystem for facilitating cross-species primate face analysis.
|--- dataset # Explore PrimateFace data
|--- demos # Test models on your own data
|--- notebooks # Google Colab notebooks for tutorials
|--- dinov2 # Run and visualize DINOv2 features
|--- docs # Documentation for PrimateFace
|--- evals # Evaluate models across frameworks & datasets
|--- gui # Run pseudo-labeling GUI on your own data
|--- landmark-converter # Train & apply keypoint landmark converters (68 -> 48 kpts)
|--- pyproject.toml
|--- README.md
|--- environment.yml # Unified conda environment for modules
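The landmark-converter module listed above learns mappings between keypoint schemes (e.g., 68 → 48 keypoints). The real converter is a trained model; the sketch below is only a conceptual illustration of the input/output shapes, using a made-up identity-style index map:

```python
def convert_landmarks(kpts68, index_map):
    """Toy converter: select/reorder 68 (x, y) landmarks into a 48-point scheme.
    The actual landmark-converter learns this mapping from data; the
    index_map used here is purely illustrative."""
    assert len(kpts68) == 68, "expected a 68-point input scheme"
    assert len(index_map) == 48, "expected a 48-point output scheme"
    return [kpts68[i] for i in index_map]

# Illustrative only: keep the first 48 of 68 points.
pts = [(float(i), float(i)) for i in range(68)]
converted = convert_landmarks(pts, list(range(48)))
print(len(converted))  # 48
```

A learned converter replaces the fixed index map with a model that predicts target-scheme coordinates, but the calling shape (68 points in, 48 out) is the same.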
Follow these steps to install PrimateFace:
Step 1: Create conda environment
# Create environment with base dependencies (numpy, opencv, etc.)
conda env create -f environment.yml
conda activate primateface
Step 2: Install PyTorch for your system. Check your CUDA version and the corresponding PyTorch build on the PyTorch website (pytorch.org/get-started).
# Install uv for faster package management (if not already installed)
pip install uv
# Check your CUDA version:
nvcc --version
# Choose ONE based on your CUDA version: 11.8, 12.1, or CPU only.
# For CUDA 11.8:
uv pip install torch==2.1.0 torchvision==0.16.0 --index-url https://download.pytorch.org/whl/cu118
# For CUDA 12.1:
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# For CPU only:
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
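If you are unsure which index URL applies, the choice above can be automated. The helper below (not part of the PrimateFace repo; the function names are our own) parses `nvcc --version` and prints the matching wheel index from the three options listed:

```python
import re
import subprocess

def pick_torch_index(cuda_version):
    """Map a CUDA version string to the matching PyTorch wheel index URL.
    Versions mirror the options above; None or '' means CPU-only."""
    urls = {
        "11.8": "https://download.pytorch.org/whl/cu118",
        "12.1": "https://download.pytorch.org/whl/cu121",
    }
    if not cuda_version:
        return "https://download.pytorch.org/whl/cpu"
    return urls.get(cuda_version, "no prebuilt wheel listed; see pytorch.org")

def detect_cuda():
    """Return the CUDA version reported by nvcc, or None if nvcc is absent."""
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    m = re.search(r"release (\d+\.\d+)", out)
    return m.group(1) if m else None

print(pick_torch_index(detect_cuda()))
```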
# Verify PyTorch installation
python -c "import torch; print(f'PyTorch {torch.__version__}, CUDA: {torch.cuda.is_available()}')"
Step 3: Install optional modules
# Recommended: Install multiple modules at once (includes testing tools):
uv pip install -e ".[dinov2,gui,dev]"
# Or install individually:
# For DINOv2 feature extraction:
uv pip install -e ".[dinov2]"
# For GUI (includes YOLO/Ultralytics):
uv pip install -e ".[gui]"
# For development/testing tools (pytest, black, etc.):
uv pip install -e ".[dev]"
# For the graph neural network landmark converter (advanced users):
# uv pip install -e ".[landmark_gnn]"
Note: You may see a harmless RequestsDependencyWarning about urllib3 versions; it can be safely ignored.
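After installing, you can sanity-check that the core dependencies resolve without importing heavy packages at startup. This snippet is an illustrative helper (not shipped with PrimateFace); the module names checked are the base dependencies named above:

```python
import importlib.util

def check_modules(names):
    """Return a dict mapping each module name to whether it is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Base dependencies from environment.yml plus PyTorch:
for name, ok in check_modules(["numpy", "cv2", "torch"]).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

`find_spec` only locates a module without executing it, so the check is fast even for large packages like torch.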
Step 4: Install detection and pose estimation frameworks (install only what you need):
- MMDetection/MMPose: See demos/README.md
- DeepLabCut: See evals/dlc/README.md
- SLEAP: See evals/sleap/README.md
If you use PrimateFace in your research, please cite:
Parodi, Felipe, et al. "PrimateFace: A Machine Learning Resource for Automated Face Analysis in Human and Non-human Primates." bioRxiv (2025): 2025-08.
BibTeX:
@article{parodi2025primateface,
title={PrimateFace: A Machine Learning Resource for Automated Face Analysis in Human and Non-human Primates},
author={Parodi, Felipe and Matelsky, Jordan and Lamacchia, Alessandro and Segado, Melanie and Jiang, Yaoguang and Regla-Vargas, Alejandra and Sofi, Liala and Kimock, Clare and Waller, Bridget M and Platt, Michael and others},
journal={bioRxiv},
pages={2025--08},
year={2025},
publisher={Cold Spring Harbor Laboratory}
}
For questions or collaborations, reach out via:
- PrimateFace Email: [email protected]
- Felipe Parodi Email: [email protected]
- Felipe Parodi, University of Pennsylvania
- Jordan Matelsky, University of Pennsylvania; Johns Hopkins University Applied Physics Laboratory
- Alessandro Lamacchia, University of Pennsylvania
- Melanie Segado, University of Pennsylvania
- Yaoguang Jiang, University of Pennsylvania
- Alejandra Regla-Vargas, University of Pennsylvania
- Liala Sofi, University of Pennsylvania
- Clare Kimock, Nottingham Trent University
- Bridget Waller, Nottingham Trent University
- Michael L. Platt*, University of Pennsylvania
- Konrad P. Kording*, University of Pennsylvania; Learning in Machines & Brains, CIFAR
PrimateFace is maintained by the Kording and Platt labs at the University of Pennsylvania.
PrimateFace is released under the MIT License.
We thank the developers of foundational frameworks that enabled this project, including:
| Category | Framework/Resource | Link |
|---|---|---|
| Face Analysis | InsightFace | https://github.com/deepinsight/insightface |
| | GazeLLE | https://github.com/fkryan/gazelle |
| Comp. Ethology | DeepLabCut | https://github.com/DeepLabCut/DeepLabCut |
| | SLEAP | https://github.com/murthylab/sleap |
| | MotionMapper | https://github.com/gordonberman/MotionMapper |
| General ML/CV | MMDetection | https://github.com/open-mmlab/mmdetection |
| | MMPose | https://github.com/open-mmlab/mmpose |
| | DINOv2 | https://github.com/facebookresearch/dinov2 |
| | Hugging Face | https://huggingface.co |