
PrimateFace: A Machine Learning Resource for Automated Primate Face Analysis

bioRxiv    Project Page    Documentation    Hugging Face Spaces

PrimateFace contains data, models, and tutorials for analyzing facial behavior across primates (Parodi et al., 2025).

This codebase lets you apply an off-the-shelf PrimateFace model to track facial movements, or quickly fine-tune a PrimateFace model on your own data.

Most of the PrimateFace modules require GPU access. If you don't have access to a GPU, you can still use PrimateFace in Google Colab (see tutorials).

PrimateFace demos

Quick Start

  1. Test the Hugging Face demo to get a feel for the capabilities of PrimateFace on your own data.

  2. Run through the Google Colab Notebook tutorials to explore several applications of PrimateFace.

  3. Clone this repository (see the sketch below), install the dependencies, and work through the different modules (e.g., DINOv2 feature extraction, the image and video demos, the pseudo-labeling GUI) to fully utilize PrimateFace.
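A minimal sketch of the clone step in item 3, assuming the repository path shown on this page (KordingLab/PrimateFace); the Installation section below covers the dependencies:

# Clone the repository and enter it
git clone https://github.com/KordingLab/PrimateFace.git
cd PrimateFace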

Structure

This repository contains the code for PrimateFace, an ecosystem for facilitating cross-species primate face analysis.

|--- dataset             # Explore PrimateFace data
|--- demos               # Test models on your own data
|    |--- notebooks      # Google Colab notebooks for tutorials
|--- dinov2              # Run and visualize DINOv2 features
|--- docs                # Documentation for PrimateFace
|--- evals               # Evaluate models across frameworks & datasets
|--- gui                 # Run pseudo-labeling GUI on your own data
|--- landmark-converter  # Train & apply keypoint landmark converters (68 -> 48 kpts)
|--- pyproject.toml
|--- README.md
|--- environment.yml     # Unified conda environment for modules
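For intuition about the landmark-converter module named above: in its simplest form, a 68 -> 48 keypoint conversion is a re-mapping between annotation schemes. The sketch below uses a hypothetical index map; the shipped converters are trained models (including an optional GNN variant installed in Step 3 below), not a hard-coded table.

# Hedged sketch of the simplest possible 68 -> 48 landmark conversion:
# plain index selection. INDEX_MAP_68_TO_48 is hypothetical; the real
# landmark-converter module learns the cross-scheme correspondence.
import numpy as np

INDEX_MAP_68_TO_48 = np.arange(48)  # placeholder mapping, NOT the real one

def convert_68_to_48(kpts_68: np.ndarray) -> np.ndarray:
    """Map a (68, 2) array of (x, y) landmarks to a (48, 2) array."""
    assert kpts_68.shape == (68, 2), "expected 68 (x, y) keypoints"
    return kpts_68[INDEX_MAP_68_TO_48]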

Installation

Follow these steps to install PrimateFace:

Step 1: Create conda environment

# Create environment with base dependencies (numpy, opencv, etc.)
conda env create -f environment.yml
conda activate primateface

Step 2: Install PyTorch for your system. Check your CUDA version and the corresponding PyTorch version here.

# Install uv for faster package management (if not already installed)
pip install uv

# Check your CUDA version:
nvcc --version

# Choose ONE of the following, based on your CUDA version:
# For CUDA 11.8:
uv pip install torch==2.1.0 torchvision==0.16.0 --index-url https://download.pytorch.org/whl/cu118
# For CUDA 12.1:
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# For CPU only:
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu

# Verify PyTorch installation
python -c "import torch; print(f'PyTorch {torch.__version__}, CUDA: {torch.cuda.is_available()}')"

Step 3: Install optional modules

# Recommended: Install multiple modules at once (includes testing tools):
uv pip install -e ".[dinov2,gui,dev]"

# Or install individually:
# For DINOv2 feature extraction:
uv pip install -e ".[dinov2]"

# For GUI (includes YOLO/Ultralytics):
uv pip install -e ".[gui]"

# For development/testing tools (pytest, black, etc.):
uv pip install -e ".[dev]"

# For the graph neural network landmark converter (advanced users):
# uv pip install -e ".[landmark_gnn]"

Note: You may see a RequestsDependencyWarning about urllib3 versions; it is harmless and can be safely ignored.
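As a quick sanity check of the dinov2 extra, here is a hedged sketch of feature extraction using the upstream torch.hub entry point from facebookresearch/dinov2; the image path "face.jpg" is a placeholder, and the module's own scripts may expose a different interface:

# Hedged sketch: DINOv2 feature extraction via the upstream torch.hub
# entry point (facebookresearch/dinov2), not PrimateFace's own API.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),   # 224 px = 16 patches of 14 px for ViT-S/14
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(img)  # (1, 384) global (CLS) embedding for ViT-S/14
print(features.shape)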

Step 4: Install detection and pose estimation frameworks (install only what you need):
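As one hedged example of such a framework install, the OpenMMLab stack (MMDetection and MMPose, which the evals module benchmarks; see Acknowledgements) is commonly installed with OpenMMLab's mim tool. DeepLabCut, SLEAP, and other frameworks have their own installers.

# Hedged example: OpenMMLab detection/pose stack via mim, which resolves
# mmcv builds matching your installed PyTorch/CUDA versions.
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
mim install mmdet
mim install mmpose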

Tutorials

Tutorial                                              Open in Colab
1. Lemur Face Visibility Time-Stamping                Open
2. Rapid Macaque Face Recognition                     Open
3. Howler Vocal-Motor Coupling                        Coming soon
4. Human Infant Social Gaze Tracking                  Open
5. Data-Driven Discovery of Facial Actions            Coming soon
6. Cross-Subject Neural Decoding of Facial Actions    Coming soon

References

If you use PrimateFace in your research, please cite:

Parodi, Felipe, et al. "PrimateFace: A Machine Learning Resource for Automated Face Analysis in Human and Non-human Primates." bioRxiv (2025): 2025-08.

BibTeX:

@article{parodi2025primateface,
  title={PrimateFace: A Machine Learning Resource for Automated Face Analysis in Human and Non-human Primates},
  author={Parodi, Felipe and Matelsky, Jordan and Lamacchia, Alessandro and Segado, Melanie and Jiang, Yaoguang and Regla-Vargas, Alejandra and Sofi, Liala and Kimock, Clare and Waller, Bridget M and Platt, Michael and others},
  journal={bioRxiv},
  pages={2025--08},
  year={2025},
  publisher={Cold Spring Harbor Laboratory}
}

Contact

For questions or collaborations, reach out to the contributors listed below.

Contributors

  • Felipe Parodi, University of Pennsylvania
  • Jordan Matelsky, University of Pennsylvania; Johns Hopkins University Applied Physics Laboratory
  • Alessandro Lamacchia, University of Pennsylvania
  • Melanie Segado, University of Pennsylvania
  • Yao Jiang, University of Pennsylvania
  • Alejandra Regla-Vargas, University of Pennsylvania
  • Liala Sofi, University of Pennsylvania
  • Clare Kimock, Nottingham Trent University
  • Bridget Waller, Nottingham Trent University
  • Michael L. Platt*, University of Pennsylvania
  • Konrad P. Kording*, University of Pennsylvania; Learning in Machines & Brains, CIFAR

PrimateFace is maintained by the Kording and Platt labs at the University of Pennsylvania.

License

PrimateFace is released under the MIT License.

Acknowledgements

We thank the developers of foundational frameworks that enabled this project, including:

Category         Framework/Resource   Link
Face Analysis    InsightFace          https://github.com/deepinsight/insightface
Face Analysis    GazeLLE              https://github.com/fkryan/gazelle
Comp. Ethology   DeepLabCut           https://github.com/DeepLabCut/DeepLabCut
Comp. Ethology   SLEAP                https://github.com/murthylab/sleap
Comp. Ethology   MotionMapper         https://github.com/gordonberman/MotionMapper
General ML/CV    MMDetection          https://github.com/open-mmlab/mmdetection
General ML/CV    MMPose               https://github.com/open-mmlab/mmpose
General ML/CV    DINOv2               https://github.com/facebookresearch/dinov2
General ML/CV    Hugging Face         https://huggingface.co
