UniFace is a lightweight, production-ready face analysis library built on ONNX Runtime. It provides high-performance face detection, recognition, landmark detection, face parsing, gaze estimation, and attribute analysis with hardware acceleration support across platforms.
- Face Detection — RetinaFace, SCRFD, YOLOv5-Face, and YOLOv8-Face with 5-point landmarks
- Face Recognition — ArcFace, MobileFace, and SphereFace embeddings
- Face Tracking — Multi-object tracking with BYTETracker for persistent IDs across video frames
- Facial Landmarks — 106-point landmark localization module (separate from 5-point detector landmarks)
- Face Parsing — BiSeNet semantic segmentation (19 classes), XSeg face masking
- Gaze Estimation — Real-time gaze direction with MobileGaze
- Attribute Analysis — Age, gender, race (FairFace), and emotion
- Vector Indexing — FAISS-backed embedding store for fast multi-identity search
- Anti-Spoofing — Face liveness detection with MiniFASNet
- Face Anonymization — 5 blur methods for privacy protection
- Hardware Acceleration — ARM64 (Apple Silicon), CUDA (NVIDIA), CPU
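The vector-indexing feature boils down to k-nearest-neighbor search over L2-normalized embeddings, which is what an inner-product FAISS index computes. As a rough illustration (not the library's API), a brute-force NumPy stand-in — the 512-D size mirrors ArcFace embeddings, everything else here is hypothetical:

```python
import numpy as np

def search(index: np.ndarray, query: np.ndarray, k: int = 3):
    """Brute-force cosine search over L2-normalized embeddings,
    equivalent to what an inner-product FAISS index computes."""
    index = index / np.linalg.norm(index, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    scores = index @ query          # cosine similarity against every row
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

gallery = np.random.default_rng(42).normal(size=(100, 512))  # 100 stored identities
ids, scores = search(gallery, gallery[7])
print(int(ids[0]))  # → 7: the query's own row ranks first
```

For real workloads the FAISS-backed store replaces the `index @ query` scan with an accelerated index.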
Standard installation:

```shell
pip install uniface
```

GPU support (CUDA):

```shell
pip install uniface[gpu]
```

From source (latest version):

```shell
git clone https://github.com/yakhyo/uniface.git
cd uniface && pip install -e .
```

FAISS vector indexing:

```shell
pip install faiss-cpu  # or faiss-gpu for CUDA
```

Optional dependencies:

- The emotion model uses TorchScript and requires torch: `pip install torch` (choose the correct build for your OS/CUDA)
- YOLOv5-Face and YOLOv8-Face support faster NMS with torchvision: `pip install torch torchvision`, then use `nms_mode='torchvision'`
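For reference, the greedy non-maximum suppression step that `torchvision.ops.nms` accelerates can be sketched in plain NumPy (illustrative only, not uniface's implementation):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list[int]:
    """Greedy NMS; boxes are (N, 4) as x1, y1, x2, y2."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping the kept one
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate box 1 is suppressed
```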
Models are downloaded automatically on first use and verified via SHA-256.
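The integrity check amounts to streaming a SHA-256 digest of the downloaded file and comparing it against a published reference value. A sketch of that pattern — `sha256_of` and the file path are hypothetical, not uniface internals:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large ONNX models never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage against a reference digest shipped with the library:
# expected = "3c8f..."  # published hash for this model file
# assert sha256_of(Path.home() / ".uniface/models/model.onnx") == expected
```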
Default cache location: ~/.uniface/models
Override with the programmatic API or environment variable:
```python
from uniface.model_store import get_cache_dir, set_cache_dir

set_cache_dir('/data/models')
print(get_cache_dir())  # /data/models
```

Or set the environment variable:

```shell
export UNIFACE_CACHE_DIR=/data/models
```

Basic face detection:

```python
import cv2
from uniface.detection import RetinaFace

detector = RetinaFace()

image = cv2.imread("photo.jpg")
if image is None:
    raise ValueError("Failed to load image. Check the path to 'photo.jpg'.")

faces = detector.detect(image)
for face in faces:
    print(f"Confidence: {face.confidence:.2f}")
    print(f"BBox: {face.bbox}")
    print(f"Landmarks: {face.landmarks.shape}")
```

Detection and recognition in one pass with FaceAnalyzer:

```python
import cv2
from uniface.analyzer import FaceAnalyzer
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace

detector = RetinaFace()
recognizer = ArcFace()
analyzer = FaceAnalyzer(detector, recognizer=recognizer)

image = cv2.imread("photo.jpg")
if image is None:
    raise ValueError("Failed to load image. Check the path to 'photo.jpg'.")

faces = analyzer.analyze(image)
for face in faces:
    print(face.bbox, face.embedding.shape if face.embedding is not None else None)
```

Selecting an execution provider:

```python
from uniface.detection import RetinaFace

# Force CPU-only inference
detector = RetinaFace(providers=["CPUExecutionProvider"])
```

See more in the docs: https://yakhyo.github.io/uniface/concepts/execution-providers/
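Embeddings produced by the recognizer can be compared with cosine similarity to decide whether two crops show the same person. A sketch using random vectors in place of real `face.embedding` values — the 0.4 threshold is an assumption to tune on your own data, not a library default:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Random stand-ins for two face.embedding vectors of the same person
rng = np.random.default_rng(0)
emb1 = rng.normal(size=512)                       # ArcFace embeddings are 512-D
emb2 = emb1 + rng.normal(scale=0.1, size=512)     # slightly perturbed copy

score = cosine_similarity(emb1, emb2)
same_person = score > 0.4  # threshold is model- and dataset-dependent
print(f"{score:.3f}", same_person)
```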
Full documentation: https://yakhyo.github.io/uniface/
| Resource | Description |
|---|---|
| Quickstart | Get up and running in 5 minutes |
| Model Zoo | All models, benchmarks, and selection guide |
| API Reference | Detailed module documentation |
| Tutorials | Step-by-step workflow examples |
| Guides | Architecture and design principles |
| Datasets | Training data and evaluation benchmarks |
| Task | Training Dataset | Models |
|---|---|---|
| Detection | WIDER FACE | RetinaFace, SCRFD, YOLOv5-Face, YOLOv8-Face |
| Recognition | MS1MV2 | MobileFace, SphereFace |
| Recognition | WebFace600K | ArcFace |
| Recognition | WebFace4M / 12M | AdaFace |
| Gaze | Gaze360 | MobileGaze |
| Parsing | CelebAMask-HQ | BiSeNet |
| Attributes | CelebA, FairFace, AffectNet | AgeGender, FairFace, Emotion |
See Datasets documentation for download links, benchmarks, and details.
| Example | Description |
|---|---|
| 01_face_detection.ipynb | Face detection and landmarks |
| 02_face_alignment.ipynb | Face alignment for recognition |
| 03_face_verification.ipynb | Compare faces for identity |
| 04_face_search.ipynb | Find a person in group photos |
| 05_face_analyzer.ipynb | All-in-one analysis |
| 06_face_parsing.ipynb | Semantic face segmentation |
| 07_face_anonymization.ipynb | Privacy-preserving blur |
| 08_gaze_estimation.ipynb | Gaze direction estimation |
| 09_face_segmentation.ipynb | Face segmentation with XSeg |
| 10_face_vector_store.ipynb | FAISS-backed face database |
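The anonymization notebook covers several blur styles; the idea behind the simplest, pixelation, can be sketched with NumPy alone (illustrative only, not the library's implementation):

```python
import numpy as np

def pixelate(face: np.ndarray, block: int = 8) -> np.ndarray:
    """Anonymize a face crop by replacing each block x block cell with its mean color."""
    h, w = face.shape[:2]
    bh, bw = max(1, h // block), max(1, w // block)
    # Trim to a multiple of the block size, average each cell, then
    # upscale back with nearest-neighbor repetition
    trimmed = face[: bh * block, : bw * block]
    cells = trimmed.reshape(bh, block, bw, block, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(cells, block, axis=0), block, axis=1).astype(face.dtype)

crop = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
out = pixelate(crop)
print(out.shape)  # → (64, 64, 3), now made of uniform 8x8 blocks
```

In practice the crop would come from a detector bounding box, and the pixelated patch would be written back into the original frame.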
UniFace is MIT-licensed, but several pretrained models carry their own licenses. Review: https://yakhyo.github.io/uniface/license-attribution/
Notable examples:
- YOLOv5-Face and YOLOv8-Face weights are GPL-3.0
- FairFace weights are CC BY 4.0
If you plan commercial use, verify model license compatibility.
| Feature | Repository | Training | Description |
|---|---|---|---|
| Detection | retinaface-pytorch | ✓ | RetinaFace PyTorch Training & Export |
| Detection | yolov5-face-onnx-inference | - | YOLOv5-Face ONNX Inference |
| Detection | yolov8-face-onnx-inference | - | YOLOv8-Face ONNX Inference |
| Tracking | bytetrack-tracker | - | BYTETracker Multi-Object Tracking |
| Recognition | face-recognition | ✓ | MobileFace, SphereFace Training |
| Parsing | face-parsing | ✓ | BiSeNet Face Parsing |
| Parsing | face-segmentation | - | XSeg Face Segmentation |
| Gaze | gaze-estimation | ✓ | MobileGaze Training |
| Anti-Spoofing | face-anti-spoofing | - | MiniFASNet Inference |
| Attributes | fairface-onnx | - | FairFace ONNX Inference |
*SCRFD and ArcFace models are from InsightFace.
Contributions are welcome. Please see CONTRIBUTING.md.
If you find this project useful, consider giving it a ⭐ on GitHub — it helps others discover it!
Questions or feedback:
- Discord: https://discord.gg/wdzrjr7R5j
- GitHub Issues: https://github.com/yakhyo/uniface/issues
- DeepWiki Q&A: https://deepwiki.com/yakhyo/uniface
This project is licensed under the MIT License.

