DeePixBiS : https://github.com/Saiyam26/Face-Anti-Spoofing-using-DeePixBiS
WENDGOUNDI : https://github.com/WENDGOUNDI/face_anti_spoofing_yolov8
MN3_antispoof : https://github.com/kprokofi/light-weight-face-anti-spoofing
Facial-recognition systems can be fooled by simple attacks such as printed photos or video replays, putting security at risk. EasyShield v2.5 tackles this head-on, outperforming other leading anti-spoofing methods in our tests: it catches fake faces with over 92% accuracy while keeping its error rates low.
| Metric | EasyShield v2.5 | MN3_antispoof | DeePixBiS | WENDGOUNDI |
|---|---|---|---|---|
| Accuracy | 92.30% | 66.60% | 54.20% | 58.33% |
| Precision | 88.32% | 81.74% | 52.50% | 55.83% |
| Recall | 97.50% | 42.75% | 88.30% | 79.75% |
| F1 Score | 92.68% | 56.14% | 65.85% | 65.68% |
| AUC | 98.61% | 81.11% | 40.29% | 61.79% |
| EER | 6.25% | 27.55% | 58.37% | 38.65% |
| APCER | 6.25% | 9.55% | 79.90% | 57.05% |
| BPCER | 2.50% | 57.25% | 11.70% | 20.25% |
| ACER | 4.38% | 33.40% | 45.80% | 38.65% |
| Avg. Inference Time (ms) | 75.47 | 9.36 | 59.20 | 6.93 |
Table 1: Performance Comparison of Anti-Spoofing Models on 8000-Image Test Dataset. EasyShield v2.5 shows superior overall performance.
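The presentation-attack metrics in Table 1 (APCER, BPCER, ACER) follow the standard ISO/IEC 30107-3 definitions. A minimal sketch of how they relate; the even 4000/4000 attack vs. bona fide split of the 8000-image test set used in the example is an assumption:

```python
def pad_metrics(n_attacks, attacks_accepted, n_bona_fide, bona_fide_rejected):
    """Compute presentation-attack detection metrics.

    APCER: proportion of attack (spoof) samples wrongly accepted as real.
    BPCER: proportion of bona fide (real) samples wrongly rejected as fake.
    ACER:  the average of the two.
    """
    apcer = attacks_accepted / n_attacks
    bpcer = bona_fide_rejected / n_bona_fide
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer

# Illustrative counts that reproduce EasyShield v2.5's row in Table 1,
# assuming an even 4000/4000 attack vs. bona fide split:
apcer, bpcer, acer = pad_metrics(4000, 250, 4000, 100)
# apcer = 0.0625 (6.25%), bpcer = 0.025 (2.50%), acer = 0.04375 (~4.38%)
```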
- Lightweight & Fast: Runs on edge devices in just ~75 ms per inference.
- High Accuracy: Detects print and replay attacks with 92.30% accuracy.
- Robust Classification: Clearly labels faces as “Real” or “Fake”.
- All-in-One Toolkit: Includes data prep, augmentation, training, and live testing tools.
- Efficient Architecture: Built on YOLOv12 nano for optimal performance.
EasyShield’s workflow handles everything from gathering and cleaning face images to training its lightweight detection model and rolling it out for live use. User-friendly Python tools (with GUIs when needed) guide you through each stage—from grabbing faces out of photos or videos and beefing up your dataset to building the model and checking its accuracy—so you can go from raw footage to real-time spoof protection in one streamlined process.
*Figure 4: Overview of the EasyShield system pipeline from data collection to inference.*
- Face Extractor Tool: Extracts faces from images/videos using MTCNN, crops with margin, resizes to 640x640; GUI included.
- Image Augmentor Tool: Applies data augmentations such as rotation, blur, brightness, contrast, and flipping to boost dataset diversity.
- Image Filtering Tool: Uses MTCNN to flag/remove low-quality or misaligned face images, helping clean the dataset manually or automatically.
- Dataset Preparation Script: Sorts images into YOLO format with train/valid/test folders and generates dataset.yaml for YOLO training.
- Easy Spoof Trainer Tool: Trains the YOLOv12 nano model on the prepared dataset, supports hyperparameter tuning, and outputs training metrics and best weights.
- Model Testing Tool: Loads a trained model in a GUI for real-time or file-based testing, showing Real/Fake predictions with confidence scores.
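In spirit, the augmentation step is only a few lines. A numpy-only sketch covering flipping, rotation, and brightness (the actual image_augmentor.py tool also applies blur and contrast, likely via OpenCV or PIL, and supports arbitrary rotation angles):

```python
import numpy as np

def augment(img, rng):
    """Apply a few simple augmentations to an HxWx3 uint8 image.

    A numpy-only sketch; the real image_augmentor.py may differ.
    """
    out = img.copy()
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.5:            # random 90-degree rotation
        out = np.rot90(out)
    delta = rng.integers(-40, 41)     # random brightness shift
    out = np.clip(out.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    return out

# Usage: feed each 640x640 face crop through augment() one or more
# times to multiply the effective dataset size.
rng = np.random.default_rng(42)
crop = np.full((640, 640, 3), 128, dtype=np.uint8)
augmented = augment(crop, rng)
```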
This section guides you through setting up the EasyShield development environment on both Windows and Linux-based systems, including edge devices.
Setting up the EasyShield development environment on Windows requires careful attention to specific versions of Python, NVIDIA CUDA, NVIDIA cuDNN, and several Python packages.
Prerequisites:
- NVIDIA GPU with CUDA support.
- Administrator privileges for some installation steps.
Installation Steps:
1. Install NVIDIA CUDA Toolkit:
   - Download and install CUDA Toolkit 11.8 from the NVIDIA CUDA Toolkit Archive.
2. Install NVIDIA cuDNN:
   - Download cuDNN v8.9.6 for CUDA 11.x from the NVIDIA cuDNN Archive. You will need to join the NVIDIA Developer Program (free) to download.
3. Create a Python Virtual Environment (Highly Recommended):
   - Open a Command Prompt or PowerShell.
   - Navigate to your project directory (e.g., `cd path\to\EasyShield-Anti-Spoofing-AI-Model`).
   - Create the virtual environment: `python -m venv easyshield_env`
   - Activate the virtual environment: `easyshield_env\Scripts\activate`
   - Your command prompt should now be prefixed with `(easyshield_env)`.
4. Install Python Dependencies:
   - Ensure pip is up-to-date: `python -m pip install --upgrade pip`
   - Install PyTorch and core packages (ensure this matches your CUDA 11.8 setup): `pip install ultralytics opencv-python numpy PyQt5 torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118`
   - Install FaceNet-PyTorch (without its own PyTorch dependency, as PyTorch is already installed): `pip install facenet-pytorch --no-deps`
   - Install all other dependencies from the `requirements.txt` file: `pip install -r requirements.txt`
Setting up EasyShield on Linux or an edge device (e.g., NVIDIA Jetson, or Raspberry Pi, where performance will be limited) involves similar principles but different commands and package considerations.
Prerequisites:
- Python 3.8+ (Python 3.10 recommended).
- For GPU acceleration on devices like NVIDIA Jetson, ensure NVIDIA JetPack SDK is installed, which includes CUDA, cuDNN, and TensorRT.
Installation Steps:
1. Install Python (if needed):
   - Most Linux distributions come with Python 3. Check with `python3 --version`.
   - If you need a specific version or it's missing, install it using your distribution's package manager (e.g., `sudo apt update && sudo apt install python3.10 python3.10-venv`).
2. Create a Python Virtual Environment (Highly Recommended):
   - Open a terminal.
   - Navigate to your project directory.
   - Create the virtual environment: `python3 -m venv easyshield_env`
   - Activate the virtual environment: `source easyshield_env/bin/activate`
   - Your terminal prompt should now be prefixed with `(easyshield_env)`.
3. Install Python Dependencies:
   - Ensure pip is up-to-date: `python -m pip install --upgrade pip`
   - Install the remaining dependencies from `requirements_linux.txt` (this file should contain packages specific to EasyShield that are not covered above, or versions pinned for Linux that differ from the main `requirements.txt`): `pip install -r requirements_linux.txt`
The creation of a high-quality dataset is fundamental to the success and robustness of the EasyShield anti-spoofing model. The workflow involves a sequence of steps, each facilitated by specialized tools provided within this project:
1. Data Collection (Manual): Collect diverse real and spoofed face data, including print and replay attacks, under varied conditions.
2. Face Extraction: Use `videos_and_images_face_extractor.py` to crop and resize detected faces from collected media into 640x640 images.
3. Image Augmentation: Use `image_augmentor.py` to apply transformations like rotation, blur, and brightness to increase dataset diversity.
4. Image Filtering: Use `image_filtring.py` with MTCNN to review and clean the dataset by removing misaligned or irrelevant face crops.
5. Dataset Preparation: Run `prepare_data.py` to split, organize, and format the dataset into YOLO structure and generate `dataset.yaml`.
*Figure 6: Example of the dataset structure after preparation, showing 'real' and 'fake' images categorized for training and validation (actual structure may vary based on the prepare_data.py script).*
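In outline, the dataset-preparation step can be sketched as follows. The split ratios, folder names, and `dataset.yaml` fields below are illustrative assumptions, not the exact output of `prepare_data.py`:

```python
import random
import shutil
from pathlib import Path

def prepare_dataset(src_dir, out_dir, split=(0.8, 0.1, 0.1), seed=42):
    """Split per-class folders (e.g., 'real', 'fake') into YOLO-style
    train/valid/test directories and write a dataset.yaml stub."""
    src, out = Path(src_dir), Path(out_dir)
    rng = random.Random(seed)
    classes = sorted(p.name for p in src.iterdir() if p.is_dir())
    for cls in classes:
        images = sorted((src / cls).glob("*"))
        rng.shuffle(images)
        n_train = int(len(images) * split[0])
        n_valid = int(len(images) * split[1])
        buckets = {
            "train": images[:n_train],
            "valid": images[n_train:n_train + n_valid],
            "test": images[n_train + n_valid:],
        }
        for bucket, files in buckets.items():
            dest = out / bucket / cls
            dest.mkdir(parents=True, exist_ok=True)
            for f in files:
                shutil.copy2(f, dest / f.name)
    # Minimal dataset.yaml; field names are assumptions.
    yaml_text = (
        f"path: {out}\n"
        "train: train\nval: valid\ntest: test\n"
        f"nc: {len(classes)}\n"
        f"names: {classes}\n"
    )
    (out / "dataset.yaml").write_text(yaml_text)
    return classes
```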
Training a new EasyShield anti-spoofing model involves using the Easy_Spoof_Trainer.py tool, which leverages the Ultralytics YOLOv12 framework.
Prerequisites:
- A fully prepared dataset in YOLO format, created using the Dataset Workflow. This includes a `dataset.yaml` file and the corresponding image/label directories.
- The EasyShield environment correctly set up with all dependencies installed, as per Section 5, Setup and Installation.
Steps to Train:
1. Navigate to the Trainer Tool Directory: Open your terminal or command prompt, activate your virtual environment (`easyshield_env`), and change to the directory containing the trainer script: `cd path/to/EasyShield-Anti-Spoofing-AI-Model/"Trained TOOL for YOLO/"`
2. Run the Training Script: Execute `Easy_Spoof_Trainer.py` with appropriate command-line arguments. A basic example: `python Easy_Spoof_Trainer.py`
3. Monitor Training and Results:
   - The training progress will be displayed in the terminal, including metrics like loss, accuracy, precision, and recall for each epoch.
   - Upon completion, the trained model (usually `best.pt` and `last.pt`), along with various performance charts (e.g., confusion matrix, ROC curve) and logs, will be saved in the `runs/train/YourExperimentName/` directory. The `best.pt` model is typically the one with the best validation performance.
1. Ensure you have a trained model file like `best.pt` and the EasyShield environment properly set up.
2. Navigate to the testing script directory based on your OS:
   - For Windows: `cd path/to/EasyShield-Anti-Spoofing-AI-Model/"Testing Code (windows)/"`
   - For Linux: `cd path/to/EasyShield-Anti-Spoofing-AI-Model/"Testing Code (Linux)/"`
3. Run the testing script: `python test_model.py`
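At its core, the Real/Fake decision shown in the GUI amounts to picking the higher-probability class and reporting its confidence. A minimal sketch; the class order, label names, and threshold used by `test_model.py` are assumptions:

```python
def label_prediction(probs, classes=("Fake", "Real"), threshold=0.5):
    """Map class probabilities to a Real/Fake label with a confidence score.

    Illustrative only: the actual class ordering, labels, and
    threshold in test_model.py may differ.
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    label = classes[best] if probs[best] >= threshold else "Uncertain"
    return label, probs[best]

# Example: a model output of [P(fake), P(real)] = [0.08, 0.92]
label, confidence = label_prediction([0.08, 0.92])  # ('Real', 0.92)
```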
The EasyShield project is organized as follows. This structure ensures that all code, tools, datasets, and documentation are easily accessible.
EasyShield-Anti-Spoofing-AI-Model/
├── Dataset_Demo_Exemple/ # Example dataset with 'fake' and 'real' subdirectories
│ ├── fake/
│ │ └── (image files: .jpg, .png, etc.)
│ └── real/
│ └── (image files: .jpg, .png, etc.)
├── EasyShield weights/ # Contains pre-trained model weights
│ ├── EasyShield V2.5 - nano (well mixed dataset) 120 epochs 16 batch/ # Latest recommended version
│ │ └── best.pt # Example of a trained model file
│ └── old versions (less acurate - less performence)/ # Older model versions for reference
│ └── (similar structure with .pt files)
├── Original YOLO v12 Models For training/ # Base YOLO models (Note: Project primarily uses YOLOv12, this dir might be for experimentation or legacy)
│ └── (YOLO model files like .pt or .yaml)
├── Testing Code (Linux)/ # Model testing scripts optimized or specific to Linux environments
│ └── test_model.py # GUI-based model testing script
├── Testing Code (windows)/ # Model testing scripts optimized or specific to Windows environments
│ └── test_model.py # GUI-based model testing script
├── Trained TOOL for YOLO/ # Core training script for the EasyShield model
│ └── Easy_Spoof_Trainer.py # Python script to train the YOLOv12 model
├── dataset preparing tools/ # Suite of tools for dataset creation, augmentation, and management
│ ├── data_collection.py # (Potentially) Script for initial data gathering or organization (if used)
│ ├── image_augmentor.py # Tool for applying data augmentation techniques
│ ├── image_filtring.py # Tool for manual dataset review and filtering (with MTCNN assistance)
│ ├── prepare_data.py # Script to format the dataset for YOLO training and generate dataset.yaml
│ └── videos_and_images_face_extractor.py # Tool to extract faces from videos and images
├── README.md # This comprehensive documentation file
├── requirements.txt # Python package dependencies for Windows and general use
└── requirements_linux.txt # Python package dependencies tailored for Linux/edge device setups
While EasyShield offers significant advancements in face anti-spoofing, it is crucial to consider the ethical implications and practical limitations inherent in such technology. Responsible development and deployment are paramount.
- Bias and Fairness:
  - Deep learning models can inadvertently learn and perpetuate biases present in their training data. If the dataset is not sufficiently diverse (e.g., in terms of age, gender, ethnicity, skin tone, and environmental conditions like lighting), the model's performance may vary significantly across different user groups. This can lead to some groups being more susceptible to false positives or false negatives.
  - Mitigation: Continuous efforts are needed to curate balanced, representative datasets and to test for and mitigate bias in model performance.
- Privacy:
  - The collection, storage, and use of facial data are subject to stringent privacy regulations (e.g., GDPR, CCPA) and ethical guidelines. Facial images are considered sensitive personal information.
  - Mitigation: Systems using EasyShield must ensure user consent is obtained where required; data must be anonymized or pseudonymized where possible, stored securely, and processed only for the intended purposes. Transparency with users about data handling practices is essential.
EasyShield is a lightweight face anti-spoofing system designed for edge devices. It combines a compact YOLOv12 nano model with a full pipeline for data preparation, training, and testing. It outperforms many existing methods in key performance areas. The included tools cover the whole process from data collection to real-time evaluation, making it easy to develop and improve the system.
The development of EasyShield and the information presented in this document draw upon knowledge from various sources in the fields of machine learning, computer vision, and face anti-spoofing. Key resources and technologies include:
- Ultralytics YOLOv12: The core object detection and classification framework used. https://github.com/ultralytics/ultralytics
- PyTorch: The deep learning framework used by YOLOv12 and for model development. https://pytorch.org/
- MTCNN (Multi-task Cascaded Convolutional Networks): Often used for face detection. A popular implementation can be found at https://github.com/ipazc/mtcnn (though the specific face detector in the tools may vary).
- Face Recognition and Anti-Spoofing Research:
- Rosebrock, A. (2021). PyImageSearch Gurus course. PyImageSearch. (General computer vision and deep learning resource).
- CelebA-Spoof Dataset: A large-scale face anti-spoofing dataset. Zhang, Y., et al. (2020). CelebA-Spoof: Large-Scale CelebFace Anti-Spoofing Dataset with Rich Annotations. https://github.com/Davidzhangyuanhan/CelebA-Spoof



