
Industrial Image Anomaly Detection

A unified comparison framework for zero-shot and few-shot industrial image anomaly detection, enabling systematic evaluation of state-of-the-art models across multiple industrial datasets. The project implements and compares two leading approaches: AnomalyDINO (patch-based few-shot anomaly detection built on the DINOv2 foundation model) and MuSc (zero-shot anomaly detection via Mutual Scoring of the Unlabeled Images), providing researchers and practitioners with a standardized benchmark for resource-constrained industrial scenarios where labeled anomaly data is scarce or unavailable.

🚀 Features

  • Multiple Model Support: Implementations of AnomalyDINO and MuSc anomaly detection models
  • Multiple Dataset Support: MVTec AD, MVTec LOCO AD, BTAD, and VisA datasets
  • Flexible Configuration: Hydra-based configuration system for easy experimentation
  • MLflow Integration: Comprehensive experiment tracking and model management
  • Various Backbones: Support for DINOv2, CLIP, and other vision transformer backbones
  • Few-Shot Learning: Configurable few-shot learning scenarios (0, 1, 2, 4, 8, 16, full shots)
  • Comprehensive Metrics: Detailed evaluation metrics for different datasets
  • Visualization Tools: Built-in visualization utilities for results analysis

📋 Requirements

  • Python 3.10+
  • PyTorch with CUDA support
  • FAISS (for efficient similarity search)
  • MLflow (for experiment tracking)
  • Hydra (for configuration management)

🔧 Installation

1. Clone the Repository

git clone https://github.com/your-username/industrial-image-anomaly-detection.git
cd industrial-image-anomaly-detection

2. Setup Conda Environment

conda env update --prefix ./.conda --file environment.yaml --prune
conda activate ./.conda

3. Dataset Setup

Download the required datasets (MVTec AD, MVTec LOCO AD, BTAD, VisA) from their official sources, then update the dataset paths in the configuration files under conf/dataset/.
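A dataset configuration under conf/dataset/ might look like the following sketch (the field names here are illustrative assumptions, not the repository's actual schema; point data_root at your local download location):

```yaml
# conf/dataset/mvtec_ad.yaml — illustrative sketch, field names assumed
name: mvtec_ad
data_root: /path/to/datasets/mvtec_ad   # update to your local path
categories:
  - bottle
  - cable
  - capsule
image_size: 224
```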

🎯 Usage

Basic Training and Evaluation

Run the main script with default configuration:

python main.py

Custom Configuration

You can override any configuration parameter:

# Change model and dataset
python main.py model=musc dataset=mvtec_ad

# Modify few-shot settings
python main.py shots=4 seed=42

# Enable/disable MLflow tracking
python main.py mlflow_enable=false

Configuration Options

Models

  • anomalydino: AnomalyDINO model with DINOv2 backbone
  • musc: MuSc model with CLIP backbone

Datasets

  • mvtec_ad: MVTec Anomaly Detection dataset
  • mvtec_loco_ad: MVTec LOCO AD dataset (logical and structural)
  • btad: BTAD dataset
  • visa: VisA dataset

Few-Shot Learning

  • shots: Number of reference images (0, 1, 2, 4, 8, 16, or "full")
  • seed: Random seed for reproducibility
  • sampler_type: Sampling strategy ("musc" for random sampling, "anomalydino" for sequential selection)
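The difference between the two sampling strategies above can be sketched as follows (a hypothetical helper for illustration, not the repository's actual sampler code):

```python
import random

def sample_references(paths, shots, sampler_type="musc", seed=42):
    """Select few-shot reference images from a list of image paths.

    sampler_type "musc": seeded random sample.
    sampler_type "anomalydino": the first `shots` images in sequence.
    `shots` may be an int or the string "full".
    """
    if shots == "full":
        return list(paths)
    if sampler_type == "musc":
        rng = random.Random(seed)  # seeded for reproducibility
        return rng.sample(list(paths), shots)
    # "anomalydino": deterministic sequential selection
    return list(paths)[:shots]
```

Using a fixed seed makes the "musc" strategy reproducible across runs, which matters when comparing models on identical reference sets.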

📊 Supported Datasets

| Dataset       | Categories | Image Types                 | Anomaly Types                |
|---------------|------------|-----------------------------|------------------------------|
| MVTec AD      | 15         | Industrial objects/textures | Defects, damages             |
| MVTec LOCO AD | 5          | Industrial objects          | Logical/structural anomalies |
| BTAD          | 3          | Industrial products         | Surface defects              |
| VisA          | 12         | Industrial objects          | Various anomalies            |

πŸ—οΈ Architecture

Models

AnomalyDINO

  • Backbone: DINOv2 Vision Transformer
  • Method: Feature extraction + k-NN similarity search
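The feature extraction + k-NN step can be sketched in a few lines of NumPy (a simplified illustration of the scoring idea, not the repository's FAISS-backed implementation):

```python
import numpy as np

def knn_anomaly_scores(ref_feats, test_feats, k=1):
    """Score test patch features by their distance to the k nearest
    reference (nominal) patch features; higher score = more anomalous.

    ref_feats:  (n_ref, dim) patch features from few-shot reference images
    test_feats: (n_test, dim) patch features from a test image
    """
    # Pairwise Euclidean distances, shape (n_test, n_ref)
    d = np.linalg.norm(test_feats[:, None, :] - ref_feats[None, :, :], axis=-1)
    # Mean distance to the k nearest reference patches
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)
```

In practice FAISS replaces the brute-force distance matrix for speed, but the anomaly score per patch is the same idea: distance to the nearest nominal patches.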

MuSc (Mutual Scoring of the Unlabeled Images)

  • Backbone: CLIP Vision Transformer
  • Method: Zero-shot mutual scoring, in which unlabeled test images score one another using multi-scale patch features
  • Components: LNAMD, MSM, RsCIN, MSM+
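The mutual scoring idea behind MSM can be sketched as follows (a heavily simplified illustration using plain nearest-neighbour distances, not the repository's implementation):

```python
import numpy as np

def mutual_scores(feats):
    """Mutual scoring sketch: each patch of each unlabeled test image is
    scored by its nearest-neighbour distance to the patches of every
    OTHER test image, averaged over those images. Anomalous patches are
    rare across the test set, so they score high.

    feats: (n_images, n_patches, dim) patch features
    """
    n = feats.shape[0]
    scores = np.zeros(feats.shape[:2])
    for i in range(n):
        per_other = []
        for j in range(n):
            if i == j:
                continue  # an image never scores itself
            d = np.linalg.norm(feats[i][:, None, :] - feats[j][None, :, :], axis=-1)
            per_other.append(d.min(axis=1))  # NN distance per patch of image i
        scores[i] = np.mean(per_other, axis=0)
    return scores
```

Because every image is scored against the rest of the unlabeled test set, no nominal reference images are needed at all, which is what makes the approach zero-shot.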

Project Structure

├── conf/                   # Hydra configuration files
│   ├── config.yaml         # Main configuration
│   ├── dataset/            # Dataset configurations
│   └── model/              # Model configurations
├── datasets/               # Dataset implementations
├── metrics/                # Metrics implementations for each dataset
├── models/                 # Model implementations
│   ├── anomalydino/        # AnomalyDINO implementation
│   ├── musc/               # MuSc implementation
│   └── backbone/           # Backbone implementations
├── notebooks/              # Jupyter notebooks for exploration and image creation
├── utils/                  # Utility functions
└── main.py                 # Main training/evaluation script

📈 Experiment Tracking

The project integrates with MLflow for comprehensive experiment tracking:

  1. Start MLflow server:

mlflow server --host 0.0.0.0 --port 5000

  2. Access MLflow UI: Open http://localhost:5000 in your browser

  3. Configuration: Enable/disable MLflow tracking in conf/config.yaml:

mlflow_enable: true
mlflow_run_name: "experiment_name"

🎨 Visualization

The project includes visualization tools for:

  • Sample images with anomaly masks
  • Model predictions vs ground truth
  • Feature maps and attention visualizations
  • Quantitative results plots

Enable visualization in the configuration:

visualize: true
num_samples: 5

📊 Metrics

The project includes multiple metrics implementations for each supported dataset or model in the metrics/ directory.

Each metrics file implements a compute_metrics() function that takes ground truth and prediction arrays and returns comprehensive evaluation metrics for both image-level and pixel-level anomaly detection performance.
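A minimal compute_metrics() along those lines might look like this (a sketch reporting only a rank-based AUROC; the repository's implementations also cover F1-Max, AP, and AUPRO):

```python
import numpy as np

def auroc(y_true, y_score):
    """Rank-based AUROC: probability that a random positive outranks a
    random negative (ties get arbitrary ranks in this sketch)."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = np.asarray(y_true) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def compute_metrics(gt_sp, pr_sp, gt_px, pr_px):
    """Image-level (sp) and pixel-level (px) metrics from ground-truth
    labels/masks and predicted scores/maps."""
    return {
        "image_auroc": auroc(np.asarray(gt_sp), np.asarray(pr_sp)),
        "pixel_auroc": auroc(np.asarray(gt_px).ravel(), np.asarray(pr_px).ravel()),
    }
```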

🔧 Development

Adding New Datasets

  1. Create a new dataset class in datasets/
  2. Implement the required dataset interface
  3. Add configuration file in conf/dataset/
  4. Create corresponding metrics implementation in metrics/

Adding New Metrics

  1. Create a metrics file in metrics/ following the pattern metrics_<dataset_name>.py
  2. Implement the compute_metrics(gt_sp, pr_sp, gt_px, pr_px) function
  3. Include metrics like: AUROC, F1-Max, AP (image-level) and AUROC, F1-Max, AUPRO (pixel-level)
  4. Reference existing implementations: metrics/anomalydino.py, metrics/musc.py, metrics/mvtec_ad.py

Adding New Models

  1. Create a new model class in models/your_model/
  2. Implement the required interface methods
  3. Add configuration file in conf/model/
  4. Update the main script imports

Adding New Backbones

  1. Create a new backbone class in models/backbone/ inheriting from BaseBackbone
  2. Implement the required abstract methods:
    • load_pretrained_model(): Load the pretrained weights for the backbone
    • extract_features(images): Extract features from input images
  3. Consider creating model-specific variants (e.g., YourBackboneMuSc, YourBackboneAnomalyDINO)
  4. Register the backbone in backbone_factory.py in the appropriate factory functions
  5. Update model configurations to use the new backbone
  6. Test compatibility with existing models (AnomalyDINO, MuSc)
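Putting steps 1–3 together, a new backbone might be sketched like this (BaseBackbone's real signatures live in models/backbone/; the ones below are assumptions derived from the method names above):

```python
from abc import ABC, abstractmethod

class BaseBackbone(ABC):
    """Sketch of the backbone interface described above."""

    @abstractmethod
    def load_pretrained_model(self):
        """Load the pretrained weights for the backbone."""

    @abstractmethod
    def extract_features(self, images):
        """Extract patch features from a batch of input images."""

class MyBackboneAnomalyDINO(BaseBackbone):
    """Model-specific variant as suggested in step 3 (hypothetical)."""

    def load_pretrained_model(self):
        # Real code would load weights, e.g. via torch.hub or timm
        self.loaded = True

    def extract_features(self, images):
        # Placeholder: real code would run the ViT and return patch tokens
        return [[0.0] * 4 for _ in images]
```

Because the abstract methods are enforced by the ABC, forgetting to implement one fails loudly at instantiation rather than at inference time.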

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

This project incorporates code from several research works, notably AnomalyDINO and MuSc.

