# DAS3D: Dual-modality Anomaly Synthesis for 3D Anomaly Detection
This is the official implementation of the paper *DAS3D: Dual-modality Anomaly Synthesis for 3D Anomaly Detection*.

Synthesizing anomaly samples has proven to be an effective strategy for self-supervised 2D industrial anomaly detection. However, this approach has rarely been explored in multi-modality anomaly detection, particularly when 3D and RGB images are involved. In this paper, we propose a novel dual-modality augmentation method for 3D anomaly synthesis that is simple and capable of mimicking the characteristics of 3D defects. Building on our anomaly synthesis method, we introduce a reconstruction-based discriminative anomaly detection network, in which a dual-modal discriminator fuses the original and reconstructed embeddings of the two modalities for anomaly detection. Additionally, we design an augmentation dropout mechanism to enhance the generalizability of the discriminator. Extensive experiments show that our method outperforms state-of-the-art methods in detection precision and achieves competitive segmentation performance on both the MVTec 3D-AD and Eyecandies datasets.
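The general idea of dual-modality anomaly synthesis can be sketched as follows: a noise mask selects a region, an external texture is blended into the RGB image there, and the depth map is perturbed in the same region so that both modalities carry the synthetic defect. This is a minimal illustrative example, not the paper's implementation — the function names, the simple `np.kron` upsampling standing in for true Perlin noise, and all parameter values are assumptions.

```python
import numpy as np

def make_noise_mask(h, w, threshold=0.65, scale=8, rng=None):
    """Smooth random noise upsampled to (h, w), thresholded to a binary mask.

    Assumes h and w are divisible by `scale`. A real implementation would
    use Perlin noise; np.kron block-upsampling is a crude stand-in.
    """
    rng = rng or np.random.default_rng(0)
    coarse = rng.random((scale, scale))
    noise = np.kron(coarse, np.ones((h // scale, w // scale)))
    noise = (noise - noise.min()) / (np.ptp(noise) + 1e-8)  # normalize to [0, 1]
    return (noise > threshold).astype(np.float32)

def synthesize_anomaly(rgb, depth, texture, threshold=0.65, beta=0.5, depth_shift=0.2):
    """Inject a synthetic defect into both modalities inside one shared mask."""
    mask = make_noise_mask(*rgb.shape[:2], threshold=threshold)
    m3 = mask[..., None]
    # RGB: blend an external texture (e.g. a DTD crop) into the masked region.
    aug_rgb = rgb * (1 - m3) + (beta * texture + (1 - beta) * rgb) * m3
    # Depth: offset the same region, mimicking a dent or bump in the geometry.
    aug_depth = depth + depth_shift * mask
    return aug_rgb, aug_depth, mask
```

Pixels outside the mask are untouched in both modalities, so the mask doubles as a pixel-level ground-truth label for the discriminator.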
To set up the environment, please run:

```shell
conda create -n das3d python=3.8 -y && conda activate das3d
pip install --upgrade pip
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```
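After installation, you may want to verify that the CUDA build of PyTorch is visible. This check is a hypothetical convenience, not part of the repo:

```python
# Sanity-check the environment: confirm torch is importable and sees a GPU.
import importlib.util

def cuda_torch_available():
    """Return True if torch is importable and reports a usable CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

print("CUDA-enabled torch:", cuda_torch_available())
```

If this prints `False` on a GPU machine, the cu118 wheel above likely did not match your driver.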
For the MVTec 3D-AD and Eyecandies datasets, we download and pre-process the data following M3DM. For the DTD dataset, please run:
```shell
wget https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz
tar -xf dtd-r1.0.1.tar.gz
rm dtd-r1.0.1.tar.gz
```
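As an optional sanity check (not part of the repo), you can confirm the extraction succeeded: the official DTD r1.0.1 release contains one sub-directory per texture class under `dtd/images`, 47 classes in total.

```python
from pathlib import Path

def list_texture_classes(dtd_root):
    """Return the sorted texture-class directory names under <dtd_root>/images."""
    images = Path(dtd_root) / "images"
    return sorted(p.name for p in images.iterdir() if p.is_dir())

# Run the check only if the extracted dataset is present in the working directory.
if Path("dtd/images").is_dir():
    print(len(list_texture_classes("dtd")), "texture classes found (expected 47)")
```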
To train on your local machine, please run:

```shell
python trainer.py --ngpu 1 --obj_id -1 --layer_size 2layer --mode_type Fusion0 --data_dir <your-3d-data-dir> --dataset_type Mvtec3D_AD --test_bs 4 --drop_p 0.5 --perlin_t 0.65 --low_peak 1.5 --high_peak 0.3 --min_noise 0.1 --exp_name image_das3d
```
Before evaluation, update the model weight path `mvted3dad.checkpoint_fusion.<category>` in `./checkpoint/checkpoint.yaml`. Then run:

```shell
python test.py --ngpu 1 --obj_id -1 --layer_size 2layer --mode_type Fusion0 --data_dir <your-3d-data-dir>
```