Notice : This is not the actual Celeb-DF-v1 dataset. It is the AIHUB deepfake-manipulated dataset, preprocessed into the Celeb-DF-v1 layout by this tool's preprocessing module.
weights : root/training/weights
pretrained : root/training/pretrained
dataset : root/datasets
Structure of the Preprocessed Dataset
# This image dataset was preprocessed by root/preprocessing/preprocess.py
# .mp4 -> .png frames
# facial landmarks stored as NumPy arrays (.npy)
Celeb-DF-v1
├── Celeb-real                # Korean face images labelled real
│   ├── frames
│   │   └── uuid_directories …
│   │       └── *.png
│   └── landmarks
│       └── uuid_directories …
│           └── *.npy
├── Celeb-synthesis           # Korean face images labelled fake
│   ├── frames
│   │   └── uuid_directories …
│   │       └── *.png
│   └── landmarks
│       └── uuid_directories …
│           └── *.npy
└── Youtube-real              # Korean face images labelled real (assumed to be YouTube-sourced)
    ├── frames
    │   └── uuid_directories …
    │       └── *.png
    └── landmarks
        └── uuid_directories …
            └── *.npy
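The layout above pairs each frame with its landmark file by shared uuid directory and filename stem. A minimal sketch of walking that layout, assuming the tree shown in this README (`list_frame_landmark_pairs` is a hypothetical helper name, not part of the repo):

```python
from pathlib import Path

def list_frame_landmark_pairs(root):
    """Yield (frame_png, landmark_npy) pairs from the preprocessed layout.

    Assumes <root>/<label>/frames/<uuid>/<n>.png and
    <root>/<label>/landmarks/<uuid>/<n>.npy share uuid and stem,
    as in the Celeb-DF-v1-style tree above.
    """
    root = Path(root)
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        frames = label_dir / "frames"
        landmarks = label_dir / "landmarks"
        if not frames.is_dir():
            continue
        for png in sorted(frames.rglob("*.png")):
            # Mirror the frame's relative path under landmarks/, swap suffix.
            npy = landmarks / png.relative_to(frames).with_suffix(".npy")
            if npy.is_file():
                yield png, npy
```

This skips frames whose landmark file is missing rather than raising, which is usually the safer default when a preprocessing run was interrupted.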
1. Clone the repository and build the Docker image
# 1. Clone the repository
git clone https://github.com/IIVRIICOLKM/DeepfakeBench
cd DeepfakeBench
# 2. Build the Docker image
docker build -t deepfakebench .
# 3. Once the build finishes, run a container with this command
docker run --gpus all -itd --name con1 -p 8888:8888 --volume="$(pwd)"/:/deep_main --shm-size 64G deepfakebench
docker exec -it con1 bash
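The `docker run` line packs several choices: GPU access (`--gpus all`), port 8888 published for Jupyter, the repository mounted at `/deep_main`, and 64G of shared memory for data-loader workers. A small sketch that composes the same command string without executing it, so the container name and image can vary (`deepfakebench_run_cmd` is a hypothetical helper name; `con1`/`deepfakebench` are the names from this README):

```shell
#!/bin/sh
# Compose (but do not execute) the `docker run` command used above.
# The helper name is hypothetical; the flags mirror this README.
deepfakebench_run_cmd() {
  name="$1"; image="$2"
  printf 'docker run --gpus all -itd --name %s -p 8888:8888 --volume="%s"/:/deep_main --shm-size 64G %s\n' \
    "$name" "$(pwd)" "$image"
}
deepfakebench_run_cmd con1 deepfakebench
```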
2. Install required packages
# Required Packages
pip install albumentations==1.1.0
pip install lmdb==1.7.3
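Since both packages are pinned to exact versions, it can help to verify the pins after installation. A minimal sketch using the standard library's `importlib.metadata` (the helper names are mine, not part of the repo):

```python
import importlib.metadata

# The pins from this README.
PINNED = ["albumentations==1.1.0", "lmdb==1.7.3"]

def parse_pin(req):
    """Split a 'name==version' requirement into (name, version)."""
    name, _, version = req.partition("==")
    return name, version

def check_pins(pins):
    """Return {name: (wanted, installed_or_None)}; never raises."""
    report = {}
    for req in pins:
        name, wanted = parse_pin(req)
        try:
            installed = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            installed = None
        report[name] = (wanted, installed)
    return report
```

Running `check_pins(PINNED)` inside the container shows at a glance which pin is missing or mismatched.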
3. Install Jupyter Notebook and set up the Jupyter environment
# 1. Install Jupyter Notebook with pip
pip install jupyter notebook
# 2. Generate the default config
jupyter notebook --generate-config -y
# 3. Run Jupyter Notebook
jupyter notebook --ip 0.0.0.0 --allow-root
# URL : http://localhost:8888/?token=<your_access_token>
# The access token printed at startup serves as the initial password
4. Run deep_main/training/test_with_korean_dataset.ipynb and inspect the result cells