Stable Diffusion trainer with scalable dataset size and hardware usage.
[!] IN EARLY DEVELOPMENT, CONFIGS AND ARGUMENTS SUBJECT TO BREAKING CHANGES
- Can run with 10GB or less of VRAM without losing speed, thanks to xformers memory-efficient attention and int8 optimizers.
- Aspect Ratio Bucketing
- DreamBooth
- CLIP skip
- WandB logging
Linux is recommended; on Windows you have to install bitsandbytes manually to use the int8 optimizers.
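For reference, a minimal sketch of that manual step (assuming a Windows-compatible bitsandbytes build exists for your Python and CUDA versions):

```sh
# Assumption: a Windows-compatible bitsandbytes build is available for your setup.
pip install bitsandbytes
# Verify the import works before training:
python -c "import bitsandbytes"
```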
```sh
conda env create -f environment.yml
conda activate ssdt
```
The CUDA toolkit and torch should be installed manually.
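For example, assuming CUDA 11.8 (adjust the `cuXXX` suffix to match the toolkit version you installed):

```sh
# Install a CUDA-enabled torch build from the PyTorch package index (here: CUDA 11.8).
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```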
```sh
pip install -r requirements.txt
```
Documentation: `configs/README.md`.
`configs/native.yaml` (for native training) and `configs/dreambooth.yaml` (for DreamBooth) are provided as examples.
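For example, you can copy one of them as a starting point for the `your_config.yaml` used in the commands below:

```sh
# Copy the native training example, then edit the copy to point at your dataset.
cp configs/native.yaml configs/your_config.yaml
```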
If you are running native training, proceed to the next step.
If you are running DreamBooth, run this to generate class (regularization) images:
```sh
python gen_class_imgs.py --config configs/your_config.yaml
```
Then run the training:
```sh
python train.py --config configs/your_config.yaml
```
Note that although the checkpoints have the .ckpt extension, they are NOT directly usable in interfaces based on the official SD code base, such as WebUI. To convert them into SD checkpoints:
```sh
python convert_to_sd.py PATH_TO_THE_CKPT OUTPUTDIR --no-text-encoder --unet-dtype fp16
```
`--no-text-encoder --unet-dtype fp16` results in a ~2GB checkpoint containing fp16 UNet and fp32 VAE weights, which WebUI supports loading. To further reduce the checkpoint size to ~1.6GB when target clients already have an external VAE, add `--no-vae` to remove the VAE weights from the checkpoint, leaving only the fp16 UNet weights.
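For example (hypothetical checkpoint path and output directory), to produce the smaller UNet-only checkpoint:

```sh
# Hypothetical paths; yields a ~1.6GB checkpoint with fp16 UNet weights only.
python convert_to_sd.py checkpoints/last.ckpt converted/ --no-text-encoder --unet-dtype fp16 --no-vae
```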
If you are not using WebUI and are having issues, remove `--no-text-encoder`.
You may change `trainer.accelerator`; see the docs for the available options.
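As a sketch, assuming the `trainer` section of the config maps to PyTorch Lightning `Trainer` arguments (as the `trainer.accelerator` key suggests):

```yaml
# Assumption: trainer.* keys are forwarded to PyTorch Lightning's Trainer.
trainer:
  accelerator: gpu  # e.g. cpu, gpu, tpu, auto
```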