Commit 6a8ce6d

Merge branch 'main' into test-sana-lora-training
2 parents 1c6c5ee + 9c0e20d commit 6a8ce6d

File tree

1 file changed: +2 −2 lines changed

examples/dreambooth/README_sana.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -73,7 +73,7 @@ This will also allow us to push the trained LoRA parameters to the Hugging Face
 Now, we can launch training using:
 
 ```bash
-export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_diffusers"
+export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers"
 export INSTANCE_DIR="dog"
 export OUTPUT_DIR="trained-sana-lora"
 
@@ -124,4 +124,4 @@ We provide several options for optimizing memory optimization:
 * `cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
 * `--use_8bit_adam`: When enabled, we will use the 8bit version of AdamW provided by the `bitsandbytes` library.
 
-Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.
+Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.
````

The substantive change is the first hunk, which switches the README's example checkpoint to the BF16 variant. In the second hunk the removed and added lines have identical text; the only difference is a trailing newline at the end of the file.
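For context, the exported variables from the updated README feed into the DreamBooth LoRA training launch. The sketch below restates those exports and shows a typical `accelerate launch` invocation; the script name and flags are assumptions based on the usual diffusers examples layout and are not part of this commit's diff:

```shell
# Environment from the updated README; the BF16 checkpoint is this commit's change.
export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-sana-lora"

# Hypothetical launch command -- script name and flags follow the common
# diffusers examples convention and are NOT taken from this diff:
#
#   accelerate launch train_dreambooth_lora_sana.py \
#     --pretrained_model_name_or_path="$MODEL_NAME" \
#     --instance_data_dir="$INSTANCE_DIR" \
#     --output_dir="$OUTPUT_DIR" \
#     --cache_latents \
#     --use_8bit_adam

echo "Training $MODEL_NAME on $INSTANCE_DIR -> $OUTPUT_DIR"
```

The `--cache_latents` and `--use_8bit_adam` flags correspond to the memory-saving options described in the second hunk of the diff.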
