  --validation_prompt="a 3dicon, a llama eating ramen" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
- --rank=16 \
+ --rank=8 \
  --learning_rate=2e-4 \
  --report_to="wandb" \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
+ --lr_scheduler="constant_with_warmup" \
+ --lr_warmup_steps=100 \
  --max_train_steps=1000 \
- --cache_latents\
+ --cache_latents \
  --gradient_checkpointing \
  --validation_epochs=25 \
  --seed="0" \
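The scheduler change above is easy to sanity-check outside the training script. Below is a minimal sketch, assuming the script passes `--lr_scheduler` and `--lr_warmup_steps` to `diffusers.optimization.get_scheduler` (as the DreamBooth LoRA scripts generally do); the dummy parameter and the printed steps are illustrative only:

```python
import torch
from diffusers.optimization import get_scheduler

# A dummy parameter stands in for the LoRA weights being optimized.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-4)

# "constant_with_warmup" ramps the LR linearly from 0 to 2e-4 over the
# first 100 steps, then holds it constant until step 1000.
lr_scheduler = get_scheduler(
    "constant_with_warmup",
    optimizer=optimizer,
    num_warmup_steps=100,
    num_training_steps=1000,
)

for step in range(1000):
    optimizer.step()
    lr_scheduler.step()
    if step in (0, 50, 99, 100, 999):
        print(f"step {step}: lr = {lr_scheduler.get_last_lr()[0]:.2e}")
```

With the old `constant`/`--lr_warmup_steps=0` settings the learning rate starts at the full 2e-4 immediately; the warmup variant eases into it, which tends to stabilize the earliest LoRA updates.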
@@ -128,6 +115,5 @@ We provide several options for memory optimization:
* `--offload`: When enabled, we will offload the text encoder and VAE to the CPU when they are not used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done; see the sketch after this list.
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.
-* `--instance_prompt` and no `--caption_column`: when only an instance prompt is provided, we will pre-compute the text embeddings and remove the text encoders from memory once done.
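To make `--cache_latents` concrete, here is a minimal sketch of the idea, not the script's actual code: encode every image once, keep only the latents, and free the VAE. The checkpoint id and the random image batch are placeholders; the real script reads both from `--pretrained_model_name_or_path` and the training data.

```python
import torch
from diffusers import AutoencoderKL

# Placeholder checkpoint id for illustration; the script loads the VAE from
# whatever --pretrained_model_name_or_path points to.
vae = AutoencoderKL.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")

# Stand-in for one preprocessed 1024x1024 training image.
pixel_values = torch.randn(1, 3, 1024, 1024, device="cuda", dtype=vae.dtype)

cached_latents = []
with torch.no_grad():
    latents = vae.encode(pixel_values).latent_dist.sample()
    # Training code typically also applies vae.config.scaling_factor here.
    cached_latents.append(latents.cpu())

# Every image is now represented by its latent, so the VAE can be freed.
del vae
torch.cuda.empty_cache()
```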
Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/) of the `HiDreamImagePipeline` to learn more about the model.