
Commit 1066de8

[Qwen LoRA training] fix bug when offloading (huggingface#12440)
* fix bug when offload and cache_latents both enabled
1 parent 2d69bac commit 1066de8

1 file changed: +1 -1 lines changed

examples/dreambooth/train_dreambooth_lora_qwen_image.py

Lines changed: 1 addition & 1 deletion
@@ -1338,7 +1338,7 @@ def compute_text_embeddings(prompt, text_encoding_pipeline):
                     batch["pixel_values"] = batch["pixel_values"].to(
                         accelerator.device, non_blocking=True, dtype=vae.dtype
                     )
-                latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
+                    latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
                 if train_dataset.custom_instance_prompts:
                     with offload_models(text_encoding_pipeline, device=accelerator.device, offload=args.offload):
                         prompt_embeds, prompt_embeds_mask = compute_text_embeddings(
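
For context, here is a minimal, self-contained sketch of the pattern behind the fix. It is not the training script itself: this offload_models is a toy re-implementation of the offloading helper, and vae_encoder, pixel_values, and the device handling are illustrative assumptions. It demonstrates why the encode call has to run inside the offload context: when offloading is enabled, the helper moves the model back to the CPU as soon as the context exits.

from contextlib import contextmanager

import torch
from torch import nn


@contextmanager
def offload_models(*models: nn.Module, device: torch.device, offload: bool = True):
    # Toy stand-in for the script's offloading helper (an assumption, not the
    # real implementation): move the models to `device` for the duration of
    # the block, then back to the CPU on exit when offloading is enabled.
    for m in models:
        m.to(device)
    try:
        yield
    finally:
        if offload:
            for m in models:
                m.to("cpu")


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vae_encoder = nn.Conv2d(3, 4, kernel_size=1)  # stand-in for the real VAE
latents_cache = []
pixel_values = torch.randn(1, 3, 64, 64)

with torch.no_grad():
    with offload_models(vae_encoder, device=device, offload=True):
        pixel_values = pixel_values.to(device)
        # Encode inside the context, while the model is still on `device`.
        # After the context exits the model is back on the CPU, and encoding
        # accelerator-resident pixel values there would hit a device mismatch.
        latents_cache.append(vae_encoder(pixel_values))

The same reasoning applies to the script: with --cache_latents and --offload both set, encoding after the offload context pairs a CPU-offloaded VAE with accelerator tensors, which is what the one-line change above corrects.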
