Commit 6cf0be5

fix warning log for Transformer SD3 (huggingface#8496)
fix warning log
1 parent ec068f9 commit 6cf0be5

File tree

1 file changed (+4, -3 lines)

src/diffusers/models/transformers/transformer_sd3.py

Lines changed: 4 additions & 3 deletions

@@ -282,9 +282,10 @@ def forward(
             # weight the lora layers by setting `lora_scale` for each PEFT layer
             scale_lora_layers(self, lora_scale)
         else:
-            logger.warning(
-                "Passing `scale` via `joint_attention_kwargs` when not using the PEFT backend is ineffective."
-            )
+            if joint_attention_kwargs is not None and joint_attention_kwargs.get("scale", None) is not None:
+                logger.warning(
+                    "Passing `scale` via `joint_attention_kwargs` when not using the PEFT backend is ineffective."
+                )
 
         height, width = hidden_states.shape[-2:]

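For context, below is a minimal, runnable sketch of the guard this commit adds. It is not the model's actual forward method: `USE_PEFT_BACKEND` here is a plain module-level stand-in for diffusers' real flag, and `maybe_warn_about_scale` is a hypothetical helper. It demonstrates that, after the change, the warning fires only when a `scale` key is actually present in `joint_attention_kwargs`, instead of on every call made without the PEFT backend.

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("transformer_sd3_sketch")

# Illustrative stand-in for diffusers' PEFT availability flag.
USE_PEFT_BACKEND = False


def maybe_warn_about_scale(joint_attention_kwargs):
    """Warn only when `scale` was actually passed (the behavior after this commit)."""
    if USE_PEFT_BACKEND:
        # The real model calls scale_lora_layers(self, lora_scale) here instead of warning.
        return
    # Patched guard: previously the warning was emitted unconditionally in this branch.
    if joint_attention_kwargs is not None and joint_attention_kwargs.get("scale", None) is not None:
        logger.warning(
            "Passing `scale` via `joint_attention_kwargs` when not using the PEFT backend is ineffective."
        )


maybe_warn_about_scale(None)                 # before the fix: warned; now: silent
maybe_warn_about_scale({"other_kwarg": 1})   # no `scale` key, so still silent
maybe_warn_about_scale({"scale": 0.5})       # warns, as intended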