Commit c0c1168

Make passing the IP Adapter mask to the attention mechanism optional (#10346)
Make passing the IP Adapter mask to the attention mechanism optional: a `None` entry in `ip_adapter_masks` is now skipped instead of failing validation, so a mask only needs to be supplied for the IP Adapters that actually use one.
1 parent 6dfaec3 commit c0c1168

File tree

1 file changed (+4, −0)

src/diffusers/models/attention_processor.py

Lines changed: 4 additions & 0 deletions
@@ -4839,6 +4839,8 @@ def __call__(
                 )
             else:
                 for index, (mask, scale, ip_state) in enumerate(zip(ip_adapter_masks, self.scale, ip_hidden_states)):
+                    if mask is None:
+                        continue
                     if not isinstance(mask, torch.Tensor) or mask.ndim != 4:
                         raise ValueError(
                             "Each element of the ip_adapter_masks array should be a tensor with shape "
@@ -5056,6 +5058,8 @@ def __call__(
                 )
             else:
                 for index, (mask, scale, ip_state) in enumerate(zip(ip_adapter_masks, self.scale, ip_hidden_states)):
+                    if mask is None:
+                        continue
                     if not isinstance(mask, torch.Tensor) or mask.ndim != 4:
                         raise ValueError(
                             "Each element of the ip_adapter_masks array should be a tensor with shape "
