Commit 6c5637c

Fixes enable_xformers_memory_efficient_attention()

1 parent 751e250

File tree

1 file changed (+4 −1 lines)

src/diffusers/models/attention.py

Lines changed: 4 additions & 1 deletion
@@ -241,7 +241,10 @@ def set_use_memory_efficient_attention_xformers(
                 op_fw, op_bw = attention_op
                 dtype, *_ = op_fw.SUPPORTED_DTYPES
                 q = torch.randn((1, 2, 40), device="cuda", dtype=dtype)
-                _ = xops.memory_efficient_attention(q, q, q)
+                try:
+                    _ = xops.memory_efficient_attention(q, q, q)
+                except:
+                    _ = xops.ops.memory_efficient_attention(q, q, q)
             except Exception as e:
                 raise e
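For context, here is a minimal sketch of the probe this diff patches: run xformers' memory-efficient attention once on a tiny dummy query so that an incompatible install fails loudly when the feature is enabled, not mid-inference. The added try/except appears to guard against the `xops` alias being bound to the top-level `xformers` module (where the call lives under `xops.ops`) rather than to `xformers.ops`. The standalone `probe_xformers` helper below is hypothetical, and it assumes xformers is installed and CUDA is available; `attention_op` mirrors the optional (op_fw, op_bw) pair the diffusers method receives.

import torch
import xformers.ops as xops


def probe_xformers(attention_op=None):
    # Pick a dtype the forward op supports, mirroring the patched code;
    # fall back to fp16 when no explicit op pair is given (an assumption).
    if attention_op is not None:
        op_fw, _op_bw = attention_op
        dtype, *_ = op_fw.SUPPORTED_DTYPES
    else:
        dtype = torch.float16
    # Tiny (batch, seq_len, dim) query; the shape only needs to be valid.
    q = torch.randn((1, 2, 40), device="cuda", dtype=dtype)
    # Raises if the installed xformers build cannot run the kernel.
    _ = xops.memory_efficient_attention(q, q, q)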
