Replies: 2 comments
- same situation. help!
- Same issue on Linux
- I launched my first generation on Stable Diffusion. The prompt was: "a cat with a hat, masterpiece". But just as the generation is about to finish, I get this error:
  NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
      query     : shape=(1, 324, 1, 512) (torch.float32)
      key       : shape=(1, 324, 1, 512) (torch.float32)
      value     : shape=(1, 324, 1, 512) (torch.float32)
      attn_bias : <class 'NoneType'>
      p         : 0.0
  `cutlassF` is not supported because:
      device=cpu (supported: {'cuda'})
  `flshattF` is not supported because:
      device=cpu (supported: {'cuda'})
      dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
      max(query.shape[-1] != value.shape[-1]) > 128
  `tritonflashattF` is not supported because:
      device=cpu (supported: {'cuda'})
      dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
      max(query.shape[-1] != value.shape[-1]) > 128
      Operator wasn't built - see `python -m xformers.info` for more info
      triton is not available
  `smallkF` is not supported because:
      max(query.shape[-1] != value.shape[-1]) > 32
      unsupported embed per head: 512
  Does anyone have an idea how I can fix this error?
  PS: I have the latest version of Stable Diffusion, on Windows 10.
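  For context on what the traceback is saying: every xformers attention backend (cutlassF, flshattF, tritonflashattF, smallkF) rejected the call because the tensors are on the CPU in torch.float32, while these kernels require a CUDA device and, for most backends, half precision. Below is a minimal sketch of how to confirm the cause and run on the GPU instead, assuming a diffusers-style pipeline; the model id and pipeline code are illustrative assumptions, not taken from the original post.

  ```python
  import torch
  from diffusers import StableDiffusionPipeline

  # If this prints False, the error above is expected: the xformers
  # kernels only support device='cuda', and the run is on the CPU.
  print(torch.cuda.is_available())

  # Hypothetical setup: model id and pipeline are assumptions for
  # illustration, not details from the original post.
  pipe = StableDiffusionPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5",
      torch_dtype=torch.float16,  # kernels support fp16/bf16, not float32
  )
  pipe = pipe.to("cuda")  # move off the CPU before enabling xformers
  pipe.enable_xformers_memory_efficient_attention()

  image = pipe("a cat with a hat, masterpiece").images[0]
  image.save("cat_with_hat.png")
  ```

  If no CUDA GPU is available, the alternative is simply not to enable xformers and let the standard attention path run on the CPU; for the AUTOMATIC1111 webui (if that is the frontend in use), that means launching without the `--xformers` command-line flag.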