Hi, since yesterday I've been getting this error in my ComfyUI. How can I fix it?
```
C:\Users\max\AI GENERATE\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Requested to load Flux
```
PyTorch version: 2.3.1+cu121
CUDA version (PyTorch built with): 12.1
GPU compute capability: 8.6
GPU: RTX 3060 12 GB
Model used: Flux NF4 v2
When I upgrade PyTorch to 2.4, ComfyUI stops working entirely.