Nvidia 50 Series (Blackwell) support thread: How to get ComfyUI running on your new 50 series GPU. #6643
86 comments · 298 replies
-
There are also Windows prebuilt wheels of PyTorch on CUDA 12.8 that NVIDIA gave w-e-w to publish: https://huggingface.co/w-e-w/torch-2.6.0-cu128.nv
-
The Windows version said:
-
The nf4 node does not work.
-
ImportError: tokenizers>=0.21,<0.22 is required for a normal functioning of this module, but found tokenizers==0.20.3. How can I fix this?
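That error usually means the environment's tokenizers package is older than what transformers expects. A minimal fix, as a sketch, assuming you upgrade inside the same Python environment ComfyUI actually runs with:

```shell
# Upgrade tokenizers into the version range the error message asks for.
# On the portable build, run this as: python_embeded\python.exe -m pip install ...
pip install --upgrade "tokenizers>=0.21,<0.22"
```

If the error persists, the upgrade likely went into a different Python than the one ComfyUI launches.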
-
Sorry, I have been banging my head against this for 6 hours, trying to run a Cosmos workflow on a 5090. Without SageAttention or TorchCompile, a 14-minute job on a 4090 is taking 25 minutes. To turn them on I installed Triton and SageAttention, but afterwards I get a KSampler error whether I bypass the patch-and-compile nodes or not. I have run this through GPT and followed every instruction:
Step 1: Verify the CUDA toolkit installation; if it does not match 12.8, you may need to reinstall PyTorch with the correct CUDA version. ✅
Step 2: Check the NVIDIA toolkit and drivers; update your NVIDIA driver (download the latest). ✅
Step 3: Fix the Microsoft Visual Studio Build Tools. ✅
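For step 1, the quickest check is to ask the installed torch what it was built against. A sketch, assuming the commands are run from the same environment ComfyUI launches with:

```shell
# Print the PyTorch version and the CUDA version it was compiled against.
# Blackwell (sm_120) kernels need a cu128 (or newer) build.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# Confirm the GPU is actually visible to that build:
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
```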
-
Does anyone know if other PyTorch CUDA versions, like 12.6, will work with Blackwell?
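Short answer: no. cu126 wheels were not compiled with sm_120 kernels, which the 50 series needs, so you want a cu128 (or newer) build. As a sketch, you can compare what the card reports against what the wheel was compiled for:

```shell
# Blackwell cards report compute capability (12, 0)
python -c "import torch; print(torch.cuda.get_device_capability(0))"
# The wheel's supported architectures must include sm_120
python -c "import torch; print(torch.cuda.get_arch_list())"
```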
-
When I use [ComfyUI package with a cuda 12.8 torch build], many custom_nodes show "IMPORT FAILED", including Manager, InstantID, ReActor ...
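Import failures after moving to a cu128 build are often compiled dependencies (onnxruntime, insightface, and the like) that were installed against the old environment. A hedged sketch of reinstalling one node's requirements into the portable build's embedded Python; the paths assume the default portable layout and `<node_name>` is a placeholder for whichever node failed:

```shell
# Run from the portable install's root folder.
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\<node_name>\requirements.txt
# The startup log prints the actual ImportError for each failed node, which tells
# you which package to reinstall.
```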
-
Is it possible to build a PyTorch that works for the 5090? If so, how?
-
Having given up on Portable for now (too many errors, lol), I am using WSL Ubuntu. Everything else is set up and working in ComfyUI. I have the latest PyTorch nightly from pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128, but when I use SageAttention I get this:
-
I'm using ComfyUI via Pinokio. Is there any way to update my Comfy to work with my new 5080? I deleted the old files and changed torch.js per the patch above, but it won't start.
-
Will using Docker allow for a working torchvision on Windows?
-
Can someone write a little guide for getting Docker running torchvision etc. on Windows with the portable Blackwell ComfyUI release? Please write it for a normal user who has no programming knowledge. There are basic instructions in the OP, but what even is Docker? Where is the command "docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3" supposed to be run? Is Docker something to be installed into system Python, or a standalone folder? How would this be installed on a fresh portable ComfyUI install? With more Blackwell cards trickling out, there will most likely be more users needing help setting this up. Thank you.
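Not a full guide, but to answer the "where do I run this" question: Docker is a separate program (Docker Desktop on Windows), not something installed into ComfyUI's Python, and the command is typed into PowerShell after Docker Desktop is running. A rough sketch of that part of the flow:

```shell
# 1. Install Docker Desktop for Windows (WSL2 backend) and start it.
# 2. Open PowerShell (not ComfyUI's folder or its python) and run the command from the OP:
docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3
# This drops you into a Linux shell inside the container, where ComfyUI itself is then
# set up; nothing is installed into your portable ComfyUI folder or system Python.
```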
-
To anyone having trouble with the portable version for Blackwell GPUs: do not update it! I noticed that updating uninstalled the cu128 build and installed another version, for example. torchvision works with a fresh install without updating!
-
Excellent
-
Are there still no Windows PyTorch options available for CUDA 12.8? Any help would be greatly appreciated.
-
Thank you, friend. My previous problem has been solved; my GGUF and Nunchaku models both run now. But there is an old problem:
[2025-06-10 06:31:08.647] [info] Initializing QuantizedFluxModel on device 0
[2025-06-10 06:31:08.681] [info] Loading weights from E:\models\diffusion_models\svdq-fp4-flux.1-dev\transformer_blocks.safetensors
[2025-06-10 06:31:08.685] [warning] Failed to load safetensors using method MIO: CUDA error: operation not supported (at C:\Users\muyang\Desktop\nunchaku-dev\src\Serialization.cpp:130)
[2025-06-10 06:31:18.948] [info] Done.
Injecting quantized module
[2025-06-10 06:31:19.296] [info] Set attention implementation to nunchaku-fp16
Loading configuration from E:\models\diffusion_models\svdq-fp4-flux.1-dev\comfy_config.json
model_type FLUX
Requested to load CLIPVisionModelProjection
loaded completely 6937.49091796875 787.7150573730469 True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
It always prints:
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
What exactly does "clip missing: ['text_projection.weight']" mean? Generation still completes in the end, so why does it keep printing this? Has anyone seen a similar problem?
At 2025-05-15 22:08:22, "H.W.Prinz" ***@***.***> wrote:
@a1chera
Did you get it fixed?
I had the same issue, and from my point of view it is not solvable at the moment (I already had some discussion with Dr.Lt.Data on it).
The nightly portable packages come with Python 3.13.2, and that is the only problem: you can install a bunch of nodes and stuff without paying attention to it, but some, even essential ones like Manager, don't like 3.13 at all.
But if you go for an actual default released package, it will have Python 3.12.10 and pytorch 2.7.0+cu12.8, and everything runs flawlessly.
Have fun
-
Hello, can anyone help me? I broke my ComfyUI package and cannot install custom nodes. If I try to restart, a note like this appears:
Restarting... [Legacy Mode] Command: ['"D:\ComfyUI_cu128_50XX\ComfyUI_cu128_50XX\python_embeded\python.exe"', '"ComfyUI\main.py"']
D:\ComfyUI_cu128_50XX\ComfyUI_cu128_50XX>pause
How do I solve this error?
-
I'm having trouble with my current ComfyUI setup because it's outdated. It looks like I'll have to download a fresh copy, but I really want to avoid the nightmare of fixing dependencies, PyTorch versions, xformers, Triton and all that again. Does anyone have the latest version of ComfyUI that works smoothly on an RTX 5090 or the new Blackwell-series GPUs? I'd really appreciate it if you could share it or point me to a reliable source. Or should I somehow update my Comfy without destroying everything?
-
This is not a plug‑and‑play solution, but I successfully enabled GPU acceleration on an NVIDIA RTX 5060 Ti (Blackwell) under Ubuntu/WSL2 using PyTorch nightly builds with CUDA 12.8. It requires manual setup and additional dependencies, but it proves that Blackwell GPUs already work with ComfyUI before official PyTorch support is released. Full details are documented here: https://medium.com/@v445683044/rtx-5000-comfyui-why-gpu-doesnt-work-and-how-to-fix-it-september-2025-0f6468bde81a
-
SageAttention working on RTX 5090 + PyTorch 2.11. For anyone with a 50-series card looking to add SageAttention for ~35% faster diffusion sampling, prebuilt wheel + build instructions: https://github.com/mobcat40/sageattention-blackwell
Existing wheels don't work with the PyTorch 2.11 nightly (DLL errors); this wheel is compiled against PyTorch 2.11 + CUDA 12.8 for sm_120.
Heads up for Qwen/Wan users: the --use-sage-attention flag causes black output. Use the KJNodes "Patch Sage Attention" node with the sageattn_qk_int8_pv_fp16_cuda backend instead.
-
I have another solution that has worked for me without any problems for over a year now: just use uv to manage my ComfyUI installation instead of the default pip+poetry. Here's my setup if anyone wants to try it: https://github.com/ssuukk/comfy-uv. Currently it runs on CUDA 13.0, but you can easily fall back to 12.8.
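For anyone curious what a uv-based setup looks like before cloning anything, a minimal sketch under my own assumptions (the linked repo is the author's actual setup and may differ):

```shell
# Create a pinned-Python virtual env and install a cu128 torch stack with uv
uv venv --python 3.12
uv pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
# Then install ComfyUI's own dependencies into the same env
uv pip install -r ComfyUI/requirements.txt
```

The appeal is that uv resolves and installs much faster than pip and keeps the environment reproducible from a lock file.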
-
Gemini 3 Pro is great at debugging and fixing ComfyUI installs.
…On Sat, 3 Jan 2026 at 08:55, mattbirk6 ***@***.***> wrote:
Okay, I think my install is all kinds of screwed up. I can't get ComfyUI Manager to update past 3.39. :(
-
Hiring at 回响科技 (tusiai): model deployment, inference-service framework optimization, compute resource scheduling, and hardware co-optimization, ensuring models run efficiently and stably in production. Bachelor's degree or above in a computer-related major, 3+ years of relevant experience.
-
I added one for folks on Windows / PyTorch 2.11 / Python 3.12 / CUDA 13.0, for the one guy who is pulling his hair out: https://github.com/tylerMH/flash-attention-windows-5090/releases/tag/v2.8.3-cu130-cp312-torch2.11-win-rtx5090
-
Forgive me if I am missing something, but may I ask what I might be missing out on with my current setup?
-
Windows Native + cu130 + One-Click Setup (No WSL2, No Docker)
I spent 3 days getting ComfyUI fully working on Windows native with an RTX 5090, and packaged the result into a reproducible one-click setup: https://github.com/hiroki-abe-58/ComfyUI-Win-Blackwell
What it does
Included tools
The 5 rules I found (break any one and the env dies)
MIT licensed. PRs welcome, especially for additional custom node verifications. Detailed writeup: English (Medium) / Japanese (Qiita)
-
I will try keeping this post up to date as much as possible with the latest developments.
To get your NVIDIA 50-series GPU working with ComfyUI you need a PyTorch build that has been compiled against CUDA 12.8.
In the next few months there will likely be a lot of performance improvements landing in pytorch for these GPUs so I recommend coming back to this page and updating frequently.
Windows
The recommended download is the latest standalone portable package or desktop installer that you can download from the README
Manual Install
If you install stable pytorch make sure it is cu128.
pytorch nightly cu128 is available for Windows and Linux:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
You can also use the NVIDIA PyTorch Docker container as an alternative, which might give more performance.
Link: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
Here's how to use it:
docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3
Inside the docker container:
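A sketch of typical steps inside the container, assuming a fresh clone; these are my assumptions about the usual setup, not necessarily the exact commands the OP intended here:

```shell
# Inside the container shell (assumed steps):
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt
# --listen binds to 0.0.0.0 so the UI is reachable from the host via the forwarded port
python main.py --listen
```

Then open http://localhost:8188 in a browser on the host.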