Description
Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Your question
When running VAE encode on an AMD RX 7900XT, ComfyUI crashes: the UI shows a "Reconnecting..." status and the console window exits. To replicate:
- Set up a simple workflow that uses VAE encode (i.e. load image -> vae encode -> vae decode -> preview image)
- Click Run
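As a way to narrow this down: VAE encode is dominated by convolutions, so if a bare convolution on the ZLUDA "cuda" device also hard-crashes, the bug is below ComfyUI (torch/ROCm/ZLUDA) rather than in the workflow. A minimal, hypothetical isolation sketch (the shapes approximate the first conv of an SD VAE encoder; run it with the same Python environment ComfyUI-Zluda uses):

```python
# Hedged isolation sketch: does a bare VAE-encoder-shaped conv crash too?
try:
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Roughly the first conv of an SD VAE encoder:
    # 3 input channels -> 128 feature maps, 3x3 kernel, 512x512 image.
    x = torch.randn(1, 3, 512, 512, device=device)
    conv = torch.nn.Conv2d(3, 128, kernel_size=3, padding=1).to(device)
    with torch.no_grad():
        y = conv(x)
    if device == "cuda":
        torch.cuda.synchronize()  # force the kernel to actually execute
    result = "ok"
    print(f"conv on {device} finished, output shape {tuple(y.shape)}")
except ImportError:
    result = "no-torch"
    print("torch is not installed in this environment")
```

If this script dies the same way (console exits with no traceback), the crash is reproducible without ComfyUI and belongs in a torch/ZLUDA report.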
Tried:
- Going through GPU steps https://docs.comfy.org/troubleshooting/overview#amd-gpu-issues
- Reinstalling Torch
- Updating Graphics Driver
- Switching from Python 3.10 to 3.12
Here are the logs, which are not very useful since it is a hard crash.
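Since the console window exits on the crash, any final messages may only survive in the log file (path from the startup banner below: C:\ComfyUI-Zluda\user\comfyui.log). A small stdlib-only sketch to grab its tail after a crash; the demo file name is just for illustration:

```python
from pathlib import Path

def tail(path, n=40):
    """Return the last n lines of a text file."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines()
    return lines[-n:]

# Demo with a temporary file standing in for comfyui.log:
demo = Path("comfyui_demo.log")
demo.write_text("\n".join(f"line {i}" for i in range(100)), encoding="utf-8")
last = tail(demo, 5)
print(last)
demo.unlink()
```

Point `tail()` at the real comfyui.log right after a crash; the last lines sometimes include a HIP/ROCm error that never reaches the console.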
Logs
[START] Security scan
[DONE] Security scan
ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-01-13 16:59:05.302
** Platform: Windows
** Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
** Python executable: C:\Users\dazdn\AppData\Local\Python\pythoncore-3.12-64\python.exe
** ComfyUI Path: C:\ComfyUI-Zluda
** ComfyUI Base Folder Path: C:\ComfyUI-Zluda
** User directory: C:\ComfyUI-Zluda\user
** ComfyUI-Manager config path: C:\ComfyUI-Zluda\user__manager\config.ini
** Log path: C:\ComfyUI-Zluda\user\comfyui.log
Prestartup times for custom nodes:
2.1 seconds: C:\ComfyUI-Zluda\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
:: Checking package versions...
Found pydantic: 2.12.5, pydantic-settings: 2.12.0
:: Pydantic packages are compatible, skipping reinstall
Installed version of comfyui-frontend-package: 1.36.14
Installed version of comfyui-workflow-templates: 0.8.4
Installed version of av: 16.1.0
Installed version of comfyui-embedded-docs: 0.4.0
Installed version of comfy-kitchen: 0.2.6
:: Package version check complete.
:: ------------------------ ZLUDA ----------------------- ::
:: Triton not installed
:: ONNX Runtime not installed — skipping patch.
:: CUDA device detected: AMD Radeon(TM) Graphics
Total VRAM 24764 MB, total RAM 64673 MB
pytorch version: 2.9.0+rocmsdk20251116
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1036
ROCm version: (7, 1)
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon(TM) Graphics : native
Using async weight offloading with 2 streams
Enabled pinned memory 29102.0
Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': "ImportError: No module named 'triton'", 'capabilities': []}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.9.1
ComfyUI frontend version: 1.36.14
[Prompt Server] web root: C:\Users\dazdn\AppData\Local\Python\pythoncore-3.12-64\Lib\site-packages\comfyui_frontend_package\static
Loading: ComfyUI-Manager (V3.39.2)
[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR Comfy-Org#11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.
ComfyUI Revision: 6011 [ee761e4] | Released on '2026-01-13'
Import times for custom nodes:
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\websocket_image_save.py
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_vae_loader.py
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_cudnn.toggle.py
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\cfz_patcher.py
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\CFZ-caching
0.0 seconds: C:\ComfyUI-Zluda\custom_nodes\ovum-cudnn-wrapper
0.8 seconds: C:\ComfyUI-Zluda\custom_nodes\ComfyUI-Manager
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.012s (created=0, skipped_existing=20, total_seen=20)
Starting server
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[CUDNNWrapper] reading config from C:\ComfyUI-Zluda\classes_to_cudnn_wrap.txt
CUDNNWrapper: regex /image.*video.*encode/i matched no classes
CUDNNWrapper: regex /wanvideo.decode/i matched no classes
CUDNNWrapper: 'WanVideoClipVisionEncode' not found to wrap
CUDNNWrapper: 'WanVideoImageToVideoEncode' not found to wrap
CUDNNWrapper: 'WanVideoEncode' not found to wrap
CUDNNWrapper: 'WanVideoDecode' not found to wrap
CUDNNWrapper: Wrapped 2 node(s) by exact string match: VAEDecode, VAEDecodeTiled
CUDNNWrapper: Wrapped 9 node(s) via regex /vae.(encode|decode)/i: VAEEncode, VAEEncodeForInpaint, VAEEncodeTiled, StableCascade_StageC_VAEEncode, VAEEncodeAudio, VAEDecodeAudio, LTXVAudioVAEEncode, LTXVAudioVAEDecode, VAEDecodeHunyuan3D
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[CUDNNWrapper] reading config from C:\ComfyUI-Zluda\classes_to_cudnn_wrap.txt
CUDNNWrapper: regex /image.*video.*encode/i matched no classes
CUDNNWrapper: regex /wanvideo.*decode/i matched no classes
CUDNNWrapper: 'WanVideoClipVisionEncode' not found to wrap
CUDNNWrapper: 'WanVideoImageToVideoEncode' not found to wrap
CUDNNWrapper: 'WanVideoEncode' not found to wrap
CUDNNWrapper: 'WanVideoDecode' not found to wrap
[CFZ Load] No cache files found
[CFZ Load] No cache files found
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
[DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
FETCH ComfyRegistry Data: 5/119
FETCH ComfyRegistry Data: 10/119
FETCH ComfyRegistry Data: 15/119
FETCH ComfyRegistry Data: 25/119
[OVUM_CUDDN_TOGGLE] torch.backends.cudnn.enabled set to True (was False)
got prompt
Using split attention in VAE
Using split attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[OVUM_CUDDN_TOGGLE] torch.backends.cudnn.enabled set to False (was True)
Requested to load AutoencodingEngine
loaded completely; 18354.64 MB usable, 159.87 MB loaded, full load: True