Hi, I get the error below when running the t2v workflow. Thanks.
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 1350.32 MB offloaded, 5219.06 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.bfloat16, manual cast: torch.bfloat16
model_type FLUX
unet unexpected: ['audio_embeddings_connector.learnable_registers', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.k_norm.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.q_norm.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_k.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_k.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_out.0.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_out.0.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_q.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_q.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_v.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.attn1.to_v.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.ff.net.0.proj.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.ff.net.0.proj.weight', 'audio_embeddings_connector.transformer_1d_blocks.0.ff.net.2.bias', 'audio_embeddings_connector.transformer_1d_blocks.0.ff.net.2.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.k_norm.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.q_norm.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_k.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_k.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_out.0.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_out.0.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_q.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_q.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_v.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.attn1.to_v.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.ff.net.0.proj.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.ff.net.0.proj.weight', 'audio_embeddings_connector.transformer_1d_blocks.1.ff.net.2.bias', 'audio_embeddings_connector.transformer_1d_blocks.1.ff.net.2.weight', 'video_embeddings_connector.learnable_registers', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.k_norm.weight', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.q_norm.weight', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_k.bias', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_k.weight', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_out.0.bias', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_out.0.weight', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_q.bias', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_q.weight', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_v.bias', 'video_embeddings_connector.transformer_1d_blocks.0.attn1.to_v.weight', 'video_embeddings_connector.transformer_1d_blocks.0.ff.net.0.proj.bias', 'video_embeddings_connector.transformer_1d_blocks.0.ff.net.0.proj.weight', 'video_embeddings_connector.transformer_1d_blocks.0.ff.net.2.bias', 'video_embeddings_connector.transformer_1d_blocks.0.ff.net.2.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.k_norm.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.q_norm.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_k.bias', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_k.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_out.0.bias', 
'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_out.0.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_q.bias', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_q.weight', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_v.bias', 'video_embeddings_connector.transformer_1d_blocks.1.attn1.to_v.weight', 'video_embeddings_connector.transformer_1d_blocks.1.ff.net.0.proj.bias', 'video_embeddings_connector.transformer_1d_blocks.1.ff.net.0.proj.weight', 'video_embeddings_connector.transformer_1d_blocks.1.ff.net.2.bias', 'video_embeddings_connector.transformer_1d_blocks.1.ff.net.2.weight']
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded.
Requested to load LTXAV
!!! Exception during processing !!! CUDA error: invalid argument
Search for `cudaErrorInvalidValue' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy_api\latest\_io.py", line 1570, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 950, in execute
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options, force_full_load=force_full_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory, force_full_load=force_full_load)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\model_management.py", line 674, in load_models_gpu
free_memory(total_memory_required[device] * 1.1 + extra_mem, device)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\model_management.py", line 606, in free_memory
if current_loaded_models[i].model_unload(memory_to_free):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\model_management.py", line 532, in model_unload
self.model.detach(unpatch_weights)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\model_patcher.py", line 989, in detach
self.unpatch_model(self.offload_device, unpatch_weights=unpatch_all)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\ComfyUI\comfy\model_patcher.py", line 857, in unpatch_model
self.model.to(device_to)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1371, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 957, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1357, in convert
return t.to(
^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\bitsandbytes\nn\modules.py", line 352, in to
super().to(device=device, dtype=dtype, non_blocking=non_blocking),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DuongPC\AI\temp\ComfyUI_Easy\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\bitsandbytes\nn\modules.py", line 402, in __torch_function__
return super().__torch_function__(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: CUDA error: invalid argument
Search for `cudaErrorInvalidValue' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
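For context, the crash happens while ComfyUI offloads the model back to the CPU: `unpatch_model` calls `self.model.to(device_to)`, and the error surfaces inside bitsandbytes' `to()` override on a quantized parameter. Below is a minimal sketch of that same device move in isolation, assuming the checkpoint contains bitsandbytes 4-bit layers; the `Linear4bit` layer and its 64x64 shape are illustrative, not taken from the actual workflow.

```python
# Minimal sketch of the offload step that fails above (assumption: the
# quantized checkpoint uses bitsandbytes 4-bit layers; shapes are made up).
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear4bit(64, 64, compute_dtype=torch.bfloat16)
layer = layer.to("cuda")  # weights are quantized on the first move to GPU
layer = layer.to("cpu")   # the offload step ComfyUI's unpatch_model performs;
                          # on an affected setup this is where the
                          # "CUDA error: invalid argument" is raised
```

If this snippet fails the same way outside ComfyUI, the problem is likely in the bitsandbytes/driver/PyTorch combination rather than in the workflow itself.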