Description
Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Your question
Using the LTX 2 T2V and distilled templates, I get the following error during text encoding:
lowvram: loaded module regularly gemma3_12b.transformer.vision_model.encoder.layers.0.layer_norm1 LayerNorm((1152,), eps=1e-05, elementwise_affine=True)
lowvram: loaded module regularly gemma3_12b.transformer.multi_modal_projector.mm_soft_emb_norm RMSNorm()
lowvram: loaded module regularly gemma3_12b.transformer.model.layers.9.self_attn.q_norm RMSNorm()
lowvram: loaded module regularly gemma3_12b.transformer.model.layers.9.self_attn.k_norm RMSNorm()
lowvram: loaded module regularly gemma3_12b.transformer.model.layers.8.self_attn.q_norm RMSNorm()
lowvram: loaded module regularly gemma3_12b.transformer.model.layers.8.self_attn.k_norm RMSNorm()
loaded partially; 21698.80 MB usable, 21548.80 MB loaded, 4415.52 MB offloaded, 150.00 MB buffer reserved, lowvram patches: 0
!!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Traceback (most recent call last):
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 77, in encode
return (clip.encode_from_tokens_scheduled(tokens), )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 207, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 271, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\lt.py", line 103, in encode_token_weights
out_vid = self.video_embeddings_connector(out)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\embeddings_connector.py", line 282, in forward
hidden_states = torch.cat((hidden_states, learnable_registers[hidden_states.shape[1]:].unsqueeze(0).repeat(hidden_states.shape[0], 1, 1)), dim=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Prompt executed in 253.72 seconds
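From the traceback, the crash is in the torch.cat call inside comfy\ldm\lightricks\embeddings_connector.py: with the text encoder only partially loaded under lowvram, the learnable_registers tensor apparently stays on the CPU while hidden_states is already on cuda:0. Below is a minimal sketch of the mismatch and a possible workaround; the shapes, names, and the CPU placement are illustrative assumptions, not values taken from the actual model.

import torch

# Device from the log is cuda:0; fall back to cpu so the sketch still runs elsewhere.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Encoder output already moved to the GPU (shape is illustrative).
hidden_states = torch.randn(1, 64, 1152, device=device)

# Assumption: under partial offload this parameter is left on the CPU,
# which is what produces the cuda:0 / cpu mismatch in the traceback.
learnable_registers = torch.nn.Parameter(torch.randn(256, 1152))

# The failing pattern from embeddings_connector.py would be:
# torch.cat((hidden_states,
#            learnable_registers[hidden_states.shape[1]:].unsqueeze(0)
#                .repeat(hidden_states.shape[0], 1, 1)), dim=1)

# Possible workaround: move the registers to the device (and dtype) of
# hidden_states before concatenating.
registers = learnable_registers.to(device=hidden_states.device, dtype=hidden_states.dtype)
hidden_states = torch.cat(
    (hidden_states,
     registers[hidden_states.shape[1]:].unsqueeze(0).repeat(hidden_states.shape[0], 1, 1)),
    dim=1,
)
print(hidden_states.shape)  # torch.Size([1, 256, 1152]) with these illustrative sizes

If the registers really are being left behind by the lowvram partial load, forcing them onto the same device as the hidden states (as sketched above) avoids the RuntimeError; whether that is the right fix inside ComfyUI is for the maintainers to confirm.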
Logs
Other
No response