Hi there,
I’m trying to integrate the new "SAM3 Segmentation" node into one of my workflows. Unfortunately, that workflow already uses about 95% of my VRAM, and adding the SAM3 node pushes usage past 99%, at which point the workflow fails.
As a workaround, I tried switching the node's device selection from "Auto" to "CPU", but that caused the crash shown below.
An option like “unload model after run” would be really helpful, and fixing CPU device selection would also make a big difference.
Thanks a lot!
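For the VRAM side of the request, here is a minimal sketch of what an "unload model after run" step could look like. The function name and the way the node holds its model are assumptions for illustration; this is not the node's actual API, just the standard PyTorch pattern of moving weights off the GPU and releasing cached allocations:

```python
# Hypothetical "unload after run" sketch -- names are placeholders,
# not attributes of the real SAM3 node.
import gc
import torch

def unload_after_run(model):
    """Move a model's weights off the GPU and release cached VRAM."""
    model.to("cpu")   # move weights out of VRAM
    gc.collect()      # drop lingering Python references
    if torch.cuda.is_available():
        # Return cached allocator blocks to the driver so other
        # nodes in the workflow can use the freed VRAM.
        torch.cuda.empty_cache()
```

Calling something like this at the end of the node's `segment` method (or exposing it as a toggle) would let the rest of the workflow reclaim the memory between runs.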
!!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Traceback (most recent call last):
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\AILab_SAM3Segment.py", line 203, in segment
img_pil, mask_tensor, mask_rgb = self._run_single(
~~~~~~~~~~~~~~~~^
processor,
^^^^^^^^^^
...<7 lines>...
background_color,
^^^^^^^^^^^^^^^^^
)
^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\AILab_SAM3Segment.py", line 173, in _run_single
state = processor.set_text_prompt(text, state)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\sam3_image_processor.py", line 125, in set_text_prompt
return self._forward_grounding(state)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\sam3_image_processor.py", line 184, in _forward_grounding
outputs = self.model.forward_grounding(
backbone_out=state["backbone_out"],
...<2 lines>...
find_target=None,
)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\sam3_image.py", line 468, in forward_grounding
out, hs = self._run_decoder(
~~~~~~~~~~~~~~~~~^
memory=out["encoder_hidden_states"],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
encoder_out=encoder_out,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\sam3_image.py", line 270, in _run_decoder
self.transformer.decoder(
~~~~~~~~~~~~~~~~~~~~~~~~^
tgt=tgt,
^^^^^^^^
...<10 lines>...
apply_dac=apply_dac,
^^^^^^^^^^^^^^^^^^^^
)
^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI_windows_portable_250824\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\decoder.py", line 520, in forward
memory_mask = self._get_rpb_matrix(
reference_boxes,
(spatial_shapes[0, 0], spatial_shapes[0, 1]),
)
File "E:\ComfyUI\ComfyUI_windows_portable_250824\ComfyUI\custom_nodes\comfyui-rmbg\sam3\model\decoder.py", line 357, in _get_rpb_matrix
deltas_y = coords_h.view(1, -1, 1) - boxes_xyxy.reshape(-1, 1, 4)[:, :, 1:4:2]
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Prompt executed in 22.06 seconds
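For the CPU crash: the traceback ends in `_get_rpb_matrix`, where `coords_h` and `boxes_xyxy` are evidently on different devices once "CPU" is selected, so the subtraction raises the `RuntimeError`. A minimal sketch of the kind of fix this suggests, assuming the real fix belongs in `decoder.py` (the shapes below are illustrative, not taken from the actual model):

```python
# Hedged sketch: align both tensors on one device before the
# broadcasted subtraction that currently crashes. Shapes are
# illustrative only.
import torch

def deltas_same_device(coords_h: torch.Tensor,
                       boxes_xyxy: torch.Tensor) -> torch.Tensor:
    # Move the coordinate grid onto whatever device the boxes live on,
    # so mixed CUDA/CPU execution no longer raises a device mismatch.
    coords_h = coords_h.to(boxes_xyxy.device)
    # Same expression as line 357 of decoder.py in the traceback:
    # (1, H, 1) - (N, 1, 2) broadcasts to (N, H, 2).
    return coords_h.view(1, -1, 1) - boxes_xyxy.reshape(-1, 1, 4)[:, :, 1:4:2]
```

The broader fix is presumably to make the node honor the "CPU" selection for every buffer it builds (or move everything to the chosen device once, up front), but the one-line `.to(...)` shown here is the shape of the change the error message points at.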