Issue Description
When attempting to generate a video in SD.Next using Disty0/Wan2.2-T2V-A14B-SDNQ-uint4-svd-r32 (downloaded via SD.Next's recommended model list), the outcome depends on the Model Offloading setting:
- When Model Offloading = None (offload=none in Settings → Model Offloading), the run fails almost immediately with the referenced error.
- When Model Offloading = Balanced, the generation completes successfully (though a separate issue occurs, which I’ll report in another bug).
Full log is attached: sdnext-offload-none-2.txt
Additional context: WanAI stage is set to Combined in Model Options. With the default (Low noise only), output quality was extremely poor; Combined improves quality, but it’s still not great and may be related to the separate bug.
I can reproduce this issue at will, so please let me know if there's any additional info I can provide that would help debug it.
Version Platform Description
Fedora 43 Kinoite Linux (kernel 6.17.12-300.fc43.x86_64)
AMD Ryzen AI Max+ 395 (Strix Halo)
AMD Radeon 8060S
128GB RAM, shared with the iGPU
Python 3.13
Browser: Chrome 143.0.7499.169 and Firefox 146.0
SDNext version 56a8aea
FYI, I had to modify rocm.py to get it to run on my system, as described here.
$ pip list | grep rocm
pytorch-triton-rocm 3.5.1
rocm 7.11.0a20251205
rocm-sdk-core 7.11.0a20251205
rocm-sdk-devel 7.11.0a20251205
rocm-sdk-libraries-gfx1151 7.11.0a20251205
torch 2.9.1+rocm6.4
torchvision 0.24.1+rocm6.4
Relevant log output
19:37:29-357166 ERROR Processing: args={'prompt': 'colorful tropical fish swim around a stunning coral reef, as a scuba diver swims
through the frame. a shark is visible in the distance.', 'negative_prompt': '', 'generator':
[<torch._C.Generator object at 0x7f7d4c24fed0>], 'callback_on_step_end': <function diffusers_callback at
0x7f7d757a8a40>, 'callback_on_step_end_tensor_inputs': ['latents', 'prompt_embeds', 'negative_prompt_embeds'],
'num_inference_steps': 25, 'num_frames': 17, 'output_type': 'pil', 'width': 832, 'height': 480} Cannot generate
a cpu tensor from a generator of type cuda.
19:37:29-358723 ERROR Processing: ValueError
╭────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────╮
│/var/home/rmeador/sdnext/modules/processing_diffusers.py:180 in process_base │
│ │
│ 179 │ │ │ taskid = shared.state.begin('Inference') │
│❱ 180 │ │ │ output = shared.sd_model(**base_args) │
│ 181 │ │ │ shared.state.end(taskid) │
│ │
│/var/home/rmeador/sdnext/venv/lib64/python3.13/site-packages/torch/utils/_contextlib.py:120 in decorate_context │
│ │
│ 119 │ │ with ctx_factory(): │
│❱ 120 │ │ │ return func(*args, **kwargs) │
│ 121 │
│ │
│/var/home/rmeador/sdnext/venv/lib64/python3.13/site-packages/diffusers/pipelines/wan/pipeline_wan.py:544 in __call__ │
│ │
│ 543 │ │ ) │
│❱ 544 │ │ latents = self.prepare_latents( │
│ 545 │ │ │ batch_size * num_videos_per_prompt, │
│ │
│/var/home/rmeador/sdnext/venv/lib64/python3.13/site-packages/diffusers/pipelines/wan/pipeline_wan.py:353 in prepare_latents │
│ │
│ 352 │ │ │
│❱ 353 │ │ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) │
│ 354 │ │ return latents │
│ │
│/var/home/rmeador/sdnext/venv/lib64/python3.13/site-packages/diffusers/utils/torch_utils.py:177 in randn_tensor │
│ │
│ 176 │ │ elif gen_device_type != device.type and gen_device_type == "cuda": │
│❱ 177 │ │ │ raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.") │
│ 178 │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Cannot generate a cpu tensor from a generator of type cuda.
Backend
Diffusers
Compute
AMD ROCm
Interface
ModernUI
Branch
Master
Model
Any Video Model
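For anyone triaging this: the failing check (paraphrased from the diffusers/utils/torch_utils.py frame in the traceback above; the function name and string arguments here are simplified stand-ins for the actual torch device objects) boils down to a device-type comparison. With offload=none, the pipeline apparently calls prepare_latents with a CPU target device while the generator was created on CUDA, which trips this exact branch:

```python
# Paraphrase of the guard in diffusers/utils/torch_utils.py::randn_tensor.
# Plain strings stand in for torch.device / torch.Generator objects.

def check_generator_device(target_device_type: str, generator_device_type: str) -> None:
    """Raise if a CUDA generator is asked to produce a tensor on a different device."""
    if generator_device_type != target_device_type and generator_device_type == "cuda":
        raise ValueError(
            f"Cannot generate a {target_device_type} tensor from a generator "
            f"of type {generator_device_type}."
        )

# Reproduces the failure mode seen with offload=none:
try:
    check_generator_device("cpu", "cuda")
except ValueError as e:
    print(e)  # Cannot generate a cpu tensor from a generator of type cuda.
```

With offload=balanced the target device presumably stays "cuda", so the branch is never taken and generation proceeds.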
Acknowledgements
- I have read the above and searched for existing issues
- I confirm that this is classified correctly and it's not an extension issue