Replies: 1 comment
I don't know if this will help you, but in my environment the error occurred when I used a specific sampling method. Trying different sampling methods might improve the situation. [DeepL translation]
How can I fix this? I am not a programmer or coder, just a basic user.
Issue: RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Version: f0.0.10-latest-64-g257ac265 • python: 3.10.6 • torch: 2.1.2+cu121 • xformers: N/A • gradio: 3.41.2
Steps to reproduce the problem: unknown.
How did it occur? Generating 3-5+ images triggers the error; using the Refiner with "Starts at 0.8" also triggers it.
Please help!
To load target model SD1ClipModel
Begin to load 1 model
Model loaded in 3.6s (unload existing model: 0.3s, load weights from disk: 0.1s, forge load real models: 2.5s, load VAE: 0.3s, calculate empty prompt: 0.4s).
To load target model BaseModel
Begin to load 1 model
loading in lowvram mode 64.0
89%|█████████████████████████████████████████████████████████████████████████▏ | 25/28 [01:56<00:14, 4.67s/it]
55/60 [02:33<00:29, 5.87s/it]
*** Error completing request
*** Arguments: ('task(g8mh8nnv7q6eqiv)', <gradio.routes.Request object at 0x000001B0A6CD8640>, '1girl, solo, long hair, looking at viewer, smile, bangs, blue eyes, skirt, long sleeves, ribbon, holding, hair between eyes, very long hair, closed mouth, standing, grey hair, outdoors, japanese clothes, alternate costume, shiny, wide sleeves, kimono, shiny hair, bell, copyright name, floating hair, leaf, hakama, hakama skirt, jingle bell, white kimono, miko, red hakama, autumn leaves, torii, maple leaf', 'EasyNegative, EasyNegativeV2, sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), horror, geometry, bad_prompt, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), crown braid, ((2girl)), (deformed fingers:1.2), (long fingers:1.2), (bad-artist-anime), bad-artist, bad-hands-5, bad_prompt_version2, lowres, verybadimagenegative_v1.3, zombie, (no negative:0), NG_DeepNegative_V1_75T, bad_prompt_version2, (KHFB, AuroraNegative), an6, negative_hand, negative_hand-neg, negativeXL, FastNegativeV2, unaestheticXLv13, Aissist-neg,', [], 30, 'DDIM', 1, 1, 9, 768, 512, True, 0.45, 2.25, '4x_NMKD-Siax_200k', 30, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, True, 'BunSane.safetensors [628774cbd3]', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, 
batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\WebUI Forge's\webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\WebUI Forge's\webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\WebUI Forge's\webui\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
File "C:\WebUI Forge's\webui\modules\processing.py", line 749, in process_images
res = process_images_inner(p)
File "C:\WebUI Forge's\webui\modules\processing.py", line 920, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\WebUI Forge's\webui\modules\processing.py", line 1291, in sample
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
File "C:\WebUI Forge's\webui\modules\processing.py", line 1388, in sample_hr_pass
samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "C:\WebUI Forge's\webui\modules\sd_samplers_timesteps.py", line 142, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\WebUI Forge's\webui\modules\sd_samplers_common.py", line 260, in launch_sampling
return func()
File "C:\WebUI Forge's\webui\modules\sd_samplers_timesteps.py", line 142, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\WebUI Forge's\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\WebUI Forge's\webui\modules\sd_samplers_timesteps_impl.py", line 24, in ddim
e_t = model(x, timesteps[index].item() * s_in, **extra_args)
File "C:\WebUI Forge's\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\WebUI Forge's\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\WebUI Forge's\webui\modules\sd_samplers_cfg_denoiser.py", line 163, in forward
real_sigma = fake_sigmas[sigma.round().long().clip(0, int(fake_sigmas.shape[0]))]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
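For anyone curious what the traceback actually means: the failing line indexes `fake_sigmas` (a tensor living on the CPU) with an index tensor derived from `sigma`, which in the lowvram/GPU sampling path can end up on the GPU. PyTorch only allows index tensors that are on the CPU or on the same device as the tensor being indexed. A minimal sketch of the behavior (the variable names mirror the traceback, but the values here are made up for illustration):

```python
import torch

# Stand-in for the sampler's sigma table; a plain CPU tensor.
fake_sigmas = torch.linspace(0.1, 10.0, 1000)

# Stand-in for the current sigma value coming from the sampler.
sigma = torch.tensor(3.7)

# Same indexing pattern as the failing line: round the sigma to an
# integer index and clamp it into range before looking it up.
idx = sigma.round().long().clip(0, fake_sigmas.shape[0] - 1)
real_sigma = fake_sigmas[idx]  # works: index and tensor are both on CPU

# If sigma instead lives on the GPU (as can happen in lowvram mode),
# indexing the CPU tensor with a CUDA index raises exactly this
# RuntimeError. Moving the index to the indexed tensor's device is
# the usual fix (this is a sketch, not the Forge patch itself):
if torch.cuda.is_available():
    sigma_gpu = sigma.cuda()
    idx_gpu = sigma_gpu.round().long().clip(0, fake_sigmas.shape[0] - 1)
    # fake_sigmas[idx_gpu] would raise the RuntimeError above.
    real_sigma = fake_sigmas[idx_gpu.cpu()]  # index moved back to CPU
```

This is why the error shows up only on certain sampler/refiner combinations: only some code paths leave `sigma` on the GPU at that point. Switching the sampling method (as suggested above) avoids the mismatched path; the proper fix is in the Forge code itself.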