Replies: 4 comments 7 replies
-
Probably OOM.
-
It seems like you're running it out of a OneDrive directory. I'm not sure exactly how OneDrive directories work, but I'd recommend moving your installation into a normal folder to see if that fixes the issue. This is definitely not an OOM error, contrary to what Cyberes suggests. P.S. I recommend the test prompt "Edward Snowden"
-
Hi again, guys. Today my automatic1111 won't load. Sorry to bother you, but if anyone could help, that would be great. The console shows:
#######################################################################################################
Launching Web UI with arguments: --no-half
-
Hi guys, yesterday everything was working perfectly, but today I was greeted with a new error. I tried everything I found online so as not to bother anyone here, but as you can imagine, nothing worked for me. If someone could help, please let me know. Below is the error displayed at the bottom, followed by "Press any key to continue..."; the window closes as soon as I hit any key. Thanks for your help.
ModuleNotFoundError: No module named 'huggingface_hub.utils._typing'
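(Editor's note, not from the thread: this error usually means the `huggingface_hub` package in the webui's venv is out of sync with the library importing it. A minimal sketch of one common fix, reinstalling the package inside the venv -- the exact version needed is an assumption and may differ per install.)

```shell
# Run from the stable-diffusion-webui folder.
# Activate the venv first (Windows):
#   venv\Scripts\activate
# Then reinstall huggingface_hub; upgrading usually restores the
# missing huggingface_hub.utils._typing submodule.
pip install --upgrade --force-reinstall huggingface_hub
```

If upgrading breaks something else, pinning to the version listed in the webui's requirements file is the safer route.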
-
Hi! Maybe someone could help me. Everything was working, but I had the dumbest idea to try adding more models and as a result I fu*** up. I deleted everything and started from scratch but had no luck. I tried to find a similar problem so as not to bother anyone, but nothing I found helped. The error is:
Error completing request
Arguments: ('planet earth', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 2, 'Latent', 0, 0, 0, 0, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
processed = process_images(p)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\processing.py", line 479, in process_images
res = process_images_inner(p)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\processing.py", line 608, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\processing.py", line 797, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\sd_samplers.py", line 537, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\sd_samplers.py", line 440, in launch_sampling
return func()
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\sd_samplers.py", line 537, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\sd_samplers.py", line 338, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
return self.inner_model.apply_model(x, t, cond)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 781, in forward
h = module(h, emb, context)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 82, in forward
x = layer(x, emb)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\modules\sd_hijack_checkpoint.py", line 10, in ResBlock_forward
return checkpoint(self._forward, x, emb)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 235, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 96, in forward
outputs = run_function(*args)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 262, in _forward
h = self.in_layers(x)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
input = module(input)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Viral\OneDrive\Documents\GitHub\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([2, 1280, 48, 48], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv2d(1280, 640, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
memory_format = Contiguous
data_type = CUDNN_DATA_HALF
padding = [1, 1, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 000002385058FDF0
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 2, 1280, 48, 48,
strideA = 2949120, 2304, 48, 1,
output: TensorDescriptor 0000023850590020
type = CUDNN_DATA_HALF
nbDims = 4
dimA = 2, 640, 48, 48,
strideA = 1474560, 2304, 48, 1,
weight: FilterDescriptor 0000023856CEF110
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 640, 1280, 3, 3,
Pointer addresses:
input: 0000000C03080000
output: 0000000BA1000000
weight: 0000000B6B520000
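(Editor's note, not from the thread: CUDNN_STATUS_INTERNAL_ERROR during a convolution is frequently VRAM exhaustion in disguise. A minimal sketch of backend settings one might toggle before rerunning -- it assumes PyTorch is installed and uses the same `torch.backends` flags shown in the repro snippet above; whether this resolves this particular failure is not confirmed.)

```python
import torch

# The repro above runs with cudnn.benchmark = True; autotuning makes
# cuDNN allocate extra workspace memory while trying algorithms, which
# can push a nearly-full GPU over the edge. Disable it.
torch.backends.cudnn.benchmark = False

# Deterministic algorithms also avoid some memory-hungry cuDNN paths.
torch.backends.cudnn.deterministic = True

# If a previous failed run left allocations behind, release the
# cached blocks held by PyTorch's allocator.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```

Lowering the batch size or image resolution (768x768 in the arguments above) is the other usual lever if the error persists.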