I upgraded to version 1.6, and after the update the WebUI stopped working. After a lot of work reinstalling everything from scratch, I got down to an error about running out of VRAM. Before the update (on versions 1.4-1.5) everything worked fine up to a size of 712 x 712, but now I get an error even when generating at 512 x 512. The console output is also very long and seems to repeat itself at times.
console:
venv "F:\AI\SdWebui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8
Launching Web UI with arguments: --xformers --opt-split-attention --no-half
Loading weights [8145104977] from F:\AI\SdWebui\models\Stable-diffusion\Tested working model.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Creating model from config: F:\AI\SdWebui\configs\v1-inference.yaml
Startup time: 7.9s (prepare environment: 1.7s, import torch: 2.6s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 0.7s, create ui: 0.4s, gradio launch: 0.4s).
loading stable diffusion model: OutOfMemoryError
Traceback (most recent call last):
File "C:\Users\User1\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\User1\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\User1\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "F:\AI\SdWebui\modules\initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "F:\AI\SdWebui\modules\shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "F:\AI\SdWebui\modules\sd_models.py", line 499, in get_sd_model
load_model()
File "F:\AI\SdWebui\modules\sd_models.py", line 626, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "F:\AI\SdWebui\modules\sd_models.py", line 367, in load_model_weights
model.float()
File "F:\AI\SdWebui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 88, in float
return super().float()
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 979, in float
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "F:\AI\SdWebui\venv\lib\site-packages\torch\nn\modules\module.py", line 979, in <lambda>
return self._apply(lambda t: t.float() if t.is_floating_point() else t)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 4.00 GiB total capacity; 2.69 GiB already allocated; 0 bytes free; 2.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
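The traceback shows the out-of-memory error is raised inside model.float(), which is what the --no-half launch flag triggers: it loads the weights in full 32-bit precision, roughly doubling their VRAM footprint compared to fp16. On a 4 GiB card, a reasonable first step is to drop --no-half, enable a low-VRAM mode, and try the allocator hint the error message itself suggests. A minimal sketch of webui-user.bat under those assumptions (the flags are the standard AUTOMATIC1111 WebUI command-line options; the max_split_size_mb value of 128 is illustrative, not a known-good setting):

```shell
rem webui-user.bat - sketch for a 4 GiB GPU; adjust values to taste

set PYTHON=
set GIT=
set VENV_DIR=

rem Drop --no-half (it forces fp32 weights, doubling memory use);
rem --medvram trades generation speed for a lower peak VRAM footprint.
set COMMANDLINE_ARGS=--xformers --medvram

rem Allocator hint from the error message, to reduce fragmentation.
rem 128 MiB is an illustrative starting point.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

call webui.bat
```

If generation still fails at 512 x 512 after this, --lowvram is the more aggressive fallback, at a further cost in speed.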