Is it common that torch allocated 10.67 GiB when I did nothing but load an SDXL model? #15174
Replies: 5 comments
-
I solved this by adding --medvram-sdxl, but it is still weird because I could run SDXL models without this argument before.
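In case it helps anyone: the flag goes into the webui launch script. A minimal sketch for webui-user.bat on Windows (the flag name --medvram-sdxl is taken from the reply above; webui-user.sh users would set COMMANDLINE_ARGS via export instead):

```bat
rem webui-user.bat: apply the medvram optimizations only when an SDXL
rem checkpoint is loaded, so parts of the model are offloaded from VRAM
set COMMANDLINE_ARGS=--medvram-sdxl
```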
-
Well, I only partly solved it. Now my webui crashes when I try to use a LoRA with it.
-
Same problem. After updating the webui, RAM usage became crazy: after the second generation I get an error and the webui crashes (it fills all RAM plus the swap file). This never happened before.
-
Same problem.
-
I have 12 GB of GPU memory and was able to use SDXL models, but after one update I can't load them anymore. Is this a bug, or do I just need a better GPU?
Error message:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 12.00 GiB of which 0 bytes is free. Of the allocated memory 10.67 GiB is allocated by PyTorch, and 601.21 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
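The error itself points at allocator fragmentation (601.21 MiB reserved by PyTorch but unallocated) and suggests max_split_size_mb. A minimal sketch of that tweak, assuming fragmentation rather than raw capacity is part of the problem; the value 128 is an illustrative starting point, not a verified optimum:

```bat
rem webui-user.bat: set the allocator config before the webui starts Python,
rem so PyTorch picks it up when it initializes CUDA
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
set COMMANDLINE_ARGS=--medvram-sdxl
```

If the OOM persists even with that, the weights plus activations may simply not fit in 12 GiB without --medvram-sdxl.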