CUDA out of memory after latest pull I've done today. #11938
Wolvensdale
started this conversation in
General
Replies: 0 comments
Hi, just as the title suggests.
I got a CUDA out of memory error on my 8 GB GPU, even though this usually never happens:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.88 GiB (GPU 0; 8.00 GiB total capacity; 5.79 GiB already allocated; 0 bytes free; 5.96 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
This happens whenever I try to generate images.
I already use --medvram and --xformers, so probably something else is happening (see the sketch below for the max_split_size_mb setting the error message suggests).
OS: Windows 10, 16 GB RAM
Anyone got the same issue?
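
In case it helps others compare, this is a minimal sketch of the workaround the error message itself points at: setting PYTORCH_CUDA_ALLOC_CONF before PyTorch allocates anything on the GPU, and printing a memory summary to check whether reserved memory really is much larger than allocated memory. The 512 MB split size is only an example value, not a tested recommendation, and I haven't confirmed yet whether this fixes the issue here.

```python
# Minimal sketch: configure the CUDA caching allocator before any CUDA tensors
# are created. The variable name and "max_split_size_mb" option come from the
# PyTorch memory-management docs; 512 is an example value, not a recommendation.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:512")

import torch

if torch.cuda.is_available():
    # Print allocated vs. reserved memory for GPU 0 to see whether
    # fragmentation (reserved >> allocated) is actually the problem.
    print(torch.cuda.memory_summary(device=0, abbreviated=True))
```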