I have two identical systems from Dell; both have an RTX 3060 Ti 8GB GPU in them.
One of them runs Windows 11, the other runs Pop!_OS Linux (Ubuntu based). They are both running the same GitHub clone of AUTOMATIC1111/stable-diffusion-webui, and both can generate txt2img results at similar speeds and quality. However, the Linux machine keeps running into an error when upscaling with LDSR.
Any idea why this could be?
```
CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 7.79 GiB total capacity; 5.63 GiB already allocated; 383.88 MiB free; 5.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
As suggested in other similar threads, I have also tried setting max_split_size_mb to 128, but the same issue persists.
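For reference, here is a minimal sketch of how the allocator setting can be applied (this assumes the variable has to be visible before PyTorch first touches the GPU; the 128 value just comes from those threads, so treat it as an assumption rather than a confirmed fix). Exporting `PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128"` in the shell before launching webui.sh should be equivalent.

```python
# Minimal sketch (assumption, not a confirmed fix): PYTORCH_CUDA_ALLOC_CONF is read
# by PyTorch's CUDA caching allocator, so it needs to be in the environment before
# the allocator is first used.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var is set so the allocator picks it up

# Quick check that CUDA still initializes with the new allocator config.
print(torch.cuda.is_available())
```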