Local Play: CUDA out of memory. #186
Unanswered · Koakuma-spec asked this question in Q&A · 0 comments
This error does not always happen; sometimes when I talk to my AI it responds just fine. Other times...
Full error summary when I tried to give a prompt to my AI:
RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 2.00 GiB total capacity; 1.23 GiB already allocated; 0 bytes free; 1.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I'm new to this program, so I'm not exactly sure what to do to avoid this. I can't find any documentation here that helps with my issue, and I don't know what PYTORCH_CUDA_ALLOC_CONF is. All I know is that my computer can sometimes run out of memory, so what are some settings I can manually adjust to prevent this from happening?
I am using 2.7B Nerys with 3 GPU layers and 0 disk cache (this is the max my computer can handle; I've done a lot of trial and error to figure this out).
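For context on the error message: `PYTORCH_CUDA_ALLOC_CONF` is an environment variable read by PyTorch's CUDA caching allocator, and `max_split_size_mb` is one of its options that limits block splitting to reduce fragmentation (the "reserved memory >> allocated memory" case the error mentions). A minimal sketch of setting it from Python, assuming an illustrative value of 128 MiB rather than a tuned one:

```python
import os

# Must be set before PyTorch initializes CUDA, i.e. before `import torch`.
# max_split_size_mb caps the size of cached blocks the allocator will split,
# which can help when reserved memory far exceeds allocated memory.
# 128 is an illustrative starting value, not a tuned recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # only import torch after the variable is set
```

The same effect can be had by exporting the variable in the shell before launching the program, which avoids editing any code.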