-
I am using an RTX 2070 with 8 GB of VRAM but always end up with this error: `RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.53 GiB already allocated; 0 bytes free; 6.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF`. Any suggestions?
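The error message itself points at one mitigation: setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce fragmentation of the CUDA caching allocator. A minimal sketch (the value `128` is an illustrative choice, not a tuned recommendation, and the variable must be set before CUDA is initialized):

```python
import os

# Must be set before importing torch / initializing CUDA for it to take effect.
# 128 MiB is an example split size; smaller values trade speed for less fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, it can be exported in the shell (`export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`) before launching the script. If reserved memory still far exceeds allocated memory, reducing batch size or model precision is usually the next step.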
-
While there is a wide variety of GPUs available, is there any specific GPU architecture required to run localGPT?
I see CUDA mentioned here, so can it run on GPUs other than NVIDIA? If not, how can we build a platform-independent LLM system?
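One common pattern for platform independence in PyTorch-based projects is to select the compute device at runtime rather than hard-coding CUDA. A hedged sketch (the helper name `pick_device` is hypothetical, and whether localGPT supports each backend depends on its own code):

```python
def pick_device() -> str:
    """Return the best available PyTorch device string, falling back to CPU."""
    try:
        import torch
        if torch.cuda.is_available():  # NVIDIA GPUs via CUDA
            return "cuda"
        # Apple Silicon GPUs via Metal Performance Shaders (PyTorch >= 1.12)
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    except ImportError:
        pass  # PyTorch not installed; fall through to CPU
    return "cpu"

print(pick_device())
```

Models and tensors are then moved with `.to(pick_device())`, so the same script runs on NVIDIA, Apple Silicon, or CPU-only machines without modification.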