Maximize GPU Memory Usage on Headless System (GTX 750 Ti) #2771
Unanswered · megvadulthangya asked this question in Q&A
Replies: 0
Hello lllyasviel & Devs!
I’m using Stable Diffusion WebUI Forge on a headless system with a GTX 750 Ti (4 GB VRAM), and I’ve run into a challenge with GPU memory usage. In the GUI, the maximum 'GPU Weights' value I can set is 4032 MB, yet some VRAM still appears to go unused. I’d like to make sure the program utilizes all 4 GB of GPU memory, especially since my setup is headless with 32 GB of system RAM and no display to reserve memory for.
After a normal startup, the system reserves only 8 MB of GPU memory, leaving a significant amount of unused VRAM. Even so, I still get the following low-VRAM warning during diffusion iterations:
[Low GPU VRAM Warning] Your current GPU free memory is 885.42 MB for this diffusion iteration.
[Low GPU VRAM Warning] This number is lower than the safe value of 1536.00 MB.
[Low GPU VRAM Warning] If you continue, you may cause NVIDIA GPU performance degradation for this diffusion process, and the speed may be extremely slow (about 10x slower).
[Low GPU VRAM Warning] To solve the problem, you can set the 'GPU Weights' (on the top of page) to a lower value.
[Low GPU VRAM Warning] If you cannot find 'GPU Weights', you can click the 'all' option in the 'UI' area on the left-top corner of the webpage.
[Low GPU VRAM Warning] If you want to take the risk of NVIDIA GPU fallback and test the 10x slower speed, you can (but are highly not recommended to) add '--disable-gpu-warning' to CMD flags to remove this warning.
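For context, the warning fires because iteration-time free memory (885.42 MB) is below the fixed safe margin (1536.00 MB). A rough back-of-envelope sketch, using the numbers from the log above (the helper name and the exact adjustment rule are mine, not Forge's actual internals), shows why lowering 'GPU Weights' by the shortfall is the suggested fix:

```python
# Illustrative headroom arithmetic behind the low-VRAM warning.
# Names and logic are a sketch, not Forge's real memory manager.

SAFE_MARGIN_MB = 1536.0  # the "safe value" quoted in the warning

def recommended_gpu_weights(current_weights_mb: float,
                            free_at_iteration_mb: float) -> float:
    """Lower GPU Weights by the shortfall so that iteration-time
    free memory climbs back above the safe margin."""
    shortfall = SAFE_MARGIN_MB - free_at_iteration_mb
    return current_weights_mb - max(shortfall, 0.0)

# Numbers from the log: weights at the 4032 MB maximum,
# only 885.42 MB free during diffusion.
print(recommended_gpu_weights(4032.0, 885.42))  # roughly 3381 MB
```

Under this reading, the warning is not about unused VRAM at startup but about headroom during diffusion, which is why raising GPU Weights further would make it worse rather than better.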
It appears the program may not be fully utilizing the available 4 GB of VRAM, and I’m not sure the memory management accounts for this properly, especially since the warning seems tuned for non-headless systems. Is there a flag or configuration that would allow full VRAM usage, or some way to force the program to use all of it on my headless system?
Thanks in advance for your help!