-
Hi, is there a way to allocate more VRAM when generating images? I have a 3080, and sometimes CUDA runs out of memory at 6 GB or less, so I can't really generate images much larger than 1024x1024. I'm not very technically inclined, so I have no idea how to interface with anything besides the WebUI :( Any help would be appreciated!
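For intuition on why higher resolutions hit VRAM so hard: in Stable Diffusion's U-Net, self-attention memory grows roughly with the square of the latent token count, which is why flags like `--opt-split-attention` help. A back-of-envelope sketch (the 8x VAE downsampling, 8 heads, and fp16 are typical assumptions for SD 1.x; real memory use varies by model and implementation):

```python
# Rough estimate of one full self-attention map's size in SD's U-Net.
# Assumptions (illustrative, not exact): VAE downsamples by 8x, one token
# per latent pixel, ~8 attention heads, fp16 (2 bytes per element).

def attention_bytes(width: int, height: int, heads: int = 8, bytes_per_el: int = 2) -> int:
    tokens = (width // 8) * (height // 8)          # one token per latent pixel
    return tokens * tokens * heads * bytes_per_el  # full tokens x tokens map

for side in (512, 768, 1024):
    gib = attention_bytes(side, side) / 2**30
    print(f"{side}x{side}: ~{gib:.2f} GiB for one full attention map")
```

Doubling the side length quadruples the token count and so multiplies the attention map by 16, which is why 1024x1024 can exhaust a card that handles 512x512 comfortably.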
Replies: 3 comments 1 reply
-
Works great, thank you! And yes, it does take quite a while longer to process.
-
Hi there, my problem is similar. I started auto1111 with an Nvidia Quadro M4000 (8 GB); Torch reserves around 3.6 GB, which is fine. Now I have installed an additional Nvidia Tesla P4 (8 GB) and re-installed auto1111, without changing any arguments after download/install. Unfortunately, nothing has changed: the maximum available VRAM is still 8 GB, and I have no visibility into which GPU is being used. I'm no techie, but more verbose output would be great to narrow down the issue. Is there a way? (P4 usage model: dedicated for compute.)
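One thing worth checking (a sketch, not verified on this exact setup): which device Torch actually picks. You can list the GPUs and their indices with `nvidia-smi -L`, then restrict what CUDA sees via the standard `CUDA_VISIBLE_DEVICES` environment variable; recent webui builds also accept a `--device-id` argument for the same purpose. In `webui-user.bat` that might look like:

```bat
rem webui-user.bat (sketch) -- make only the Tesla P4 visible to CUDA.
rem Find its index first with: nvidia-smi -L  (index 1 here is an assumption)
set CUDA_VISIBLE_DEVICES=1
set COMMANDLINE_ARGS=
call webui.bat
```

Note that stock auto1111 runs inference on a single GPU, so adding a second 8 GB card does not pool VRAM into 16 GB; it only lets you choose which card does the work.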
You can try `--medvram --opt-split-attention`, or just `--medvram`, in the `set COMMANDLINE_ARGS=` line of `webui-user.bat`. It will be slower, but that is the cost to pay. This helped me (I have an RTX 2060 6GB) get larger batches and/or higher resolutions. You can also keep an eye on the VRAM consumption of other processes (between Windows 10 (dwm + explorer) and Edge, ~500 MB of VRAM is reserved).
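For anyone unsure where those flags go, `webui-user.bat` would end up looking like this (a sketch of the default file; only the `COMMANDLINE_ARGS` line changes):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --opt-split-attention

call webui.bat
```

Edit the file in Notepad, save, and relaunch the WebUI via this `.bat` for the flags to take effect.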