I am running Stable Diffusion (Automatic1111 WebUI) locally on an RTX 3060 (12 GB VRAM). Whenever I try to train a model, whether it's 2.1 or 1.5 (512 or 768), I run into an "OOM" / "CUDA out of memory" error every time.

Current batch size = 1.
I tried reducing the image size, but the error is the same.

Any help would be appreciated.
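For context, here is a minimal sketch of the kind of memory-saving launch flags I understand can be set in webui-user.bat. This is an assumption on my part, not my current config, and I have not confirmed whether these flags help (or interfere) during training:

```bat
@echo off
rem Sketch of webui-user.bat with VRAM-saving launch flags (assumed, not verified for training).
rem --xformers enables the xformers attention optimization; --medvram trades speed for lower VRAM use.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --medvram
call webui.bat
```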