title: "Stable Diffusion WebUI CUDA Out of Memory Crash"
category: "memory-problem"
author: Prequel Community
description: |
  Detects critical CUDA out-of-memory errors in Stable Diffusion WebUI that cause image generation failures and application crashes. This occurs when GPU VRAM is exhausted during model loading or image generation, resulting in complete task failure and potential WebUI instability.
cause: |
  - Insufficient GPU VRAM for the requested image resolution or batch size
  - Memory fragmentation preventing large contiguous allocations
  - Model loading exceeding available VRAM capacity
  - Concurrent GPU processes consuming memory
  - High-resolution image generation without memory optimization flags
impact: |
  - Complete image generation failure
  - WebUI crash requiring a restart
  - Loss of in-progress generation work
  - Potential GPU driver instability
  - Service unavailability for users
tags:
  - memory
  - nvidia
  - crash
  - out-of-memory
  - configuration
mitigation: |
  IMMEDIATE ACTIONS:
  - Restart Stable Diffusion WebUI
  - Clear GPU memory: nvidia-smi --gpu-reset
  - Add memory optimization flags: --medvram or --lowvram
  CONFIGURATION FIXES:
  - For 4-6 GB VRAM: add --medvram to webui-user.bat
  - For 2-4 GB VRAM: add --lowvram to webui-user.bat
  - Enable xformers with --xformers for memory-efficient attention
  - Add --always-batch-cond-uncond for batch processing
  RUNTIME ADJUSTMENTS:
  - Reduce image resolution (e.g. 512x512 instead of 1024x1024)
  - Decrease batch size to 1
  - Lower the batch count for multiple generations
  - Set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
  PREVENTION:
  - Monitor GPU memory usage with nvidia-smi
  - Scale resolution up gradually instead of jumping straight to the maximum
  - Offload high-resolution generation to cloud GPU services
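The configuration fixes above can be sketched as a startup file, assuming a Linux install of AUTOMATIC1111's WebUI (webui-user.sh); on Windows, webui-user.bat uses `set VAR=value` instead of `export`:

```shell
# Sketch of a webui-user.sh for a 4-6 GB card (use --lowvram instead of
# --medvram for 2-4 GB). Flag choice here is an example, not a prescription.
export COMMANDLINE_ARGS="--medvram --xformers --always-batch-cond-uncond"

# Ask the PyTorch CUDA caching allocator to garbage-collect earlier and cap
# split sizes, which reduces fragmentation-driven OOMs on small GPUs.
export PYTORCH_CUDA_ALLOC_CONF="garbage_collection_threshold:0.9,max_split_size_mb:512"
```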
2025-08-29 14:23:49.012 [ERROR] torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 6.00 GiB total capacity; 4.50 GiB already allocated; 1.20 GiB free; 4.80 GiB reserved in total by PyTorch)
2025-08-29 14:23:49.013 [ERROR] RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB. GPU 0 has a total capacity of 6.00 GiB of which 1.20 GiB is free. Process 12345 has 4.50 GiB memory in use.
2025-08-29 14:23:49.014 [CRITICAL] Stable Diffusion model failed to load: OutOfMemoryError
2025-08-29 14:23:49.015 [ERROR] CUDA error: out of memory
2025-08-29 14:23:49.016 [ERROR] GPU 0 has a total capacity of 6.00 GiB of which 1.20 GiB is free. Allocation failed.
2025-08-29 14:23:49.017 [ERROR] Failed to generate image: CUDA out of memory
2025-08-29 14:23:49.018 [INFO] Attempting to clear cache...
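The PyTorch OOM message above embeds the numbers a monitor needs (requested, total, and free VRAM). A minimal sketch of extracting them, using only the standard library; `parse_oom` and `OOM_RE` are hypothetical names, not part of WebUI or PyTorch:

```python
import re

# Matches the size fields of a torch.cuda.OutOfMemoryError message like the
# sample log lines above.
OOM_RE = re.compile(
    r"Tried to allocate (?P<req>[\d.]+) GiB.*?"
    r"(?P<total>[\d.]+) GiB total capacity.*?"
    r"(?P<free>[\d.]+) GiB free"
)

def parse_oom(line):
    """Return (requested, total, free) in GiB, or None if the line doesn't match."""
    m = OOM_RE.search(line)
    if not m:
        return None
    return tuple(float(m.group(k)) for k in ("req", "total", "free"))

sample = ("torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate "
          "2.00 GiB (GPU 0; 6.00 GiB total capacity; 4.50 GiB already allocated; "
          "1.20 GiB free; 4.80 GiB reserved in total by PyTorch)")
print(parse_oom(sample))  # -> (2.0, 6.0, 1.2)
```

A watcher built on this could, for example, alert when the requested allocation exceeds free VRAM, which is exactly the condition the rule detects.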