Commit 3f606c5
Fix vLLM CI test by increasing gpu_memory_utilization to 0.4
The CI test was failing with 'ValueError: To serve at least one request
with the model's max seq len (8192), 1.5 GiB KV cache is needed, which
is larger than the available KV cache memory (1.42 GiB).'
Root cause:
- Tesla T4 GPU (15.36 GB) in CI environment
- With gpu_memory_utilization=0.35, only 1.42 GiB available for KV cache
- Required 1.5 GiB for max_seq_len=8192
- Shortfall: ~80 MiB (1.5 - 1.42 = 0.08 GiB)
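The shortfall in the error message is just the gap between the required and available KV-cache sizes. A quick arithmetic check of the numbers above (all values taken from the error message; `GiB` here means binary gibibytes, as vLLM reports):

```python
# Values from the vLLM error message in the CI log.
required_kv_gib = 1.5    # KV cache needed for max_seq_len=8192
available_kv_gib = 1.42  # KV cache available at gpu_memory_utilization=0.35

shortfall_gib = required_kv_gib - available_kv_gib
shortfall_mib = shortfall_gib * 1024  # 0.08 GiB is about 82 MiB

print(round(shortfall_gib, 2))  # 0.08
print(round(shortfall_mib))     # 82
```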
Fix:
- Increase gpu_memory_utilization from 0.35 to 0.4
- Now provides ~1.62 GiB for KV cache (sufficient for 1.5 GiB requirement)
- Does not affect model outputs with temperature=0.0 (deterministic)

1 parent f54496a
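A sketch of the intended test configuration after the fix. The model name and exact keyword layout are assumptions for illustration (the commit does not show the test file's contents); `gpu_memory_utilization` and `max_model_len` are real parameters of vLLM's `LLM` constructor. The ~1.62 GiB figure assumes, as the commit message's own estimate implies, that KV-cache memory scales roughly linearly with the utilization fraction:

```python
# Estimated KV cache at the new setting, scaling the measured 1.42 GiB
# proportionally from 0.35 to 0.4 utilization.
new_kv_estimate_gib = 1.42 * (0.4 / 0.35)  # ~1.62 GiB, above the 1.5 GiB requirement

# Hypothetical engine arguments for the CI test after this commit.
engine_kwargs = {
    "model": "facebook/opt-125m",   # placeholder; the actual CI model is not shown here
    "max_model_len": 8192,          # the max seq len from the error message
    "gpu_memory_utilization": 0.4,  # raised from 0.35 by this commit
}

try:
    from vllm import LLM, SamplingParams

    llm = LLM(**engine_kwargs)
    # temperature=0.0 means greedy decoding, so raising the memory budget
    # does not change the generated outputs the test asserts on.
    params = SamplingParams(temperature=0.0)
except ImportError:
    # vLLM (and a GPU) may be unavailable outside CI; engine_kwargs above
    # still documents the intended configuration.
    pass

print(round(new_kv_estimate_gib, 2))  # 1.62
```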
1 file changed, +1 -1 (diff body not rendered; the single changed line is line 8 of the modified file)