Commit 3253ae7

[Flaky CI] Increase timeout tolerance for test_mp_crash_detection+test_default_mm_lora_chat_completions (#23028)

Signed-off-by: mgoin <[email protected]>
1 parent 000ccec

File tree: 2 files changed (+4, -3 lines)

tests/entrypoints/openai/test_default_mm_loras.py (2 additions, 1 deletion)

@@ -48,7 +48,8 @@ def multimodal_server():  # noqa: F811
         f"{{\"audio\": \"{AUDIO_LORA_PATH}\"}}",
     ]

-    with RemoteOpenAIServer(MULTIMODAL_MODEL_NAME, args) as remote_server:
+    with RemoteOpenAIServer(MULTIMODAL_MODEL_NAME, args,
+                            max_wait_seconds=480) as remote_server:
         yield remote_server

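The `max_wait_seconds=480` argument bounds how long the fixture waits for the server to become ready before giving up. As a rough illustration only (not vLLM's actual implementation; `wait_until_ready` and `fake_health_check` are hypothetical names), a bounded readiness wait of this kind is typically a poll loop against a deadline:

```python
import time


def wait_until_ready(poll, max_wait_seconds=480, interval=0.1):
    """Call poll() repeatedly until it returns True or the deadline passes.

    Sketches the kind of bounded startup wait a `max_wait_seconds`
    parameter usually controls; raises TimeoutError on expiry.
    """
    deadline = time.monotonic() + max_wait_seconds
    while time.monotonic() < deadline:
        if poll():
            return True
        time.sleep(interval)
    raise TimeoutError(f"server not ready within {max_wait_seconds}s")


# Simulate a server that becomes healthy on the third health check.
calls = {"n": 0}

def fake_health_check():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_until_ready(fake_health_check, max_wait_seconds=5, interval=0.01)
```

Raising the bound to 480s does not slow a healthy run (the loop exits as soon as the check passes); it only gives slow CI machines more headroom before the fixture fails.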
tests/mq_llm_engine/test_error_handling.py (2 additions, 2 deletions)

@@ -255,8 +255,8 @@ def mock_init():
         pass
     end = time.perf_counter()

-    assert end - start < 60, (
-        "Expected vLLM to gracefully shutdown in <60s "
+    assert end - start < 100, (
+        "Expected vLLM to gracefully shutdown in <100s "
         "if there is an error in the startup.")

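This change relaxes the shutdown-timing assertion from 60s to 100s. The underlying pattern is a plain elapsed-time check with `time.perf_counter()`, a high-resolution monotonic clock unaffected by system clock adjustments. A minimal self-contained sketch, with the real shutdown replaced by a short sleep:

```python
import time

# The commit raises the tolerance from 60s to 100s to reduce CI flakiness.
SHUTDOWN_TOLERANCE_S = 100

start = time.perf_counter()
time.sleep(0.01)  # stand-in for the engine's error-path shutdown
end = time.perf_counter()

# Same shape as the assertion in the test: elapsed time must stay
# under the tolerance, with a descriptive failure message.
assert end - start < SHUTDOWN_TOLERANCE_S, (
    f"Expected vLLM to gracefully shutdown in <{SHUTDOWN_TOLERANCE_S}s "
    "if there is an error in the startup.")
```

Because the assertion is an upper bound, loosening it cannot make a passing run fail; it only tolerates slower shutdowns on loaded CI hosts.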