Commit 72fcff1 (1 parent: d6e315e)

[None][fix] add timeout for llama4 (#8254)

Signed-off-by: Xin He (SW-GPU) <[email protected]>
File tree: 2 files changed, +5 −4 lines

tests/integration/defs/accuracy/test_llm_api_pytorch.py (2 additions, 1 deletion)

@@ -644,6 +644,7 @@ def test_nvfp4_tp4(self):
                       extra_evaluator_kwargs=dict(apply_chat_template=True))


+@pytest.mark.timeout(14400)
 class TestLlama4MaverickInstruct(LlmapiAccuracyTestHarness):
     MODEL_NAME = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
     MODEL_PATH = f"{llm_models_root()}/llama4-models/Llama-4-Maverick-17B-128E-Instruct"
@@ -1896,7 +1897,7 @@ def test_guided_decoding_4gpus(self, backend: str, mtp_nextn: int, mocker):
         task.evaluate(llm)


-@pytest.mark.timeout(7200)
+@pytest.mark.timeout(14400)
 @pytest.mark.skip_less_device_memory(80000)
 class TestDeepSeekR1(LlmapiAccuracyTestHarness):
     MODEL_NAME = "deepseek-ai/DeepSeek-R1"
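
For context: `@pytest.mark.timeout(...)` is the marker from the pytest-timeout plugin; a class-level marker applies the budget (here 14400 s, i.e. 4 hours) to every test in the class, and an overrunning test is aborted and reported as a failure rather than hanging the CI job. A minimal, self-contained sketch of the plugin's signal-based mechanism on POSIX — the helper names below are illustrative, not part of pytest-timeout or this commit:

```python
# Hypothetical sketch of a SIGALRM-based timeout, similar in spirit to
# pytest-timeout's "signal" method on POSIX. Not the plugin's real code.
import signal
import time


class TimeoutExceeded(Exception):
    """Raised when the wrapped callable exceeds its time budget."""


def run_with_timeout(fn, seconds):
    # Arm SIGALRM to fire after `seconds`; the handler aborts the call.
    def handler(signum, frame):
        raise TimeoutExceeded(f"exceeded {seconds}s")

    old = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        return fn()
    finally:
        signal.alarm(0)                      # disarm the pending alarm
        signal.signal(signal.SIGALRM, old)   # restore the previous handler


# A call that finishes within the budget returns normally...
assert run_with_timeout(lambda: 42, 5) == 42

# ...while one that overruns is interrupted mid-execution.
try:
    run_with_timeout(lambda: time.sleep(2), 1)
    timed_out = False
except TimeoutExceeded:
    timed_out = True
print(timed_out)  # -> True
```

Raising the marker's argument, as this commit does, simply widens that budget; the mechanism is unchanged.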

tests/integration/test_lists/waives.txt (3 additions, 3 deletions)

@@ -304,9 +304,9 @@ accuracy/test_llm_api_pytorch.py::TestLlama3_2_3B::test_auto_dtype SKIP (https:/
 accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_2gpus[ep2-cutlass-auto] SKIP (https://nvbugs/5519530)
 accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_2gpus[dp2-cutlass-auto] SKIP (https://nvbugs/5519530)
 accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_w4_2gpus[tp2-cutlass-auto] SKIP (https://nvbugs/5519530)
-full:H100/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8ep8-cuda_graph=True] SKIP (https://nvbugs/5512734)
-full:H100/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8ep4-cuda_graph=True] SKIP (https://nvbugs/5512734)
-full:H100/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8-cuda_graph=True] SKIP (https://nvbugs/5512734)
+full:H20/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8ep8-cuda_graph=True] SKIP (https://nvbugs/5572539)
+full:H20/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8ep4-cuda_graph=True] SKIP (https://nvbugs/5572539)
+full:H20/accuracy/test_llm_api_pytorch.py::TestLlama4MaverickInstruct::test_fp8[tp8-cuda_graph=True] SKIP (https://nvbugs/5572539)
 full:A100/test_e2e.py::test_ptp_quickstart_multimodal[NVILA-8B-FP16-vila/NVILA-8B-video-False] SKIP (https://nvbugs/5453725)
 test_e2e.py::test_ptp_scaffolding[DeepSeek-R1-Distill-Qwen-7B-DeepSeek-R1/DeepSeek-R1-Distill-Qwen-7B] SKIP (https://nvbugs/5517260)
 test_e2e.py::test_ptp_quickstart_multimodal[NVILA-8B-FP16-vila/NVILA-8B-image-False] SKIP (https://nvbugs/5509024)
