Commit 3111682

[None][infra] Waive failed cases on main 11/05 (#8936)

Signed-off-by: qqiao <[email protected]>
1 parent cc4aa29

File tree

2 files changed: +10 −1 lines changed

tests/integration/test_lists/waives.txt

Lines changed: 9 additions & 1 deletion
@@ -405,7 +405,7 @@ accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_bf16[multi_gpus_no_cache] S
 examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] SKIP (https://nvbugs/5606268)
 disaggregated/test_disaggregated_single_gpu.py::test_disaggregated_simple_deepseek[True-False-DeepSeek-V3-Lite-fp8/fp8] SKIP (https://nvbugs/5626197)
 disaggregated/test_disaggregated_single_gpu.py::test_disaggregated_simple_deepseek[True-True-DeepSeek-V3-Lite-fp8/fp8] SKIP (https://nvbugs/5628952)
-accuracy/test_llm_api_pytorch_multimodal.py::TestQwen2_5_VL_7B::test_auto_dtype SKIP
+accuracy/test_llm_api_pytorch_multimodal.py::TestQwen2_5_VL_7B::test_auto_dtype SKIP (https://nvbugs/5636894)
 accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_auto_dtype[False-False-False] SKIP (https://nvbugs/5629790)
 test_e2e.py::test_trtllm_bench_pytorch_backend_sanity[meta-llama/Llama-3.1-8B-llama-3.1-8b-hf-nvfp4-False-False] SKIP (https://nvbugs/5629791)
 accuracy/test_disaggregated_serving.py::TestLlama4ScoutInstruct::test_auto_dtype[False] SKIP (https://nvbugs/5629792)
@@ -418,3 +418,11 @@ examples/test_llama.py::test_llama_3_x_fp8_with_bf16_lora[llama-3.1-8b] SKIP (ht
 accuracy/test_cli_flow.py::TestLlama3_2_1B::test_fp8 SKIP (https://nvbugs/5629793)
 accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_auto_dtype[True-True-True] SKIP (https://nvbugs/5629793)
 accuracy/test_llm_api_pytorch.py::TestQwen3_235B_A22B::test_fp8[throughput_latency] SKIP (https://nvbugs/5631036)
+test_e2e.py::test_openai_chat_multimodal_example SKIP (https://nvbugs/5636894)
+accuracy/test_llm_api_autodeploy.py::TestLlama3_1_8B::test_auto_dtype[False-2] SKIP (https://nvbugs/5636912)
+accuracy/test_llm_api_autodeploy.py::TestLlama3_1_8B::test_auto_dtype[False-4] SKIP (https://nvbugs/5636912)
+accuracy/test_llm_api_pytorch.py::TestQwen3_235B_A22B::test_nvfp4[latency_moe_trtllm_attention_dp] SKIP (https://nvbugs/5637220)
+llmapi/test_llm_examples.py::test_llmapi_example_multilora SKIP (https://nvbugs/5636857)
+unittest/_torch/modules SKIP (https://nvbugs/5636986,https://nvbugs/5637012,https://nvbugs/5637037)
+accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_eagle3[cutlass] SKIP (https://nvbugs/5636916)
+accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_bfloat16_4gpus[tp4-attn_backend=TRTLLM-torch_compile=False] SKIP (https://nvbugs/5616182)
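
Each waives.txt entry pairs a test identifier with a SKIP action and, usually, one or more nvbugs links; the removed line in the first hunk shows the reason clause can also be absent. A minimal hedged sketch of parsing this format — the regex and `parse_waive` helper are assumptions inferred from the lines shown, not an actual TensorRT-LLM utility:

```python
import re

# Waive lines look like "<test_id> SKIP (<comma-separated bug URLs>)".
# The parenthesised reason is optional (see the removed line in the diff).
WAIVE_RE = re.compile(r"^(?P<test>\S+)\s+SKIP(?:\s+\((?P<bugs>[^)]*)\))?\s*$")

def parse_waive(line):
    """Return (test_id, [bug URLs]) for a waive line, or None if malformed."""
    m = WAIVE_RE.match(line.strip())
    if not m:
        return None
    bugs = m.group("bugs")
    return m.group("test"), bugs.split(",") if bugs else []
```

Note that a single entry may carry several bug links, as the `unittest/_torch/modules` line above does; splitting on commas keeps each URL separate.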

tests/unittest/_torch/multi_gpu/test_mnnvl_allreduce.py

Lines changed: 1 addition & 0 deletions
@@ -164,6 +164,7 @@ def func(input, residual, norm_weight, eps, enable_fusion):
     )


+@pytest.mark.skip(reason="https://nvbugs/5597647")
 @pytest.mark.skipif(torch.cuda.device_count() < 2,
                     reason="needs 2 GPUs to run this test")
 @pytest.mark.parametrize(
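
The hunk above stacks an unconditional `skip` marker on top of the existing `skipif` guard, so the test is waived regardless of GPU count while the bug is open. As a minimal sketch of how such markers attach to a test function — the test name here is hypothetical:

```python
import pytest

# Unconditional skip, as added in the diff above: pytest reports the
# test as skipped and surfaces the reason (here, a bug-tracker URL).
@pytest.mark.skip(reason="https://nvbugs/5597647")
def test_waived():
    assert False  # never executed while the marker is present

# pytest stores applied marks on the function's `pytestmark` list,
# which the collector reads at collection time to decide to skip.
skip_mark = test_waived.pytestmark[0]
print(skip_mark.name, skip_mark.kwargs["reason"])
```

Because the marker is read at collection time, re-enabling the test later only requires deleting the decorator line; the test body itself stays untouched.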

0 commit comments
