3 files changed: +4 −6 lines changed

tests/integration/test_lists

@@ -189,9 +189,9 @@ l0_dgx_h100:
   # ------------- CPP tests ---------------
   - cpp/test_multi_gpu.py::test_mpi_utils[90]
   - cpp/test_multi_gpu.py::test_fused_gemm_allreduce[4proc-90]
-  - cpp/test_multi_gpu.py::test_cache_transceiver[2proc-ucx_kvcache-90] ISOLATION
-  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-nixl_kvcache-90] ISOLATION
-  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-ucx_kvcache-90] ISOLATION
+  - cpp/test_multi_gpu.py::test_cache_transceiver[2proc-ucx_kvcache-90]
+  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-nixl_kvcache-90]
+  - cpp/test_multi_gpu.py::test_cache_transceiver[8proc-ucx_kvcache-90]
   - cpp/test_multi_gpu.py::test_user_buffer[2proc-90]
   - cpp/test_multi_gpu.py::test_enc_dec[t5-90]
   - cpp/test_multi_gpu.py::test_llama_executor[llama-orchestrator-90]
@@ -65,7 +65,7 @@ l0_l40s:
   - llmapi/test_llm_examples.py::test_llmapi_example_multilora
   - llmapi/test_llm_examples.py::test_llmapi_example_guided_decoding
   - llmapi/test_llm_examples.py::test_llmapi_example_logits_processor
-  - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0] ISOLATION
+  - examples/test_llm_api_with_mpi.py::test_llm_api_single_gpu_with_mpirun[TinyLlama-1.1B-Chat-v1.0]
 - condition:
     ranges:
       system_gpu_count:
@@ -380,7 +380,5 @@ test_e2e.py::test_ptp_quickstart_multimodal[qwen2-vl-7b-instruct-Qwen2-VL-7B-Ins
 accuracy/test_cli_flow.py::TestMixtral8x7B::test_fp4_plugin SKIP (https://nvbugs/5451207)
 accuracy/test_cli_flow.py::TestMixtral8x22B::test_fp8_tp2pp2 SKIP (https://nvbugs/5511944)
 examples/test_llama.py::test_llm_llama_1gpu_fp4[llama-3.1-70b-instruct-enable_norm_quant_fusion-enable_fused_quant-fp4_plugin-bfloat16] SKIP (https://nvbugs/5543383)
-cpp/test_multi_gpu.py::test_cache_transceiver[8proc-nixl_kvcache-90] SKIP (https://nvbugs/5492250)
 triton_server/test_triton.py::test_gpt_2b_ib_lora[gpt-2b-ib-lora] SKIP (https://nvbugs/5470830)
 accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_guided_decoding_4gpus[llguidance] SKIP (https://nvbugs/5594703)
-cpp/test_multi_gpu.py::test_cache_transceiver[2proc-ucx_kvcache-90] SKIP (https://nvbugs/5492250)