
Commit e70961f

Authored by: nv-guomingz

test:update waives.txt for nvbug 5219532 (NVIDIA#3672)

Signed-off-by: nv-guomingz <37257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: nv-guomingz <37257613+nv-guomingz@users.noreply.github.com>

1 parent: 5346f53

File tree: 2 files changed (+2, -5 lines)

tests/integration/defs/examples/test_llama.py (2 additions, 2 deletions)

@@ -4249,7 +4249,7 @@ def test_llm_llama_v3_1_1node_multi_gpus(llama_example_root, llama_model_root,
     mmlu_cmd = generate_mmlu_cmd(example_root=llama_example_root,
                                  data_dir=mmlu_dataset_root,
                                  engine_dir=engine_dir,
-                                 tokenizer_dir=llama_model_root,
+                                 hf_model_dir=llama_model_root,
                                  enable_chunked_prefill=True)
     venv_check_call(llm_venv, mmlu_cmd)

@@ -4361,7 +4361,7 @@ def test_llm_llama_v3_1_2nodes_8gpus(test_type, llama_example_root,
     mmlu_cmd = generate_mmlu_cmd(example_root=llama_example_root,
                                  data_dir=mmlu_dataset_root,
                                  engine_dir=engine_dir,
-                                 tokenizer_dir=llama_model_root,
+                                 hf_model_dir=llama_model_root,
                                  enable_chunked_prefill=True)
     venv_check_call(llm_venv, mmlu_cmd)
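The change above is a keyword rename at the two call sites: the test now passes the model path to `generate_mmlu_cmd` as `hf_model_dir` rather than `tokenizer_dir`. A minimal sketch of the idea follows; the stub `generate_mmlu_cmd` below and its flag names are hypothetical illustrations, while the real helper lives in the integration-test defs and builds the full MMLU evaluation command.

```python
# Hypothetical stub illustrating the keyword rename; the real
# generate_mmlu_cmd in the test defs assembles the actual MMLU
# evaluation command line. Flag names here are illustrative.
def generate_mmlu_cmd(example_root, data_dir, engine_dir,
                      hf_model_dir, enable_chunked_prefill=False):
    cmd = [
        "python3", f"{example_root}/mmlu.py",
        "--data_dir", data_dir,
        "--engine_dir", engine_dir,
        # Was passed via tokenizer_dir before this commit's rename.
        "--hf_model_dir", hf_model_dir,
    ]
    if enable_chunked_prefill:
        cmd.append("--enable_chunked_prefill")
    return cmd

# Example paths are placeholders, mirroring the call shape in the diff.
mmlu_cmd = generate_mmlu_cmd(example_root="/examples/llama",
                             data_dir="/data/mmlu",
                             engine_dir="/engines/llama-3.1",
                             hf_model_dir="/models/llama-3.1",
                             enable_chunked_prefill=True)
print(mmlu_cmd)
```

Because the parameter is passed by keyword at every call site, the rename is a compile-time-visible change: any caller still using `tokenizer_dir` fails immediately with a `TypeError` rather than silently misrouting the path.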

tests/integration/test_lists/waives.txt (0 additions, 3 deletions)

@@ -423,9 +423,6 @@ accuracy/test_cli_flow.py::TestGpt2Medium::test_fp8_lm_head SKIP (https://nvbugs
 examples/test_multimodal.py::test_llm_multimodal_general[VILA1.5-3b-pp:1-tp:1-float16-bs:1-cpp_e2e:False-nb:1] SKIP (https://nvbugs/5214239)
 examples/test_multimodal.py::test_llm_fp8_multimodal_general[fp8-fp8-scienceqa-Llama-3.2-11B-Vision-Instruct-pp:1-tp:1-bfloat16-bs:1-cpp_e2e:False] SKIP (https://nvbugs/5222697)
 examples/test_gpt.py::test_llm_gpt2_santacoder_1node_4gpus[parallel_build-enable_fmha-enable_gemm_plugin-enable_attention_plugin] SKIP (https://nvbugs/5219531)
-examples/test_llama.py::test_llm_llama_v3_1_1node_multi_gpus[enable_gemm_allreduce_plugin-llama-3.1-405b-enable_fp8] SKIP (https://nvbugs/5219532)
-examples/test_llama.py::test_llm_llama_v3_1_1node_multi_gpus[enable_gemm_allreduce_plugin-llama-3.1-405b-fp8-disable_fp8] SKIP (https://nvbugs/5219532)
-examples/test_llama.py::test_llm_llama_v3_1_1node_multi_gpus[disable_gemm_allreduce_plugin-llama-3.1-70b-enable_fp8] SKIP (https://nvbugs/5219532)
 examples/test_llama.py::test_llm_llama_v3_1_1node_multi_gpus[enable_gemm_allreduce_plugin-llama-3.1-70b-disable_fp8] SKIP (https://nvbugs/5219533)
 examples/test_medusa.py::test_llama_medusa_1gpu[llama-v2-7b-hf] SKIP (https://nvbugs/5219534)
 examples/test_medusa.py::test_llama_medusa_1gpu[llama-3.2-1b] SKIP (https://nvbugs/5219534)
