2 files changed: +0 −3 lines
unittest/_torch/thop/parallel

@@ -349,5 +349,3 @@ full:H20-3e/accuracy/test_llm_api_pytorch.py::TestNemotronUltra::test_auto_dtype
 full:H20-3e/accuracy/test_llm_api_pytorch.py::TestKimiK2::test_fp8_blockscale[latency] SKIP (slow I/O)
 full:H20-3e/test_e2e.py::test_ptp_quickstart_advanced_multi_gpus[DeepSeek-V3-671B-FP8-DeepSeek-V3-0324-8] SKIP (slow I/O)
 disaggregated/test_disaggregated_single_gpu.py::test_disaggregated_spec_dec_batch_slot_limit[False-False-EAGLE3-LLaMA3.1-Instruct-8B-Llama-3.1-8B-Instruct] SKIP (https://nvbugs/5608743)
-accuracy/test_disaggregated_serving.py::TestGPTOSS::test_auto_dtype[False] SKIP (https://nvbugs/5624367)
-accuracy/test_disaggregated_serving.py::TestGPTOSS::test_auto_dtype[True] SKIP (https://nvbugs/5624367)
@@ -6,7 +6,6 @@
 from tensorrt_llm.models.modeling_utils import QuantAlgo, QuantConfig


-@pytest.mark.skip(reason="https://nvbugs/5619396")
 @skip_blackwell
 @skip_pre_hopper
 @pytest.mark.parametrize("dtype", [torch.float16, torch.bfloat16])
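For context, removing the `@pytest.mark.skip(...)` line re-enables the test while the remaining markers keep working: a hardware guard may still skip at collection time, and `parametrize` expands the test into one case per dtype. A minimal self-contained sketch of that marker stack — `skip_pre_hopper` here is a hypothetical stand-in for the repo's GPU-capability guard, and the dtypes are plain strings so the example runs without torch:

```python
import pytest

# Hypothetical stand-in for the repo's skip_pre_hopper guard, which in the
# real codebase checks the GPU compute capability; condition False here so
# the test is never skipped in this sketch.
skip_pre_hopper = pytest.mark.skipif(False, reason="requires Hopper or newer GPU")

# With the @pytest.mark.skip line removed (as in this diff), the markers
# below still apply: the guard decides skipping, and parametrize expands
# the test into one invocation per dtype name.
@skip_pre_hopper
@pytest.mark.parametrize("dtype", ["float16", "bfloat16"])
def test_dtype_name(dtype):
    # Hypothetical test body: each parametrized case sees one dtype string.
    assert dtype in ("float16", "bfloat16")
```

Applying the marks as decorators records them on the function's `pytestmark` list, which is how pytest later expands and filters the cases.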