2 files changed: +1 −2

tensorrt_llm/serve/scripts
@@ -1133,7 +1133,7 @@ def sample(
     if len(prompts) >= num_requests:
         break
     prompt = parser_fn(item)
-    mm_content = process_image(item["images"][0])
+    mm_content = [process_image(item["images"][0])]
     prompt_len = len(tokenizer(prompt).input_ids)
     if enable_multimodal_chat:
         prompt = self.apply_multimodal_chat_transformation(
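The fix wraps the single processed image in a list because downstream multimodal chat formatting treats `mm_content` as a list of content entries. A minimal Python sketch of that contract (the bodies of `process_image` and `apply_multimodal_chat_transformation` below are assumptions for illustration; only the names come from this diff):

```python
# Minimal sketch (assumed shapes, not the actual benchmark code) of why the
# fix wraps the processed image in a list: chat-style multimodal prompts carry
# a list of content entries, so a bare dict breaks downstream unpacking.

def process_image(image_path: str) -> dict:
    # Assumption: returns one OpenAI-style content entry for a single image.
    return {"type": "image_url", "image_url": {"url": image_path}}

def apply_multimodal_chat_transformation(prompt: str, mm_content: list) -> list:
    # Assumption: consumers splice mm_content into the message content list,
    # so it must already be a list of entries.
    return [{
        "role": "user",
        "content": [{"type": "text", "text": prompt}, *mm_content],
    }]

# Before the fix, mm_content was a single dict; "*mm_content" would then
# unpack the dict's keys ("type", "image_url") instead of the entry itself.
mm_content = [process_image("cat.jpg")]  # the fixed, list-wrapped form
print(apply_multimodal_chat_transformation("Describe the image.", mm_content))
```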
tests/integration/test_lists
@@ -284,7 +284,6 @@ accuracy/test_llm_api_pytorch.py::TestLlama3_2_3B::test_auto_dtype SKIP (https:/
 test_e2e.py::test_ptp_quickstart_multimodal[NVILA-8B-FP16-vila/NVILA-8B-image-False] SKIP (https://nvbugs/5444060)
 test_e2e.py::test_ptp_quickstart_multimodal[qwen2.5-vl-7b-instruct-Qwen2.5-VL-7B-Instruct-video-False] SKIP (https://nvbugs/5444060)
 test_e2e.py::test_ptp_quickstart_multimodal[qwen2.5-vl-7b-instruct-Qwen2.5-VL-7B-Instruct-video-True] SKIP (https://nvbugs/5444060)
-test_e2e.py::test_trtllm_multimodal_benchmark_serving SKIP (https://nvbugs/5523315)
 examples/test_llama.py::test_llm_llama_1gpu_fp8_kv_cache[llama-v2-7b-hf-bfloat16] SKIP (https://nvbugs/5527940)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_fp8_blockscale[throughput] SKIP (https://nvbugs/5481198)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_fp8_blockscale_chunked_prefill[latency] SKIP (https://nvbugs/5481198)
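Deleting the waive line re-enables test_e2e.py::test_trtllm_multimodal_benchmark_serving in CI; entries in this list follow the pattern `<test id> SKIP (<bug URL>)`. A hedged way to confirm locally that the test is collected again (the module path relative to the integration test root is an assumption inferred from the test id):

```python
# Hypothetical local check that the previously waived test collects again.
# The working directory / module path is an assumption based on the test id.
import sys
import pytest

sys.exit(pytest.main([
    "test_e2e.py::test_trtllm_multimodal_benchmark_serving",
    "--collect-only", "-q",
]))
```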