
Commit a97e411

[https://nvbugs/5747911][fix] Use offline data path for the unit test of mmencoder server (#10135)
Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>
1 parent f02782a commit a97e411

File tree

2 files changed (+3, -2 lines)


tests/integration/test_lists/waives.txt

Lines changed: 0 additions & 1 deletion
@@ -459,7 +459,6 @@ accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_bfloat16_4gpus[tp2pp2
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_bfloat16_4gpus[tp4-mtp_nextn=2-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=True] SKIP (https://nvbugs/5740075)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=2-tp2pp2-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False] SKIP (https://nvbugs/5740075)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=2-tp4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False] SKIP (https://nvbugs/5740075)
-test_e2e.py::test_openai_mmencoder_example SKIP (https://nvbugs/5747911)
 test_e2e.py::test_trtllm_serve_multimodal_example SKIP (https://nvbugs/5747920)
 examples/test_whisper.py::test_llm_whisper_general[large-v3-disable_gemm_plugin-enable_attention_plugin-disable_weight_only-float16-nb:1-use_cpp_runtime] SKIP (https://nvbugs/5747930)
 test_e2e.py::test_trtllm_serve_example SKIP (https://nvbugs/5747938)

tests/unittest/llmapi/apps/_test_openai_mmencoder.py

Lines changed: 3 additions & 1 deletion
@@ -5,6 +5,7 @@
 import pytest
 import requests
 import yaml
+from utils.llm_data import llm_models_root

 from ..test_llm import get_model_path
 from .openai_server import RemoteMMEncoderServer
@@ -69,7 +70,8 @@ def async_client(server: RemoteMMEncoderServer):
 def test_multimodal_content_mm_encoder(client: openai.OpenAI, model_name: str):

     content_text = "Describe the natural environment in the image."
-    image_url = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
+    image_url = str(llm_models_root() / "multimodals" / "test_data" /
+                    "seashore.png")
     messages = [{
         "role":
         "user",
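The fix swaps a network fetch from huggingface.co for a file under the shared offline data root, so the test no longer depends on external connectivity. A minimal sketch of that pattern, assuming a hypothetical resolver (the real `utils.llm_data.llm_models_root` helper may be implemented differently) that reads an `LLM_MODELS_ROOT` environment variable:

```python
import os
from pathlib import Path


def llm_models_root() -> Path:
    """Hypothetical stand-in for utils.llm_data.llm_models_root.

    Assumes the offline test assets live under a directory named by the
    LLM_MODELS_ROOT environment variable; the default here is a placeholder,
    not the actual location used in CI.
    """
    return Path(os.environ.get("LLM_MODELS_ROOT", "/data/llm-models"))


# Build the offline asset path exactly as the patched test does.
image_url = str(llm_models_root() / "multimodals" / "test_data" / "seashore.png")
print(image_url)
```

Keeping the variable name `image_url` means the rest of the test body is untouched; only the source of the asset moves from a remote URL to the local mirror.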
