
[CI] Modify some CI test cases to run on L4 environment to reduce H100 resource usage.#1543

Merged
hsliuustc0106 merged 10 commits into vllm-project:main from yenuo26:nightly on Feb 28, 2026

Conversation

@yenuo26 (Contributor) commented on Feb 27, 2026:


Purpose

Modify some CI test cases to run on L4 environment to reduce H100 resource usage.

Test Plan

1. Test the benchmark test case and the abort test case:

   /workspace/.venv/bin/python -m pytest -sv tests/benchmarks/test_serve_cli.py tests/engine/test_async_omni_engine_abort.py --html=report.html --self-contained-html

2. Test the qwen2.5 example test case: run in CI.
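One of the commits in this PR updates the nightly test script to run multiple pytest commands and capture each exit status. A minimal sketch of that pattern (not the actual script; `true`/`false` stand in for the real pytest invocations listed above):

```shell
# Hypothetical sketch, not the actual nightly script: run several test
# commands and keep the first non-zero exit status, so a later success
# does not mask an earlier failure. `true`/`false` stand in for the
# real pytest invocations from the test plan.
overall=0
for cmd in "true" "false" "true"; do
    $cmd || overall=$?
done
# The real script would end with `exit "$overall"`.
echo "overall=$overall"
```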

Test Result

1. Benchmark test case and abort test case:

   Result | Test | Duration
   Passed | tests/benchmarks/test_serve_cli.py::test_bench_serve_chat[omni_server0] | 00:02:33
   Passed | tests/engine/test_async_omni_engine_abort.py::test_abort | 00:00:59

2. qwen2.5 example test case: (screenshot of the CI run)

Essential Elements of an Effective PR Description Checklist

- [ ] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
- [ ] The test plan. Please provide the test scripts & test commands. Please state the reasons if your code does not require additional test scripts. For test file guidelines, please check the [test style doc](https://docs.vllm.ai/projects/vllm-omni/en/latest/contributing/ci/tests_style/).
- [ ] The test results. Please paste the results comparison before and after, or the e2e results.
- [ ] (Optional) The necessary documentation update, such as updating `supported_models.md` and `examples` for a new model. **Please run `mkdocs serve` to sync the documentation editions to `./docs`.**
- [ ] (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


yenuo26 and others added 2 commits February 27, 2026 16:26
- Updated the nightly test script to handle multiple pytest commands and capture exit statuses.
- Changed model from "Qwen/Qwen3-Omni-30B-A3B-Instruct" to "Qwen/Qwen2.5-Omni-7B" in benchmark tests.
- Updated stage configuration file for qwen2.5-omni.
- Adjusted prompt in the online serving test to specify a word limit for the answer.

Signed-off-by: yenuo26 <410167048@qq.com>

@chatgpt-codex-connector (bot) left a comment:

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 291364629f


- Consolidated the Benchmark & Engine Test steps in both test-merge.yml and test-ready.yml.
- Changed the agent queue to "gpu_4_queue" and updated the Docker plugin configuration for better resource management.
- Removed the deprecated stage configuration file for Qwen3 Omni Thinker.

Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
@Gaohan123 added the `ready label to trigger buildkite CI` label on Feb 27, 2026
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: yenuo26 <410167048@qq.com>
- Set mm_processor_cache_gb to 0 in qwen2_5_omni_ci.yaml, qwen2_5_omni_multiconnector.yaml, and qwen2_5_omni.yaml.
- Removed skip marker from test_qwen2_5_omni.py to enable the test.

Signed-off-by: yenuo26 <410167048@qq.com>
@lishunyang12 (Contributor) left a comment:

Left a couple of comments. The H100 -> L4 migration itself makes sense, but a few things need attention.

engine_output_type: latent
enable_prefix_caching: false
max_num_batched_tokens: 32768
mm_processor_cache_gb: 0
yenuo26 (Contributor, Author) replied:

Please see #1534 for the reasoning behind this change.

Contributor replied:

I saw #1534, makes sense for the CI config. But this same change is also added to the production stage configs (qwen2_5_omni.yaml and qwen2_5_omni_multiconnector.yaml) — disables the mm processor cache for all users, not just CI. Was that intentional? If it is only needed to work around an L4 memory constraint, keep it in the CI configs only.
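A sketch of what the reviewer is suggesting, assuming the file names quoted in this thread: keep the override in the CI stage config only and leave the production configs at their defaults.

```yaml
# tests/e2e/stage_configs/qwen2_5_omni_ci.yaml — CI-only override
# (sketch; all other keys omitted)
mm_processor_cache_gb: 0   # works around the L4 memory constraint in CI

# qwen2_5_omni.yaml and qwen2_5_omni_multiconnector.yaml (production)
# would keep the default cache size instead of pinning it to 0.
```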

Collaborator replied:

We give accuracy higher priority here.

engine_output_type: latent
enable_prefix_caching: false
max_num_batched_tokens: 32768
mm_processor_cache_gb: 0
yenuo26 (Contributor, Author) replied:

Please see #1534 for the reasoning behind this change.


- label: "Benchmark & Engine Test with H100"
timeout_in_minutes: 15
- label: "Benchmark & Engine Test"
Contributor commented:

The old config had `timeout_in_minutes: 15` at the Buildkite level. The inner `timeout 15m` only kills the bash process; if the Docker pull or container startup hangs, Buildkite will wait forever. Please add `timeout_in_minutes` back.
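A sketch of the fix being requested, based on the diff lines quoted above (all other step attributes omitted):

```yaml
- label: "Benchmark & Engine Test"
  # Buildkite-level timeout: also covers the Docker image pull and
  # container startup, which the inner `timeout 15m` wrapped around
  # the test command cannot reach.
  timeout_in_minutes: 15
```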



@pytest.mark.skip(reason="There is a known issue with stream error.")
@pytest.mark.advanced_model
Contributor commented:

Which fix resolved the stream error? Worth adding a comment or linking the PR in the commit message so this does not get re-skipped later.
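A hedged sketch of this suggestion: while a test is skipped, link the tracking issue in the skip reason, and when re-enabling it, leave a short note so the context survives. The test name and issue references below are placeholders, not taken from this PR.

```python
import pytest

# Placeholder names; only the pattern matters.
# While skipped, the reason would link the tracking issue:
#   @pytest.mark.skip(reason="Known stream error, see issue <link>")
# After the fix, keep the breadcrumb as a comment instead:
@pytest.mark.advanced_model
def test_qwen2_5_omni_example():
    # Re-enabled after the stream error was fixed (see <PR link>);
    # re-skip with a linked reason if it regresses.
    pass
```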


models = ["Qwen/Qwen3-Omni-30B-A3B-Instruct"]
stage_configs = [str(Path(__file__).parent.parent / "e2e" / "stage_configs" / "qwen3_omni_ci.yaml")]
models = ["Qwen/Qwen2.5-Omni-7B"]
Contributor commented:

Switching from Qwen3-30B to Qwen2.5-7B means benchmark numbers are no longer comparable across runs. If this test is meant to track perf regressions over time, consider keeping a Qwen3 benchmark on H100 (even if less frequent) alongside this L4 one.
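One way to sketch this suggestion with pytest parametrization: keep both models in the file and gate the large one behind a marker so the L4 queue skips it while a less frequent H100 job keeps the benchmark series comparable. The `nightly_h100` marker name is hypothetical, not from this PR.

```python
import pytest

# Both models stay parametrized; the 30B model is gated behind a
# hypothetical `nightly_h100` marker so the L4 queue deselects it
# (e.g. `pytest -m "not nightly_h100"`) while a slower H100 schedule
# still runs it and preserves the perf-regression history.
MODELS = [
    pytest.param("Qwen/Qwen2.5-Omni-7B", id="l4"),
    pytest.param(
        "Qwen/Qwen3-Omni-30B-A3B-Instruct",
        marks=pytest.mark.nightly_h100,  # hypothetical marker
        id="h100",
    ),
]

@pytest.mark.parametrize("model", MODELS)
def test_bench_serve_chat(model):
    ...
```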

@hsliuustc0106 (Collaborator) left a comment:

lgtm

engine_output_type: latent
enable_prefix_caching: false
max_num_batched_tokens: 32768
mm_processor_cache_gb: 0
Collaborator replied:

We give accuracy higher priority here.

@hsliuustc0106 merged commit cd2234a into vllm-project:main on Feb 28, 2026
7 checks passed