
Conversation


@Csrayz Csrayz commented Aug 14, 2025

What this PR does / why we need it?

When processing a mix of large and small requests, the TTFT of responses is significantly reduced. Please refer to vllm-project/vllm#10235, which achieves the same effect by simply limiting the number of concurrent partial prefills for long requests. This approach can be applied to both the AscendScheduler (V0) and the vLLM Scheduler (V1), and tests show that TTFT improves significantly when handling such mixed requests. However, this capability is currently missing when the Ascend Scheduler is enabled.
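
The core idea follows vllm-project/vllm#10235: during each scheduling step, only a bounded number of "long" prompts are allowed to consume the prefill token budget, so short requests can still be scheduled alongside them. A minimal sketch of that gate is shown below; the parameter names mirror the knobs introduced in the upstream PR and are only illustrative, not the exact fields this PR adds to the AscendScheduler.

```python
# Minimal sketch of the long-prompt concurrency gate (not this PR's exact code).
# Names mirror the knobs from vllm-project/vllm#10235; the AscendScheduler
# config fields added by this PR may differ.
from dataclasses import dataclass


@dataclass
class Request:
    num_prompt_tokens: int
    num_computed_tokens: int = 0


def schedule_prefills(waiting: list[Request],
                      token_budget: int,
                      long_prefill_token_threshold: int = 10240,
                      max_long_partial_prefills: int = 1) -> list[Request]:
    """Pick waiting requests to prefill this step, capping concurrent long prompts."""
    scheduled: list[Request] = []
    long_prefills = 0
    for req in waiting:
        remaining = req.num_prompt_tokens - req.num_computed_tokens
        is_long = req.num_prompt_tokens > long_prefill_token_threshold
        if is_long and long_prefills >= max_long_partial_prefills:
            continue  # skip further long prompts, leaving budget for short requests
        num_new = min(remaining, token_budget)
        if num_new <= 0:
            break
        token_budget -= num_new
        req.num_computed_tokens += num_new
        scheduled.append(req)
        if is_long:
            long_prefills += 1
    return scheduled
```

With max_long_partial_prefills=1, at most one medium or large prompt occupies the budget per step, which corresponds to the "AscendScheduler (long prompt concurrency set to 1)" configuration measured below.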

This benchmark used the Qwen3-8B model, with a context length of 128K, running on a single card.

Regarding dataset selection, the sharegpt_clean dataset is used, with its content concatenated and cropped. Small requests (token=50) and medium requests (token=10240) were constructed; there were also large requests (token=102400), but these were ignored, since with a prefill-first scheduling strategy max_num_batched_tokens would never be set that large. When launching vLLM, max_num_batched_tokens=22000 was set. This budget can accommodate two medium requests plus some short requests, reflecting an extreme scenario where the budget is almost entirely occupied by longer requests.

Next, 990 small requests and 100 medium requests were mixed into one load scenario (hereinafter referred to as 10%); 5% and 1% scenarios were generated in the same way. A rough sketch of this construction is shown below.
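
As an illustration only (the actual script is not part of this PR; the tokenizer path, corpus file name, random offsets, and the JSONL schema for --dataset-name "custom" are assumptions to be checked against benchmark_serving.py):

```python
# Hypothetical sketch: build the 10% mixed workload (990 small + 100 medium requests)
# from a concatenated sharegpt_clean corpus. Paths and the JSONL schema are assumptions.
import json
import random

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/model/Qwen3-8B/")

with open("sharegpt_clean_concat.txt") as f:       # assumed pre-concatenated corpus
    corpus_ids = tokenizer.encode(f.read())


def make_prompt(num_tokens: int) -> str:
    """Crop a random slice of the corpus to roughly num_tokens tokens."""
    start = random.randrange(0, len(corpus_ids) - num_tokens)
    return tokenizer.decode(corpus_ids[start:start + num_tokens])


requests = [make_prompt(50) for _ in range(990)] + [make_prompt(10240) for _ in range(100)]
random.shuffle(requests)

with open("mixed_10pct.jsonl", "w") as f:          # schema assumed for --dataset-name custom
    for prompt in requests:
        f.write(json.dumps({"prompt": prompt, "output_len": 64}) + "\n")
```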

Performance tests were conducted separately with the vLLM Scheduler, the AscendScheduler, and the AscendScheduler with long-prompt concurrency set to 1. The benchmark results are as follows.

[Benchmark results screenshot: PixPin_2025-08-14_15-21-59]

python benchmarks/benchmark_serving.py \
  --host "xx" \
  --port 80 \
  --model /model/Qwen3-8B/ \
  --dataset-name "custom" \
  --dataset-path ${test_case} \
  --metric-percentiles 80,85,90,95,99 \
  --max-concurrency 40
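
For reference, enabling the Ascend scheduler together with such a limit from Python might look like the sketch below. The additional_config / ascend_scheduler_config pattern follows the vllm-ascend convention, but the two limit keys are placeholders; the actual field names added by this PR live in vllm_ascend/core/schedule_config.py.

```python
# Sketch: serving Qwen3-8B with the Ascend scheduler and a cap on concurrent
# long-prompt prefills. "max_long_partial_prefills" and
# "long_prefill_token_threshold" are placeholder keys; check
# vllm_ascend/core/schedule_config.py for the fields this PR actually adds.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/model/Qwen3-8B/",
    max_num_batched_tokens=22000,        # the prefill budget used in the benchmark above
    additional_config={
        "ascend_scheduler_config": {
            "enabled": True,
            "max_long_partial_prefills": 1,
            "long_prefill_token_threshold": 10240,
        },
    },
)

print(llm.generate(["Hello, world"], SamplingParams(max_tokens=16)))
```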


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mechanism to limit concurrent partial prefills for long prompts in the AscendScheduler, which is a great feature for improving Time To First Token (TTFT) in mixed-load scenarios. The implementation looks solid and correctly follows the logic described. I've found one high-severity issue regarding configuration validation that should be addressed.


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@github-actions github-actions bot added the documentation (Improvements or additions to documentation) label Aug 14, 2025
@xueliangyang-oeuler

@wangxiyuan Could you please take a look? If anything needs adjustment, it can be fixed promptly.


@xueliangyang-oeuler xueliangyang-oeuler left a comment


Nice work.


@frankie-ys frankie-ys left a comment


good!


@frankie-ys frankie-ys left a comment


good job

@wangxiyuan

Thanks for the PR! Can you rebase to main to make CI pass?


This pull request has conflicts, please resolve those before we can evaluate the pull request.

Csrayz added 6 commits August 25, 2025 11:48
Signed-off-by: Csrayz <[email protected]>
Modify assert according to code review comments

Signed-off-by: Csrayz <[email protected]>

codecov bot commented Aug 25, 2025

Codecov Report

❌ Patch coverage is 87.50000% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.65%. Comparing base (0767d51) to head (2fe1204).
⚠️ Report is 36 commits behind head on main.

File with missing lines: vllm_ascend/core/schedule_config.py (patch coverage 68.75%, 5 lines missing ⚠️)
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2372      +/-   ##
==========================================
- Coverage   78.49%   72.65%   -5.84%     
==========================================
  Files         132      147      +15     
  Lines       17806    21845    +4039     
==========================================
+ Hits        13976    15871    +1895     
- Misses       3830     5974    +2144     
Flag: unittests | Coverage: 72.65% <87.50%> | Δ: -5.84% ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@Csrayz

Csrayz commented Aug 25, 2025

pipeline [multicard e2e test (linux-aarch64-a2-2, v0.10.1.1) (pull_request)] Failing after 93m

Error: (VllmWorker TP0 pid=64935) ERROR 08-25 07:26:59 [multiproc_executor.py:559] [ERROR] 2025-08-25-07:26:58 (PID:64935, Device:0, RankID:-1) ERR02200 DIST call hccl api failed.

Run again?

@Csrayz

Csrayz commented Aug 27, 2025

> pipeline [multicard e2e test (linux-aarch64-a2-2, v0.10.1.1) (pull_request)] Failing after 93m
>
> Error: (VllmWorker TP0 pid=64935) ERROR 08-25 07:26:59 [multiproc_executor.py:559] [ERROR] 2025-08-25-07:26:58 (PID:64935, Device:0, RankID:-1) ERR02200 DIST call hccl api failed.
>
> Run again?

Can this pipeline be rerun? Based on the error, the pipeline failure seems unrelated to code changes. @wangxiyuan

@Csrayz

Csrayz commented Aug 29, 2025

This pipeline has various issues, and these non-code-related errors keep causing it to fail. The same code, which previously failed the multi-card e2e test, now fails because the single-card e2e test fails to start after rerunning the pipeline.

Labels: documentation (Improvements or additions to documentation), module:tests