[CORE] concurrent partial prefills #2372


Open
wants to merge 4 commits into main
Conversation

@Csrayz Csrayz commented Aug 14, 2025

What this PR does / why we need it?

When processing a mix of large and small requests, the TTFT of responses is significantly reduced. Please refer to vllm-project/vllm#10235, which achieves the same effect by simply limiting the number of concurrent partial prefills for long requests. This solution can be applied to both the AscendScheduler (V0) and the vLLM Scheduler (V1), and tests show that TTFT improves significantly when handling such mixed requests. However, this capability is currently missing when the AscendScheduler is enabled.
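
The core idea, sketched below in simplified Python, is that a request whose remaining prompt exceeds a threshold counts as "long", and only a bounded number of long requests may be in a partial-prefill state at once, while short requests are never throttled. The `LongPrefillBudget` class and the `max_long_partial_prefills` / `long_prefill_token_threshold` names are illustrative assumptions, not the exact interface added by this PR.

```python
# Simplified sketch of the limiting logic (names are illustrative, not this PR's API).
from dataclasses import dataclass


@dataclass
class LongPrefillBudget:
    long_prefill_token_threshold: int = 8192  # prompts longer than this count as "long"
    max_long_partial_prefills: int = 1        # concurrent long prefills allowed
    _in_flight: int = 0

    def can_schedule(self, remaining_prompt_tokens: int) -> bool:
        # Short prompts are never throttled; long ones only run while slots remain.
        if remaining_prompt_tokens <= self.long_prefill_token_threshold:
            return True
        return self._in_flight < self.max_long_partial_prefills

    def on_prefill_started(self, remaining_prompt_tokens: int) -> None:
        if remaining_prompt_tokens > self.long_prefill_token_threshold:
            self._in_flight += 1

    def on_prefill_finished(self, prompt_tokens: int) -> None:
        if prompt_tokens > self.long_prefill_token_threshold:
            self._in_flight = max(0, self._in_flight - 1)
```

Because long prompts no longer monopolize the max_num_batched_tokens budget, short requests that arrive in between can start their prefill sooner, which is what drives the TTFT improvement.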

This benchmark used the Qwen3-8B model, with a context length of 128K, running on a single card.

For the dataset, sharegpt_clean is used, with its content concatenated and then cropped. Small requests of 50 tokens and medium requests of 10240 tokens were constructed (large requests of 102400 tokens were also built, but they were ignored because, under the prefill-first scheduling strategy, max_num_batched_tokens would never be set that large). When loading vLLM, max_num_batched_tokens was set to 22000. This budget can hold two medium requests plus a few short ones, reflecting an extreme scenario where it is almost entirely occupied by longer requests.

Next, 990 small requests and 100 medium requests are mixed into one load scenario (hereinafter referred to as 10%), and load scenarios with 5% and 1% medium requests are generated in the same way.
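
As a rough sketch, such a mixed dataset can be built by tokenizing the concatenated corpus and slicing it at fixed token offsets. The helper below and the corpus path are illustrative, not the exact script used for this benchmark.

```python
# Sketch: build fixed-length prompts by concatenating sharegpt_clean text and
# cropping it at token boundaries. Paths and helper names are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/model/Qwen3-8B/")
corpus = open("sharegpt_clean_concatenated.txt", encoding="utf-8").read()


def make_prompts(text: str, prompt_len: int, count: int) -> list[str]:
    """Cut `count` prompts of roughly `prompt_len` tokens out of one long text
    (re-tokenization after decode may shift the exact count slightly)."""
    token_ids = tokenizer.encode(text)
    prompts = []
    for i in range(count):
        chunk = token_ids[i * prompt_len : (i + 1) * prompt_len]
        if len(chunk) < prompt_len:
            break
        prompts.append(tokenizer.decode(chunk))
    return prompts


# 10% scenario: 990 small (50-token) and 100 medium (10240-token) requests.
small_prompts = make_prompts(corpus, prompt_len=50, count=990)
medium_prompts = make_prompts(corpus, prompt_len=10240, count=100)
```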

Performance tests were conducted separately with the vLLM Scheduler, the AscendScheduler, and the AscendScheduler with long-prompt concurrency set to 1. The benchmark results are as follows.

(Benchmark results screenshot: PixPin_2025-08-14_15-21-59)

python benchmarks/benchmark_serving.py \
  --host "xx" \
  --port 80 \
  --model /model/Qwen3-8B/ \
  --dataset-name "custom" \
  --dataset-path ${test_case} \
  --metric-percentiles 80,85,90,95,99 \
  --max-concurrency 40
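
For reference, the third configuration (AscendScheduler with long-prompt concurrency capped at 1) might be enabled roughly as sketched below. The `max_long_partial_prefills` and `long_prefill_token_threshold` keys inside `ascend_scheduler_config` are assumed option names for illustration only; check the PR diff for the real ones.

```python
# Sketch: enable AscendScheduler and cap concurrent long-prompt prefills.
# The two keys inside ascend_scheduler_config are assumed names for illustration.
from vllm import LLM

llm = LLM(
    model="/model/Qwen3-8B/",
    max_model_len=131072,
    max_num_batched_tokens=22000,
    additional_config={
        "ascend_scheduler_config": {
            "enabled": True,
            "max_long_partial_prefills": 1,         # at most one long prompt prefilling at a time
            "long_prefill_token_threshold": 10240,  # prompts above this count as "long"
        },
    },
)
```

Setting the threshold near the medium request length (10240) means only the medium and large prompts are throttled, while the 50-token requests always schedule immediately.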

Csrayz added 3 commits August 14, 2025 10:20
Implement Concurrent Partial Prefills

Signed-off-by: Csrayz <[email protected]>
Signed-off-by: Csrayz <[email protected]>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a mechanism to limit concurrent partial prefills for long prompts in the AscendScheduler, which is a great feature for improving Time To First Token (TTFT) in mixed-load scenarios. The implementation looks solid and correctly follows the logic described. I've found one high-severity issue regarding configuration validation that should be addressed.


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Modify assert according to code review comments

Signed-off-by: Csrayz <[email protected]>
@github-actions github-actions bot added the `documentation` (Improvements or additions to documentation) label Aug 14, 2025