[Bugfix][PD] Fix non-working disaggregated prefill #2374
Code Review
This pull request fixes a bug that caused disaggregated prefill to hang in vllm-ascend. The change introduces version-specific logic to handle differences in the ModelRunnerOutput
API between vLLM versions. My review found a potential issue with the version-checking mechanism: it relies on an exact version match, which is brittle and could lead to crashes on other vLLM versions.
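The brittleness the review describes can be illustrated with a small sketch. The function names and the version number `0.10.1` here are hypothetical, not the actual vllm-ascend code: an exact-match check misclassifies every release it was not written for, while a minimum-version comparison keeps working as new releases ship.

```python
def parse_version(v: str) -> tuple:
    # Parse a "major.minor.patch" string into a comparable tuple;
    # pre-release suffixes are ignored for brevity.
    return tuple(int(part) for part in v.split(".")[:3])

def has_new_output_api_brittle(current: str) -> bool:
    # Exact match: True only for one specific release, so any newer
    # (or older-but-compatible) vLLM falls into the wrong branch.
    return current == "0.10.1"

def has_new_output_api(current: str) -> bool:
    # Range check: every release at or above the release that
    # changed the API is treated as "new".
    return parse_version(current) >= parse_version("0.10.1")
```

With `current = "0.10.2"`, the brittle check returns `False` and would route a newer vLLM through the old-API code path, while the range check returns `True`. In practice, a library such as `packaging.version` handles pre-release tags more robustly than this hand-rolled parser.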
Signed-off-by: CaveNightingale <[email protected]>
Codecov Report: ✅ All modified and coverable lines are covered by tests.
@@ Coverage Diff @@
## main #2374 +/- ##
=======================================
Coverage ? 76.18%
=======================================
Files ? 120
Lines ? 13532
Branches ? 0
=======================================
Hits ? 10310
Misses ? 3222
Partials ? 0
Quick merge to recover PD function. The CI failure is not related to this PR.
What this PR does / why we need it?
Mainline vLLM fixed its disaggregated prefill in vllm-project/vllm#22598, but it still does not work in vllm-ascend.
Concretely, on Ascend devices, decoder instances crashed before vLLM's fix and hang after it.
This patch makes disaggregated prefill work.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Qwen3-0.6B in a 1P1D setup (one prefill instance, one decode instance) with tp=1, dp=1.