
Conversation

@Potabk Potabk (Collaborator) commented Sep 30, 2025

What this PR does / why we need it?

Add a Disaggregated Prefill (prefill-decode disaggregation) tutorial for DeepSeek-V3.2 (dpsk-v32) to the documentation.

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: wangli <[email protected]>

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description when writing the commit message, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally, as described in the Contributing and Testing guides; a rough sketch of such a run follows.
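
As an illustration only, a local pre-merge run could look like the following; the exact commands live in the project's Contributing and Testing guides, and the use of pre-commit and pytest here is an assumption, not the documented workflow.

```bash
# Hypothetical local checks; consult the Contributing and Testing guides for
# the commands the project actually prescribes.
pip install pre-commit pytest      # assumed tooling, not confirmed by this PR
pre-commit run --all-files         # run configured lint/format hooks, if any
pytest tests/ -q                   # run the test suite; the path is illustrative
```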

@github-actions github-actions bot added the documentation label (Improvements or additions to documentation) on Sep 30, 2025
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request adds a new documentation page for setting up prefill-decode disaggregation with the DeepSeek v3.2 model. The new tutorial provides step-by-step instructions for environment verification, rank table generation, and service deployment. I've identified a critical syntax error in the prefiller deployment script and a likely configuration omission in the decoder deployment script that would prevent users from successfully following the tutorial. The suggested changes correct these issues.

--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.92 \
--additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}'

critical

The vllm serve command is missing a line continuation character \ at the end of this line. This will cause a shell syntax error and prevent the command from running correctly.

Suggested change
--additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}'
--additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}' \
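
For context, here is a minimal sketch of what the corrected prefiller invocation could look like once every non-final line carries a continuation character; the model path, port, and served model name are hypothetical placeholders, and only the flags quoted in the diff above are taken from the tutorial.

```bash
# Hypothetical prefiller launch; every line except the last must end with "\"
# so the shell parses a single vllm serve command. Path, port, and served
# name are illustrative placeholders.
vllm serve /path/to/DeepSeek-V3.2 \
  --port 8100 \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --gpu-memory-utilization 0.92 \
  --additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}' \
  --served-model-name deepseek-v3.2
```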

--enable-expert-parallel \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.9 \

high

The decoder node configuration is missing the --additional-config option that enables torchair graph mode, which is enabled on the prefiller node. For consistency and to ensure correct distributed operation, the decoder should also have graph mode enabled.

Suggested change
--gpu-memory-utilization 0.9 \
--gpu-memory-utilization 0.9 \
--additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}' \
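
Analogously, a decoder launch with the suggested flag applied might look like the sketch below; as before, the model path, port, and served model name are assumptions, while the quoted flags are kept as shown in the diff.

```bash
# Hypothetical decoder launch with torchair graph mode enabled to match the
# prefiller. Path, port, and served name are illustrative placeholders.
vllm serve /path/to/DeepSeek-V3.2 \
  --port 8200 \
  --enable-expert-parallel \
  --trust-remote-code \
  --no-enable-prefix-caching \
  --gpu-memory-utilization 0.9 \
  --additional-config '{"torchair_graph_config":{"enabled":true,"graph_batch_sizes":[16]}}' \
  --served-model-name deepseek-v3.2
```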

Signed-off-by: wangli <[email protected]>