
[Bugfix] fix ds3.2 dcp mtp #7617

Open
weiguihua2 wants to merge 2 commits into vllm-project:main from weiguihua2:main

Conversation

@weiguihua2 weiguihua2 (Collaborator) commented on Mar 25, 2026

What this PR does / why we need it?

Fixed an issue that occurs when DCP is used together with MTP in the ds3.2 scenario.

Does this PR introduce any user-facing change?

No

How was this patch tested?

@github-actions (Contributor) commented:

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces several bug fixes aimed at enhancing the stability and correctness of the vllm_ascend framework, particularly concerning distributed processing (DCP, MTP) and context parallel (PCP) operations. The changes address specific edge cases and potential runtime errors related to token alignment, device management for sequence lengths, and robust access to attention metadata, ensuring smoother execution in various distributed inference setups.

Highlights

  • Context Parallel Alignment Optimization: Modified the sparse flash attention context parallel module to conditionally skip token alignment when the parallel context size (pcp_size) is 1, optimizing or correcting behavior for single-node or non-parallel scenarios.
  • Device Compatibility for Sequence Lengths: Ensured that sequence length data (ori_seq_len) is explicitly moved to the CPU device during the proposal phase in the Eagle proposer, preventing potential device-related errors.
  • Robust Metadata Access: Added a safety check to verify the existence of the decode attribute within attn_metadata before attempting to access cp_seq_len, improving robustness against missing metadata in specific configurations.
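
The three highlights above reduce to a small set of code patterns. The sketch below is a minimal illustration only, with hypothetical function boundaries (`_read_cp_seq_len`, `_snapshot_seq_lens`, and `_round_up_to_bucket` are stand-ins, not the actual vllm-ascend code):

```python
import torch


def _align_to_graph_bucket_tokens(self, num_tokens: int) -> int:
    # Highlight 1: when context parallelism is effectively disabled
    # (pcp_size == 1), skip bucket alignment and return the count unchanged.
    if self.pcp_size == 1:
        return num_tokens
    # _round_up_to_bucket is a hypothetical stand-in for the real alignment logic.
    return self._round_up_to_bucket(num_tokens)


def _read_cp_seq_len(attn_metadata):
    # Highlight 3: guard against a missing `decode` block before touching
    # cp_seq_len, so configurations without decode metadata do not raise.
    if getattr(attn_metadata, "decode", None) is not None:
        return attn_metadata.decode.cp_seq_len
    return None


def _snapshot_seq_lens(attn_metadata_i, batch_size: int) -> torch.Tensor:
    # Highlight 2, as proposed in this PR: clone the first `batch_size`
    # sequence lengths and copy them to the CPU (see the review discussion
    # below about keeping this tensor on the device instead).
    return attn_metadata_i.seq_lens[:batch_size].clone().to(device="cpu")
```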


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

What this PR does / why we need it?

This pull request introduces several internal adjustments related to attention mechanisms and speculative decoding. It optimizes the _align_to_graph_bucket_tokens function in sfa_cp.py to avoid unnecessary alignment when pcp_size is 1. Additionally, in eagle_proposer.py, it adds a null check before accessing attn_metadata.decode.cp_seq_len to prevent potential AttributeErrors. However, a critical issue was identified where ori_seq_len is explicitly moved to the CPU, which will likely cause a device mismatch error when this tensor is subsequently used in operations with GPU tensors within self.runner.pcp_manager._get_cp_local_seq_lens. This needs to be corrected to ensure ori_seq_len remains on the GPU.

Does this PR introduce any user-facing change?

No, this PR focuses on internal optimizations and bug fixes that do not introduce any user-facing changes.

How was this patch tested?

Existing unit and integration tests should cover these changes. Additional tests might be needed to specifically verify the fix for the AttributeError and to ensure the device mismatch issue is resolved.

```diff
 )
 num_accept_tokens = query_lens_d.to(self.device) - num_reject_tokens
-ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone()
+ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone().to(device="cpu")
```
gemini-code-assist bot left an inline review comment (severity: critical):

Moving ori_seq_len to the CPU with .to(device="cpu") will likely cause a device mismatch error. This tensor is used in self.runner.pcp_manager._get_cp_local_seq_lens, which performs operations involving tensors created on the default GPU device (e.g., via torch.arange). Performing operations between a CPU tensor and a GPU tensor will lead to a runtime error. ori_seq_len should remain on the GPU.

Suggested change:

```diff
-ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone().to(device="cpu")
+ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone()
```
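
For context, the failure mode the bot describes is generic to PyTorch: an elementwise operation that mixes a CPU tensor with an accelerator tensor raises a RuntimeError. A minimal, repository-independent illustration (CUDA shown here; the same applies to NPU tensors):

```python
import torch

# Minimal illustration of the device-mismatch error described above.
if torch.cuda.is_available():
    seq_lens_cpu = torch.tensor([8, 16, 32])   # lives on the CPU
    offsets = torch.arange(3, device="cuda")   # lives on the accelerator
    try:
        _ = seq_lens_cpu + offsets             # mixed-device op
    except RuntimeError as err:
        # Typically: "Expected all tensors to be on the same device ..."
        print(err)
```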

Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
@weiguihua2 weiguihua2 added the ready (read for review) and ready-for-test (start test by label for PR) labels on Mar 25, 2026
```diff
 )
 num_accept_tokens = query_lens_d.to(self.device) - num_reject_tokens
-ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone()
+ori_seq_len = attn_metadata_i.seq_lens[:batch_size].clone().to(device="cpu")
```
A repository collaborator left an inline review comment:

Please refactor the MLA attention metadata and SFA metadata instead: add a seq_lens_cpu field to them to avoid the frequent H2D synchronization.
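
A minimal sketch of the suggested refactor, assuming a simplified metadata class (the names `AttentionDecodeMetadata` and `build_decode_metadata` are hypothetical, not the actual MLA/SFA metadata types):

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class AttentionDecodeMetadata:
    """Hypothetical stand-in for the MLA/SFA decode metadata."""
    seq_lens: torch.Tensor                        # device tensor consumed by kernels
    seq_lens_cpu: Optional[torch.Tensor] = None   # host copy, filled once at build time


def build_decode_metadata(seq_lens: torch.Tensor) -> AttentionDecodeMetadata:
    # Copy the sequence lengths to host memory once when the metadata is built,
    # so later consumers (e.g. the MTP/Eagle proposer) read seq_lens_cpu instead
    # of triggering a blocking device sync on every step.
    return AttentionDecodeMetadata(
        seq_lens=seq_lens,
        seq_lens_cpu=seq_lens.detach().to("cpu"),
    )
```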


Labels

ready (read for review), ready-for-test (start test by label for PR)


2 participants