[Bugfix] Fix hidden_states shape mismatch in AscendDraftModelProposer#7602
Potabk wants to merge 6 commits into vllm-project:main from
Conversation
Code Review
This pull request addresses a bug in SpecDecodeBaseProposer by ensuring the pass_hidden_states_to_model parameter is correctly propagated to its superclass, VllmSpecDecodeBaseProposer. The change involves updating the __init__ call to include this parameter and removing the now-redundant local assignment. This resolves a potential hidden state mismatch issue during speculative decoding.
In line with the repository's contribution guidelines, here is a suggested title and summary for this pull request:
Suggested PR Title:
[SpecDecode][BugFix] Correctly initialize speculator to fix hidden_states mismatch

Suggested PR Summary:
### What this PR does / why we need it?
This PR fixes a bug where the `pass_hidden_states_to_model` parameter was not passed to the superclass constructor in `SpecDecodeBaseProposer`. This could cause a mismatch between the draft model's hidden states and the target model's hidden states during speculative decoding, as the superclass was not correctly initialized with this configuration.
The changes are as follows:
- The imported `SpecDecodeBaseProposer` from `vllm` is aliased to `VllmSpecDecodeBaseProposer` to resolve a naming conflict.
- The local `SpecDecodeBaseProposer` now inherits from `VllmSpecDecodeBaseProposer`.
- The `super().__init__` call is updated to correctly pass the `pass_hidden_states_to_model` argument.
- The redundant local assignment of `self.pass_hidden_states_to_model` is removed.
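The change listed above can be sketched as follows. The class and parameter names follow the PR description, but the constructor bodies are simplified stand-ins, not the real vLLM implementations:

```python
# Illustrative sketch of the fix; in the real code, VllmSpecDecodeBaseProposer
# is the aliased import of vllm's SpecDecodeBaseProposer and the constructors
# take many more arguments.

class VllmSpecDecodeBaseProposer:  # stand-in for the aliased vllm import
    def __init__(self, pass_hidden_states_to_model: bool = True):
        self.pass_hidden_states_to_model = pass_hidden_states_to_model

class SpecDecodeBaseProposer(VllmSpecDecodeBaseProposer):
    def __init__(self, pass_hidden_states_to_model: bool = True):
        # Fixed: forward the flag to the superclass instead of assigning
        # self.pass_hidden_states_to_model locally afterwards.
        super().__init__(
            pass_hidden_states_to_model=pass_hidden_states_to_model)

p = SpecDecodeBaseProposer(pass_hidden_states_to_model=False)
print(p.pass_hidden_states_to_model)  # False — the flag now propagates
```

With the flag forwarded, the superclass derives any dependent state (such as `needs_extra_input_slots`) from the value the subclass actually passed.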
### Does this PR introduce _any_ user-facing change?
No, this is an internal bug fix and does not introduce any user-facing changes.
### How was this patch tested?
CI is expected to pass. The fix ensures correct initialization and behavior for speculative decoding when `pass_hidden_states_to_model` is utilized.
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Force-pushed from ad7469d to f3ead10
What this PR does / why we need it?
Running the following script:
There is a bug: AscendDraftModelProposer crashes with a shape mismatch error when assigning target hidden states to the draft model's hidden_states buffer.
Root cause:
vllm-ascend's SpecDecodeBaseProposer incorrectly inherited from vllm's EagleProposer instead of vllm's SpecDecodeBaseProposer. EagleProposer.__init__ hardcodes pass_hidden_states_to_model=True when calling super().__init__, which caused needs_extra_input_slots to be computed as False for all subclasses, regardless of the actual value passed. As a result, AscendDraftModelProposer (which passes pass_hidden_states_to_model=False) was forced into the EAGLE code path in set_inputs_first_pass, where it attempted to copy target hidden states (hidden_size=4096) into a buffer allocated for the draft model (hidden_size=1024).
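The broken inheritance chain can be reproduced with simplified stand-ins. The class names mirror the report, but the bodies are illustrative (in particular, the assumption that `needs_extra_input_slots` is the negation of `pass_hidden_states_to_model` is inferred from the description, not taken from the vLLM source):

```python
class SpecDecodeBaseProposerSketch:  # stand-in for vllm's base proposer
    def __init__(self, hidden_size: int, pass_hidden_states_to_model: bool):
        self.pass_hidden_states_to_model = pass_hidden_states_to_model
        # Assumption from the report: hardcoding True makes this False
        # for every subclass.
        self.needs_extra_input_slots = not pass_hidden_states_to_model
        self.hidden_size = hidden_size  # width of the draft model's buffer

class EagleProposerSketch(SpecDecodeBaseProposerSketch):
    def __init__(self, hidden_size: int,
                 pass_hidden_states_to_model: bool = True):
        # The bug: the flag is hardcoded True, discarding whatever the
        # subclass passed.
        super().__init__(hidden_size, pass_hidden_states_to_model=True)

class AscendDraftModelProposerSketch(EagleProposerSketch):
    def __init__(self):
        # Requests False, but the hardcoded True above wins.
        super().__init__(hidden_size=1024, pass_hidden_states_to_model=False)

    def set_inputs_first_pass(self, target_hidden_size: int):
        if self.pass_hidden_states_to_model:  # wrongly True -> EAGLE path
            if target_hidden_size != self.hidden_size:
                # Mirrors copying a hidden_size=4096 target tensor into a
                # hidden_size=1024 draft buffer.
                raise ValueError(f"shape mismatch: {target_hidden_size} "
                                 f"vs {self.hidden_size}")

proposer = AscendDraftModelProposerSketch()
print(proposer.pass_hidden_states_to_model)  # True, despite passing False
try:
    proposer.set_inputs_first_pass(target_hidden_size=4096)
except ValueError as e:
    print(e)  # shape mismatch: 4096 vs 1024
```

Inheriting directly from the base proposer (as this PR does) lets the False flag reach the base constructor, so the EAGLE-only copy never runs for the draft-model proposer.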
Does this PR introduce any user-facing change?
How was this patch tested?