[bugfix] Fixed block_size incorrect setting issue in dsv3.2 (#7630)
MengqingCao merged 5 commits into vllm-project:main
Conversation
Signed-off-by: Wang Kunpeng <1289706727@qq.com>
Code Review
This pull request refactors the block size update mechanism across vllm_ascend/platform.py and vllm_ascend/utils.py. The update_block_size_for_backend method in platform.py has been simplified to a pass statement with a TODO, moving its logic elsewhere. The refresh_block_size function in utils.py now includes specific handling for hybrid models and revised conditions for setting the block size to 128. However, a critical issue has been identified: the refactoring has removed the check for user_specified_block_size from the block size determination process. This omission could lead to user-defined block sizes being unintentionally overridden, causing unexpected behavior, and indicates an incomplete centralization of block size selection logic.
# TODO: NPU still sets block_size in check_and_update_config.
# Move that logic here so block_size is chosen by the backend.
pass
The update_block_size_for_backend method has been refactored to a pass statement with a TODO. While the is_hybrid model logic has been correctly moved to refresh_block_size (which is called by check_and_update_config), the critical check for cache_config.user_specified_block_size has been removed from this call path. This omission means that user-defined block sizes might be unintentionally overridden, leading to unexpected behavior. The TODO also highlights that the block size selection logic is not yet fully centralized, indicating an incomplete refactoring.
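For illustration, here is a minimal sketch of what a centralized `update_block_size_for_backend` could look like once the logic moves there, with the user-specified and hybrid guards restored. The `CacheConfig`/`ModelConfig` classes and the `backend_block_size` parameter are simplified stand-ins, not vLLM's real APIs:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheConfig:
    """Simplified stand-in for vLLM's CacheConfig."""
    block_size: Optional[int] = None
    user_specified_block_size: bool = False


@dataclass
class ModelConfig:
    """Simplified stand-in for vLLM's ModelConfig."""
    is_hybrid: bool = False


def update_block_size_for_backend(cache_config: CacheConfig,
                                  model_config: ModelConfig,
                                  backend_block_size: int) -> None:
    # 1. Never override an explicit --block-size from the user.
    if cache_config.user_specified_block_size:
        return
    # 2. Hybrid attention+mamba models keep their model-specific sizing.
    if model_config.is_hybrid:
        return
    # 3. Otherwise adopt the size the attention backend reports.
    cache_config.block_size = backend_block_size


# A user-specified size is preserved; an unspecified one follows the backend.
user_cfg = CacheConfig(block_size=32, user_specified_block_size=True)
update_block_size_for_backend(user_cfg, ModelConfig(), backend_block_size=128)

auto_cfg = CacheConfig()
update_block_size_for_backend(auto_cfg, ModelConfig(), backend_block_size=128)

print(user_cfg.block_size, auto_cfg.block_size)  # 32 128
```

Keeping all three decisions in one function makes the precedence order explicit and avoids the split responsibility the TODO complains about.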
if model_config.is_hybrid:
    # Hybrid attention+mamba models rely on the model-specific sizing
    # logic rather than the generic platform default.
    return
The refresh_block_size function now correctly handles hybrid models. However, it currently does not respect cache_config.user_specified_block_size. If a user explicitly sets --block-size, this function might still override it to 128 if enable_prefix_caching or enable_chunked_prefill is enabled. This can lead to unexpected behavior and should be addressed to ensure user configuration is prioritized. Please reintroduce the check for user_specified_block_size before applying default logic.
if cache_config.user_specified_block_size:
    # User specified --block-size; keep it.
    return
if model_config.is_hybrid:
    # Hybrid attention+mamba models rely on the model-specific sizing
    # logic rather than the generic platform default.
    return
MengqingCao left a comment
LGTM, thx for this fix!
### What this PR does / why we need it?

vllm-project/vllm#35122 in the vLLM community refactored how `block_size` is updated. As a result, when the user does not specify `--block-size`, dsv3.2 obtains an incorrect `block_size`.

**Root cause, traced through the `block_size` update flow:**

1. In `NPUPlatform`, `check_and_update_config` calls `refresh_block_size` to set `block_size` to 128.
2. During `ModelRunner` initialization, the `self.block_size` attribute is created; at this point `block_size` is still 128. This value is used for operations such as KV-cache initialization.
3. `update_block_size_for_backend` then updates `block_size` to the size set by the attention backend. dsv3.2 breaks here because it has an additional attention backend, `DeepseekV32IndexerBackend`, which is not overridden and reports a `block_size` of 64. Only `vllm_config.cache_config.block_size` is updated; the other copies are left unchanged, so `block_size` becomes inconsistent across the whole network.

**Modification solution:**

Skip `update_block_size_for_backend` and modify `block_size` only in the `check_and_update_config` method, ensuring that all `block_size` values on the whole network are updated consistently. In the future, the `block_size` update logic can be migrated into `update_block_size_for_backend`.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

- vLLM version: v0.18.0
- vLLM main: vllm-project/vllm@ed359c4
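The stale-copy problem described in the root-cause analysis can be reproduced with a minimal, vLLM-free sketch: a runner that copies `block_size` at construction time will not see a later patch to the shared config. The classes below are simplified stand-ins, not vLLM's real `CacheConfig` or model runner:

```python
class CacheConfig:
    """Simplified stand-in for the shared cache config object."""

    def __init__(self, block_size: int):
        self.block_size = block_size


class ModelRunner:
    """Simplified stand-in for the model runner."""

    def __init__(self, cache_config: CacheConfig):
        # Copies the value at construction time; later changes to
        # cache_config.block_size are NOT reflected here.
        self.block_size = cache_config.block_size


cache_config = CacheConfig(block_size=128)  # set by check_and_update_config
runner = ModelRunner(cache_config)          # KV cache sized with 128

# Later, a backend (e.g. an indexer backend) reports 64 and only the
# shared config is patched:
cache_config.block_size = 64

print(runner.block_size, cache_config.block_size)  # 128 64 -> inconsistent
```

This is why the fix settles `block_size` once in `check_and_update_config`, before any component snapshots the value.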