[BugFix][Core] Fix a bug running multi-modal with ascend_scheduler #3675
wangxiyuan merged 1 commit into vllm-project:main from
Conversation
Code Review
This PR fixes a bug when running multi-modal models with AscendScheduler. The fix involves renaming configuration parameters in AscendSchedulerConfig to avoid conflicts with the base SchedulerConfig from vLLM. This is a good approach. The implementation looks correct, but I found that one of the new parameter names is misleading, which could lead to confusion and incorrect usage. I've provided a suggestion to improve the naming for better clarity and maintainability.
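The conflict the review describes can be sketched generically: a platform-specific config subclasses the upstream dataclass and redeclares an inherited field under the same name with a different default. A minimal Python sketch of the pattern, with a hypothetical field name (not the actual vLLM parameter):

```python
from dataclasses import dataclass


@dataclass
class SchedulerConfig:
    # Stand-in for vLLM's base scheduler config; the field name is
    # hypothetical, chosen only to illustrate the clash.
    enable_feature: bool = True


@dataclass
class AscendSchedulerConfig(SchedulerConfig):
    # Bug pattern: same field name as the base class but a different
    # default, so shared code paths silently see the overridden value.
    enable_feature: bool = False


print(SchedulerConfig().enable_feature)        # True
print(AscendSchedulerConfig().enable_feature)  # False
```

Any code written against the upstream default then misbehaves when the platform subclass is in use, which is why the review suggests keeping the names distinct (or, as the PR ultimately does, keeping the defaults identical).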
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
@Csrayz can you take a look at this change?
Today I found that the meaning of these two parameters is aligned with vLLM. The problem lies in the default values; we should just align the default values with vLLM. cc @Csrayz @wangxiyuan
…llm-project#3675) This PR fixes a bug in running multi-modal models with AscendScheduler. The bug was introduced by PR vllm-project#2372, which used the same parameter names as vLLM with different default values. The fix changes the default values of these two parameters to align with vLLM.
- vLLM version: v0.11.0rc3
- vLLM main: vllm-project/vllm@17c540a
Signed-off-by: hw_whx <wanghexiang7@huawei.com>
Co-authored-by: hw_whx <wanghexiang7@huawei.com>
Signed-off-by: luolun <luolun1995@cmbchina.com>
This PR fixes a bug in running multi-modal models with AscendScheduler. The bug was introduced by PR #2372, which used the same parameter names as vLLM with different default values. The error is as follows:
Details
The fix changes the default values of these two parameters to align with vLLM. Please take a look: @Csrayz @frankie-ys @xueliangyang-oeuler @wangxiyuan
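The fix direction described above, aligning the subclass defaults with vLLM, can be sketched as follows. The field name is hypothetical, and the closing loop is one possible guard against future default drift, not something taken from the actual PR:

```python
from dataclasses import dataclass, fields


@dataclass
class SchedulerConfig:
    # Stand-in for vLLM's base config with its upstream default.
    enable_feature: bool = True


@dataclass
class AscendSchedulerConfig(SchedulerConfig):
    # After the fix: the redeclared default matches the base class,
    # so behavior is identical wherever the base default is assumed.
    enable_feature: bool = True


# Possible regression guard: every field shared with the base class
# must keep the same default as upstream.
base_defaults = {f.name: f.default for f in fields(SchedulerConfig)}
for f in fields(AscendSchedulerConfig):
    if f.name in base_defaults:
        assert f.default == base_defaults[f.name], f"default drift: {f.name}"
```

A guard like this would catch the original bug at import time rather than surfacing it as a runtime failure in multi-modal scheduling.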