[Config] add "qwen" as a native eagle3 target supported model #22333
Conversation
Code Review
This pull request adds "qwen" to the list of natively supported target models for Eagle3 speculative decoding. The change simplifies the configuration logic by removing a conditional check that previously enabled Qwen support only when a SpeculatorsConfig was used. The updated code is more direct and improves maintainability. Based on the provided description and test results, this change appears correct and aligns with the goal of broader model support for Eagle3. I have no major concerns with this change.
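For readers skimming the thread, here is a minimal sketch of the kind of gate being changed, assuming the `eagle3_target_supported` list named in the PR description; the function name and surrounding logic are illustrative, not the exact code in `config.py`:

```python
# Sketch (assumed names, not the exact vLLM diff): Eagle3 support is gated
# on the target model's model_type. With "qwen" in the supported list, the
# earlier branch that enabled it only under SpeculatorsConfig goes away.
EAGLE3_TARGET_SUPPORTED = ["llama", "qwen"]

def check_eagle3_target(model_type: str) -> None:
    """Raise if the target model type is not supported by Eagle3."""
    if not any(s in model_type for s in EAGLE3_TARGET_SUPPORTED):
        raise ValueError(
            f"Eagle3 is only supported for {EAGLE3_TARGET_SUPPORTED} targets, "
            f"got {model_type!r}."
        )

check_eagle3_target("qwen3")  # passes now that "qwen" is in the list
```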
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 |
Signed-off-by: lechen <[email protected]>
Could you please share the command used for testing? |
Signed-off-by: LeChen <[email protected]>
Sure! `pytest test_spec_decode.py -v -k qwen3_eagle3`

The acceptance_rate and related metrics were obtained by calling:

```python
# Assumed import path for the metric reader types used below
# (vllm.v1.metrics.reader in recent vLLM versions).
from vllm.v1.metrics.reader import Counter, Vector

# Excerpt from the test body (runs inside a test function, hence the return).
try:
    metrics = spec_llm.get_metrics()
except AssertionError:
    print("Metrics are not supported in the V0 engine.")
    return

total_num_output_tokens = sum(
    len(output.outputs[0].token_ids) for output in spec_outputs
)
num_drafts = 0
num_draft_tokens = 0
num_accepted_tokens = 0
acceptance_counts = [0] * 3
for metric in metrics:
    if metric.name == "vllm:spec_decode_num_drafts":
        assert isinstance(metric, Counter)
        num_drafts += metric.value
    elif metric.name == "vllm:spec_decode_num_draft_tokens":
        assert isinstance(metric, Counter)
        num_draft_tokens += metric.value
    elif metric.name == "vllm:spec_decode_num_accepted_tokens":
        assert isinstance(metric, Counter)
        num_accepted_tokens += metric.value
    elif metric.name == "vllm:spec_decode_num_accepted_tokens_per_pos":
        assert isinstance(metric, Vector)
        for pos in range(len(metric.values)):
            acceptance_counts[pos] += metric.values[pos]

print("-" * 50)
print(f"total_num_output_tokens: {total_num_output_tokens}")
print(f"num_drafts: {num_drafts}")
print(f"num_draft_tokens: {num_draft_tokens}")
print(f"num_accepted_tokens: {num_accepted_tokens}")
# Mean acceptance length: 1 (the verified token) plus the average number
# of accepted draft tokens per draft.
acceptance_length = (1 + num_accepted_tokens / num_drafts) if num_drafts > 0 else 1
print(f"mean acceptance length: {acceptance_length:.2f}")
print("-" * 50)

# Print the acceptance rate at each speculative token position.
for i in range(len(acceptance_counts)):
    acceptance_rate = acceptance_counts[i] / num_drafts if num_drafts > 0 else 0
    print(f"acceptance at token {i}: {acceptance_rate:.2f}")
```
|
Thanks for adding the test, LGTM as long as the relevant tests pass. (Please merge from main to resolve some of the unrelated errors)
Can you make sure |
Sorry, I am not familiar with the process. I will fix it. |
Signed-off-by: LeChen <[email protected]>
This pull request has merge conflicts that must be resolved before it can be merged. |
Signed-off-by: LeChen <[email protected]>
Head branch was pushed to by a user without write access
Signed-off-by: LeChen <[email protected]>
Signed-off-by: LeChen <[email protected]>
Sorry, there is a merge conflict again; can you resolve it? |
Looks like https://buildkite.com/vllm/ci/builds/26496/steps/canvas?sid=0198934d-abd0-4623-ac97-7ab5f3be8cfc is caused by this PR; can you fix it? |
[Config] add "qwen" as a native eagle3 target supported model (vllm-project#22333) Signed-off-by: lechen <[email protected]> Signed-off-by: LeChen <[email protected]>
Thanks for pointing this out. I'll submit a fix in a follow-up PR. |
See #22611 |
[Config] add "qwen" as a native eagle3 target supported model (vllm-project#22333) Signed-off-by: lechen <[email protected]> Signed-off-by: LeChen <[email protected]> Signed-off-by: jingyu <[email protected]>
[Config] add "qwen" as a native eagle3 target supported model (vllm-project#22333) Signed-off-by: lechen <[email protected]> Signed-off-by: LeChen <[email protected]> Signed-off-by: Avery Yingyi Huang <[email protected]>
[Config] add "qwen" as a native eagle3 target supported model (vllm-project#22333) Signed-off-by: lechen <[email protected]> Signed-off-by: LeChen <[email protected]> Signed-off-by: Paul Pak <[email protected]>
Purpose
Add "qwen" to the `eagle3_target_supported` model list in config.py, so that Qwen models can run with Eagle3 by setting `speculative_config`. Previously, "qwen" was only enabled for Eagle3 when `draft_model_config` was provided with `SpeculatorsConfig`. The architectures of the Qwen3-Eagle3 drafter models listed in the EAGLE project are all `LlamaForCausalLMEagle3`; this needs to be added to the model registry for compatibility (sketched below).
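A hedged sketch of what such a registry entry could look like; the key is the architecture string from the drafter checkpoints, while the module/class mapping assumes the drafter reuses vLLM's existing Eagle3 Llama implementation and is an illustration, not the exact entry in the tree:

```python
# Assumed shape of a speculative-decoding registry entry: the architecture
# name from the drafter's config.json mapped to (module, class). The key
# comes from the EAGLE project checkpoints; the value is an assumption.
SPECULATIVE_DECODING_MODELS = {
    "LlamaForCausalLMEagle3": ("llama_eagle3", "Eagle3LlamaForCausalLM"),
}
```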
Test Plan
SpeculatorsConfig (Qwen/Qwen3-8B + AngelSlim/Qwen3-8B_eagle3).

Test Result