
Conversation


@lio1226 lio1226 commented Sep 30, 2025

What this PR does / why we need it?

We optimized the _prepare_inputs method in eagle_proposer so that it no longer relies on the sequential _prepare_eagle_input_sequential method, improving the performance of EAGLE-3.
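
For illustration only (a rough sketch of the idea, not the actual diff; the helper name and the num_rejected_tokens_np argument are assumptions), the vectorized preparation replaces a per-request loop with numpy operations on the host followed by a single copy to the device:

```python
import numpy as np
import torch


def build_cu_num_tokens(query_start_loc_np: np.ndarray,
                        num_rejected_tokens_np: np.ndarray,
                        device: torch.device) -> torch.Tensor:
    """Sketch: vectorized CPU computation of cumulative accepted-token offsets.

    query_start_loc_np:     cumulative target query lengths, shape [num_reqs + 1]
    num_rejected_tokens_np: rejected draft tokens per request, shape [num_reqs]
    """
    # Per-request query lengths: [a, b, c]
    query_len_per_req = query_start_loc_np[1:] - query_start_loc_np[:-1]
    # Accepted tokens per request: [a - n1, b - n2, c - n3]
    num_tokens_per_req = query_len_per_req - num_rejected_tokens_np
    # Cumulative offsets: [0, a - n1, a + b - n1 - n2, a + b + c - n1 - n2 - n3]
    new_query_start_loc = np.zeros_like(query_start_loc_np)
    np.cumsum(num_tokens_per_req, out=new_query_start_loc[1:])
    # One host-to-device copy instead of a device-side per-request loop.
    return torch.from_numpy(new_query_start_loc).to(device, non_blocking=True)
```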

Does this PR introduce any user-facing change?

No

How was this patch tested?

python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 \
  --port 13963 \
  --dtype bfloat16 \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --served-model-name Llama-3.1-8B-Instruct \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.85 \
  --max-model-len 32768 \
  --trust-remote-code \
  --seed 42 \
  --no-enable-prefix-caching \
  --speculative_config '{"method":"eagle3","model":"yuhuili/EAGLE3-LLaMA3.1-Instruct-8B","num_speculative_tokens":2,"draft_tensor_parallel_size":1}'
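
Once the server is up, a quick way to exercise the speculative-decoding path is to send a chat completion request to the OpenAI-compatible endpoint. This is just an illustrative smoke test; the prompt and max_tokens values are arbitrary:

```python
import requests

# Smoke test against the server launched above (port 13963, served model name
# "Llama-3.1-8B-Instruct"); prompt and max_tokens are arbitrary example values.
resp = requests.post(
    "http://127.0.0.1:13963/v1/chat/completions",
    json={
        "model": "Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Briefly explain speculative decoding."}],
        "max_tokens": 128,
        "temperature": 0,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```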

Co-authored-by: QilaiZhang ([email protected])


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the _prepare_inputs method in eagle_proposer.py to improve performance by replacing a sequential, on-device loop with vectorized numpy operations on the CPU. This is a solid optimization. I've found one area for further improvement: there's a redundant calculation that can be removed to make the code even more efficient.

Comment on lines +683 to 691
# need use npu
query_len_per_req = (cu_target_query_lens[1:] -
cu_target_query_lens[:-1])
# [a, b, c] -> [a - n1, b - n2, c - n3]
num_tokens_per_req = query_len_per_req - num_rejected_tokens

# [a - n1, b - n2, c - n3] ->
# [0, a - n1, a + b - n1 - n2, a + b + c - n1 - n2 - n3]
cu_num_tokens = torch.zeros_like(cu_target_query_lens)
torch.cumsum(num_tokens_per_req, dim=0, out=cu_num_tokens[1:])


high

The calculation of cu_num_tokens on the device is redundant. The same value has already been computed on the CPU as new_query_start_loc_cpu. Instead of re-calculating it on the device, you can simply transfer the CPU tensor to the device. This avoids unnecessary computation and better aligns with the goal of optimizing performance.

Suggested change
-    # need use npu
-    query_len_per_req = (cu_target_query_lens[1:] -
-                         cu_target_query_lens[:-1])
-    # [a, b, c] -> [a - n1, b - n2, c - n3]
-    num_tokens_per_req = query_len_per_req - num_rejected_tokens
-    # [a - n1, b - n2, c - n3] ->
-    # [0, a - n1, a + b - n1 - n2, a + b + c - n1 - n2 - n3]
-    cu_num_tokens = torch.zeros_like(cu_target_query_lens)
-    torch.cumsum(num_tokens_per_req, dim=0, out=cu_num_tokens[1:])
+    cu_num_tokens = new_query_start_loc_cpu.to(device, non_blocking=True)
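
One related note (not part of the suggestion itself): non_blocking=True only actually overlaps the host-to-device copy with other work when the source CPU tensor lives in pinned (page-locked) memory; otherwise the copy is effectively synchronous. A minimal sketch, assuming new_query_start_loc_cpu is the CPU tensor named in the comment above and that pinning it is acceptable here:

```python
# Assumption: pin the CPU staging tensor so the non_blocking copy can overlap.
new_query_start_loc_cpu = new_query_start_loc_cpu.pin_memory()
cu_num_tokens = new_query_start_loc_cpu.to(device, non_blocking=True)
```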
