
Conversation

Contributor

@DreamerLeader commented Nov 25, 2025

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a precision synchronization issue. My review focuses on improving code efficiency and readability. I've identified a duplicated loop in mooncake_engine.py that impacts performance and suggested a refactoring. Additionally, I've pointed out a small simplification in mooncake_store_connector_v1.py to improve code clarity.
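The actual duplicated loop in mooncake_engine.py is not quoted in this conversation, so the following is only a hypothetical sketch of the kind of refactoring the review summary describes: collapsing two passes over the same requests into one. The names `SaveSpec`, `build_save_spec`, `enqueue_save`, and `process_*` are placeholders, not identifiers from the PR.

```python
# Hypothetical sketch of a duplicated-loop refactor; names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SaveSpec:
    can_save: bool


def build_save_spec(req: dict) -> Optional[SaveSpec]:
    # Placeholder: decide whether this request's KV blocks should be saved.
    return SaveSpec(can_save=req.get("num_tokens", 0) > 0)


def enqueue_save(req: dict, spec: SaveSpec) -> None:
    print(f"saving request {req['id']}")


# Before: two passes over the same requests do closely related work twice.
def process_before(requests: list[dict]) -> None:
    specs = [build_save_spec(r) for r in requests]
    for req, spec in zip(requests, specs):
        if spec is not None and spec.can_save:
            enqueue_save(req, spec)


# After: one pass computes each spec and acts on it immediately,
# avoiding the second iteration and the intermediate list.
def process_after(requests: list[dict]) -> None:
    for req in requests:
        spec = build_save_spec(req)
        if spec is not None and spec.can_save:
            enqueue_save(req, spec)
```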

Comment on lines +354 to +356
skip_save = False
if num_computed_token >= len(request.prompt_token_ids):
    skip_save = True
Contributor

Severity: high

This block can be simplified to a single boolean assignment, which is more concise and idiomatic Python.

Suggested change
- skip_save = False
- if num_computed_token >= len(request.prompt_token_ids):
-     skip_save = True
+ skip_save = num_computed_token >= len(request.prompt_token_ids)

if save_spec is None or not save_spec.can_save:
    continue

torch.npu.synchronize()
Contributor

You probably need to delete torch.npu.current_stream().synchronize() from the https://github.com/vllm-project/vllm-ascend/blob/main/vllm_ascend/distributed/mooncake/kv_transfer.py file, right? Also, is there any difference between torch.npu.current_stream().synchronize() and torch.npu.synchronize()?
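For context on the question above: assuming the torch_npu `torch.npu` namespace mirrors the `torch.cuda` API (as it does for streams and synchronization), `torch.npu.synchronize()` blocks until all streams on the device have finished, while `torch.npu.current_stream().synchronize()` only waits for work enqueued on the current stream. A minimal sketch of the distinction, assuming a torch_npu installation and an available NPU device:

```python
import torch
import torch_npu  # noqa: F401  # registers the torch.npu backend

device = "npu:0"
a = torch.randn(1024, 1024, device=device)
side_stream = torch.npu.Stream(device=device)

# Launch work on a non-default stream.
with torch.npu.stream(side_stream):
    b = a @ a

# Waits only for kernels on the *current* (default) stream; the matmul
# submitted to `side_stream` may still be running afterwards.
torch.npu.current_stream().synchronize()

# Waits for *all* streams on the device, including `side_stream`, so it is
# the stronger (and typically more expensive) barrier.
torch.npu.synchronize()
```

So the two calls only behave the same when everything runs on a single stream; with multiple streams, the device-wide synchronize is the safer choice before reading back data saved from another stream.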

