
Conversation

@shen-shanshan (Collaborator) commented Aug 14, 2025

What this PR does / why we need it?

Refactor AscendAttentionMetadataBuilder for better extensibility and make the torchair builder class extend it.

Extract the _assemble_build_info() and _assemble_attn_metadata() methods from build() in AscendAttentionMetadataBuilder for better extensibility.

Workflow of the build() method:

  • Prepare build info: the common logic shared by all builders.
  • _assemble_build_info(): custom logic that can be overridden in torchair_attention.py.
  • _assemble_attn_metadata(): custom logic that can be overridden in torchair_attention.py.

After this refactor, we can remove the build() method in AscendAttentionTorchairMetadataBuilder and only need to override these two methods: _assemble_build_info() and _assemble_attn_metadata().
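The workflow above is a classic template-method split: build() keeps the shared steps and delegates the customizable ones to two hooks. A minimal sketch of that shape follows; only build(), _assemble_build_info() and _assemble_attn_metadata() are names taken from the PR, while every other identifier, field, and the dict-based metadata are illustrative assumptions, not the actual vllm-ascend code.

```python
# Hedged sketch of the template-method split described above.
# Only build(), _assemble_build_info() and _assemble_attn_metadata()
# come from the PR; all other names here are illustrative.

class AscendAttentionMetadataBuilder:
    def build(self, num_actual_tokens: int) -> dict:
        # Step 1: prepare build info -- common logic for all builders.
        build_info = {"num_actual_tokens": num_actual_tokens}
        # Steps 2-3: hooks that subclasses override instead of build().
        build_info = self._assemble_build_info(build_info)
        return self._assemble_attn_metadata(build_info)

    def _assemble_build_info(self, build_info: dict) -> dict:
        return build_info  # default: nothing extra to add

    def _assemble_attn_metadata(self, build_info: dict) -> dict:
        return {"build_info": build_info}


class AscendAttentionTorchairMetadataBuilder(AscendAttentionMetadataBuilder):
    # No build() override needed; only the two hooks differ.
    def _assemble_build_info(self, build_info: dict) -> dict:
        build_info["graph_padded"] = True  # e.g. torchair graph padding
        return build_info

    def _assemble_attn_metadata(self, build_info: dict) -> dict:
        metadata = super()._assemble_attn_metadata(build_info)
        metadata["torchair"] = True
        return metadata
```

With this shape, the torchair subclass never touches build(): the shared preparation logic lives in one place, and each backend only supplies the two assembly steps.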

Note

Do not merge this PR before #2017.

Does this PR introduce any user-facing change?

How was this patch tested?

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the build method in AscendAttentionMetadataBuilder by extracting _prepare_build_info and _assemble_build_info. This is a good change that improves modularity and extensibility, as demonstrated by the new AscendAttentionTorchairMetadataBuilder. The implementation is solid, but I've found one area in the new torchair_attention.py file with some confusing code that could be clarified.

Comment on lines 247 to 178
pad_value = 0
num_token_pad_size = graph_pad_size - num_actual_tokens
num_reqs_pad_size = (
    graph_pad_size // self.runner.decode_token_per_req -
    num_reqs)
Severity: high

The variable pad_value is assigned the value 0 on line 247, but this value is never used because it is unconditionally reassigned to 1 on line 252 before its first use. This makes the assignment on line 247 dead code, which is confusing and should be removed to improve clarity.

Suggested change:

                    num_token_pad_size = graph_pad_size - num_actual_tokens
                    num_reqs_pad_size = (
                        graph_pad_size // self.runner.decode_token_per_req -
                        num_reqs)


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand it.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@shen-shanshan shen-shanshan changed the title [4/N][Refactor] Extract _prepare_build_info() and _assemble_build_info() from build() in AscendAttentionMetadataBuilder [4/N][Refactor] Refactor AscendAttentionMetadataBuilder for better extensibility and make the builder class of torchair extend from it Aug 15, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.


codecov bot commented Aug 22, 2025

Codecov Report

❌ Patch coverage is 92.00000% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 78.01%. Comparing base (5d8ec28) to head (cafae88).

Files with missing lines          | Patch %  | Lines
vllm_ascend/attention/attention_v1.py | 92.00% | 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2375      +/-   ##
==========================================
+ Coverage   77.99%   78.01%   +0.02%     
==========================================
  Files         134      134              
  Lines       18498    18515      +17     
==========================================
+ Hits        14427    14444      +17     
  Misses       4071     4071              
Flag      | Coverage Δ
unittests | 78.01% <92.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.


@shen-shanshan shen-shanshan added ready-for-test start test by label for PR accuracy-test enable all accuracy test for PR labels Aug 25, 2025
@shen-shanshan shen-shanshan force-pushed the air branch 3 times, most recently from cafae88 to 5ae0c6f Compare August 29, 2025 08:47
…make the builder class of torchair extend from it

Signed-off-by: shen-shanshan <[email protected]>