[ROCm][perf] Shuffle KV cache to use paged_attention_common #32914
samutamm wants to merge 9 commits into vllm-project:main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; instead, only a small and essential subset of CI tests runs to quickly catch errors. You can ask your reviewers to trigger select CI tests on top of that. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Code Review
The pull request updates the AITER branch in the Dockerfile and integrates aiter.paged_attention_common for shuffle KV cache handling in rocm_aiter_fa.py. This change aims to fix performance issues with small concurrencies for specific Qwen models. The introduction of temporary tensors (tmp_out, exp_sums, max_logits) and new scaling parameters (K_QScale_hip, V_QScale_hip, K_QScale_asm, V_QScale_asm) to the paged_attention_common function is a significant update to the attention mechanism. I've identified a couple of issues related to variable redefinition and unreachable code that should be addressed.
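To make the partition-style temporaries mentioned above concrete, here is a minimal sketch of how tmp_out, exp_sums, and max_logits are typically allocated for a split-KV paged-attention kernel. The shapes follow the usual paged-attention v2 convention; the partition size and helper name are assumptions, and the exact aiter.paged_attention_common signature is not reproduced here.

```python
import torch

_PARTITION_SIZE = 256  # assumed partition size; the real value is defined by the kernel


def allocate_pa_temporaries(output: torch.Tensor, max_seq_len: int):
    """Hypothetical helper: allocate per-partition reduction buffers."""
    num_seqs, num_heads, head_size = output.shape
    max_num_partitions = (max_seq_len + _PARTITION_SIZE - 1) // _PARTITION_SIZE
    # Partial outputs per partition, reduced into `output` by the kernel.
    tmp_out = torch.empty(
        (num_seqs, num_heads, max_num_partitions, head_size),
        dtype=output.dtype, device=output.device)
    # Softmax statistics per partition, used for the final rescaling.
    exp_sums = torch.empty(
        (num_seqs, num_heads, max_num_partitions),
        dtype=torch.float32, device=output.device)
    max_logits = torch.empty_like(exp_sums)
    return tmp_out, exp_sums, max_logits
```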
Force-pushed from ea196ed to 6cf3af5.
We would usually split this PR in two: upgrade the AITER version first, then introduce the new kernel.

We will keep this PR on hold for now; once the AITER commit version is upgraded and it contains the kernel, we will continue with this PR.
Signed-off-by: Samu Tamminen <[email protected]>
Force-pushed from 778460c to 3d36878.
Hi @samutamm, the pre-commit checks have failed. Please run:
uv pip install pre-commit
pre-commit install
pre-commit run --all-files
Then, commit the changes and push to your branch.
Signed-off-by: Samu Tamminen <[email protected]>
Signed-off-by: Samu Tamminen <[email protected]>
Purpose
For the Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 model, VLLM_ROCM_SHUFFLE_KV_CACHE_LAYOUT=1 currently performs worse at small concurrencies than VLLM_ROCM_SHUFFLE_KV_CACHE_LAYOUT=0. This PR fixes the issue by using paged_attention_common from aiter (see ROCm/aiter#1821).
Test Plan
For input and output lengths of 1k and 8k and concurrencies of 8, 18, 32, 64, and 128, compare the current main branch with and without VLLM_ROCM_SHUFFLE_KV_CACHE_LAYOUT (_vllm_main_shuffle1 and _vllm_main_shuffle0, respectively) against the changes of this PR (_pr_shuffle1).
Also verified on MI355.
Also verified for Qwen/Qwen3-235B-A22B-Instruct-2507.
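As a quick sanity check of the layout toggle used in the test plan (not the benchmark itself), the snippet below is a minimal sketch: the env var and model name come from this PR, while the prompt, max_tokens, and tensor_parallel_size are illustrative assumptions.

```python
import os

# Toggle the shuffled KV-cache layout before vLLM is imported; use "0" for the baseline run.
os.environ["VLLM_ROCM_SHUFFLE_KV_CACHE_LAYOUT"] = "1"

from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
    tensor_parallel_size=8,  # assumed; pick whatever fits the target node
)
params = SamplingParams(temperature=0.0, max_tokens=64)
out = llm.generate(["Paged attention stores the KV cache in"], params)
print(out[0].outputs[0].text)
```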
Test Result
For input length 8k and output length 1k (green lines), the changes of this PR (_pr_shuffle1, the solid line) outperform the main branch with or without shuffle KV cache.
For input length 1k and output length 8k (orange lines), the changes of this PR (_pr_shuffle1, the solid line) likewise outperform the main branch with or without shuffle KV cache.
For input length 1k and output length 1k (blue lines), the changes of this PR (_pr_shuffle1, the solid line) are very close to the main branch; this might require further adjustment in aiter paged_attention_common.
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.