
Conversation


@aaab8b aaab8b commented Dec 24, 2025

Change the assignment of unquantized MoE weights when using AITER on ROCm, making weight reloading safer. This fixes the random-output case seen after wake-up and weight reloading in reinforcement learning.

Purpose

Change the assignment of unquantized MoE weights when using AITER on ROCm, making weight reloading safer. This fixes the random-output case seen after wake-up and weight reloading in reinforcement learning.

I've been doing some adaptation work to use vLLM with ROLL (https://github.com/alibaba/ROLL/) on ROCm platforms. The model generates random characters after the vLLM instance sleeps, wakes up, and reloads model weights from the training actors. I found that AITER shuffles the original weights into a layout better suited for computation, but the shuffle returns a new tensor rather than updating the original one in place. After CUDA graphs have been captured, they still reference the old tensor, which causes problems when weights are reloaded after wake-up. This pull request fixes the issue for general vLLM usage with AITER on ROCm platforms in reinforcement learning.
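The aliasing problem described above can be sketched in plain Python (all names here are hypothetical, not the actual vLLM/AITER code): rebinding an attribute to a freshly shuffled buffer leaves any previously captured reference pointing at stale data, whereas an in-place copy updates every holder of the reference.

```python
# Minimal sketch of the aliasing bug, using a list as a stand-in
# for a weight tensor (hypothetical names, not the real vLLM code).

def aiter_shuffle(weights):
    # AITER-style shuffle: returns a NEW buffer in a different layout.
    return list(reversed(weights))

layer = {"w13_weight": [1.0, 2.0, 3.0, 4.0]}
captured_ref = layer["w13_weight"]  # what a captured CUDA graph would hold

# Buggy pattern: rebinding points the layer at a new buffer, so the
# captured reference still sees the old, unshuffled data.
layer["w13_weight"] = aiter_shuffle(layer["w13_weight"])
assert captured_ref == [1.0, 2.0, 3.0, 4.0]  # stale reference

# Safer pattern (the approach of this PR): write the shuffled data back
# into the ORIGINAL buffer in place, so every existing reference sees it.
layer["w13_weight"] = captured_ref           # restore the original buffer
layer["w13_weight"][:] = aiter_shuffle(layer["w13_weight"])
assert captured_ref == [4.0, 3.0, 2.0, 1.0]  # graph now sees shuffled data
```

With real tensors the in-place pattern corresponds to something like `param.data.copy_(shuffled)` instead of `param = shuffled`, which keeps the storage address stable for captured graphs and for later weight reloads.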

Test Plan

Enable VLLM_ROCM_USE_AITER=1 and VLLM_ROCM_USE_AITER_MOE=1, then test Qwen3-30B-A3B (or another MoE model) after wake-up and weight reloading in reinforcement learning.

Test Result

With the previous assignment, the output is random characters; with the assignment in this PR, the output is normal.

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

