
Commit f747011

Agoniii and xueh-nv authored
[vllm] fix: fix error in vllm patch for diff vllm version and add ci for moe with fp8 rollout (verl-project#4824)
### What does this PR do?

Fix an error in the vllm patch across different vllm versions, and add CI for MoE with FP8 rollout.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (this will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
  - If this PR involves multiple modules, separate them with `,`, like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.

---------

Co-authored-by: Xue Huang <xueh@nvidia.com>
1 parent af0f2bb commit f747011

File tree

2 files changed (+21, −3)

.github/workflows/e2e_ppo_trainer_megatron_vllm_2.yml (9 additions, 0 deletions)

```diff
@@ -139,6 +139,15 @@ jobs:
           MODEL_ID=Qwen/Qwen3-30B-A3B-Instruct-2507 USE_MBRIDGE=True VANILLA_MBRIDGE=False VALUE_VANILLA_MBRIDGE=False \
           COMMON_PP=2 COMMON_VPP=null COMMON_CP=1 COMMON_TP=4 COMMON_EP=4 COMMON_ETP=1 INFER_TP=8 \
           USE_DIST_CKPT=True ALL_OFFLOAD=True SKIP_SAVE_HF_MODEL=1 bash tests/special_e2e/run_ppo_trainer_megatron.sh
+      - name: Running GSM8K E2E training tests with 3D parallelism with FP8 rollout on 8 L20 GPUs with Megatron-Bridge (Qwen3-30B-A3B-Instruct-2507)
+        run: |
+          ray stop --force
+          ADV_ESTIMATOR=grpo USE_DUMMY_MODEL=True DUMMY_MODEL_CONFIG_PATH=tests/special_e2e/ppo_trainer/expert_parallel/qwen2moe_minimal.json \
+          PPO_MAX_TOKEN_LEN=1024 FWD_MAX_TOKEN_LEN=1024 \
+          MAX_PROMPT_LENGTH=512 MAX_RESPONSE_LENGTH=512 \
+          MODEL_ID=Qwen/Qwen3-30B-A3B-Instruct-2507 USE_MBRIDGE=True VANILLA_MBRIDGE=False VALUE_VANILLA_MBRIDGE=False \
+          COMMON_PP=2 COMMON_VPP=null COMMON_CP=1 COMMON_TP=4 COMMON_EP=4 COMMON_ETP=1 INFER_TP=2 \
+          USE_DIST_CKPT=True ALL_OFFLOAD=True SKIP_SAVE_HF_MODEL=1 ROLLOUT_QUANTIZATION=fp8 bash tests/special_e2e/run_ppo_trainer_megatron.sh
       - name: clean up
         run: |
           rm -rf checkpoints
```
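The new step is driven entirely by environment variables consumed by `tests/special_e2e/run_ppo_trainer_megatron.sh`. For reference, here is a minimal sketch of reproducing the FP8-rollout step locally; the variable names and values are copied from the diff above, while the Python/subprocess wrapper (rather than a shell one-liner) is purely illustrative and assumes a checkout of the verl repo with `ray` installed:

```python
# Hypothetical local reproduction of the new FP8-rollout CI step.
import os
import subprocess

# Same environment the workflow sets for this step.
env = dict(
    os.environ,
    ADV_ESTIMATOR="grpo",
    USE_DUMMY_MODEL="True",
    DUMMY_MODEL_CONFIG_PATH="tests/special_e2e/ppo_trainer/expert_parallel/qwen2moe_minimal.json",
    PPO_MAX_TOKEN_LEN="1024",
    FWD_MAX_TOKEN_LEN="1024",
    MAX_PROMPT_LENGTH="512",
    MAX_RESPONSE_LENGTH="512",
    MODEL_ID="Qwen/Qwen3-30B-A3B-Instruct-2507",
    USE_MBRIDGE="True",
    VANILLA_MBRIDGE="False",
    VALUE_VANILLA_MBRIDGE="False",
    COMMON_PP="2",
    COMMON_VPP="null",
    COMMON_CP="1",
    COMMON_TP="4",
    COMMON_EP="4",
    COMMON_ETP="1",
    INFER_TP="2",
    USE_DIST_CKPT="True",
    ALL_OFFLOAD="True",
    SKIP_SAVE_HF_MODEL="1",
    ROLLOUT_QUANTIZATION="fp8",  # the flag this PR wires into CI
)

# Mirror the CI step: stop any stale Ray cluster, then run the test script.
subprocess.run(["ray", "stop", "--force"], check=False)
subprocess.run(["bash", "tests/special_e2e/run_ppo_trainer_megatron.sh"], env=env, check=True)
```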

verl/utils/vllm/vllm_fp8_utils.py (12 additions, 3 deletions)

```diff
@@ -323,7 +323,10 @@ def _create_param_from_subclass_attributes(custom_param):
 
         del layer.weight_scale_inv
 
-        maybe_post_process_fp8_weight_block(layer)
+        if version.parse(vllm.__version__) == version.parse("0.11.0"):
+            maybe_post_process_fp8_weight_block(layer, self.cutlass_block_fp8_supported)
+        else:
+            maybe_post_process_fp8_weight_block(layer)
 
 
     def process_weights_after_loading_moe_for_vllm10(self, layer) -> None:
@@ -404,7 +407,6 @@ def _create_param_from_subclass_attributes(custom_data, custom_weight):
 
     def process_weights_after_loading_moe_for_vllm11(self, layer) -> None:
         """This function is used to process the weights after loading for a FusedMoE layer, it is used for vllm 0.11"""
-        from vllm.model_executor.layers.fused_moe.rocm_aiter_fused_moe import is_rocm_aiter_moe_enabled
         from vllm.model_executor.layers.quantization.utils.flashinfer_utils import (
             swap_w13_to_w31,
         )
@@ -417,7 +419,14 @@ def process_weights_after_loading_moe_for_vllm11(self, layer) -> None:
             is_deep_gemm_e8m0_used,
         )
 
-        self.rocm_aiter_moe_enabled = is_rocm_aiter_moe_enabled()
+        try:
+            from vllm.model_executor.layers.fused_moe.rocm_aiter_fused_moe import is_rocm_aiter_moe_enabled
+
+            self.rocm_aiter_moe_enabled = is_rocm_aiter_moe_enabled()
+        except ImportError:
+            from vllm._aiter_ops import rocm_aiter_ops
+
+            self.rocm_aiter_moe_enabled = rocm_aiter_ops.is_fused_moe_enabled()
 
         assert self.block_quant and self.quant_config.is_checkpoint_fp8_serialized
         assert self.quant_config.activation_scheme == "dynamic"
```
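The two hunks above use two standard patterns for tolerating API drift between vllm releases: an explicit version gate for a signature that differs in exactly one release, and an `ImportError` fallback for a helper that moved modules. Below is a minimal self-contained sketch of both; `apply_fp8_post_process` and the two `post_process_*` stand-ins are hypothetical placeholders for vllm's `maybe_post_process_fp8_weight_block`, while the import paths in `detect_rocm_aiter_moe` are the ones the patch itself uses (assumes vllm is installed):

```python
from packaging import version

import vllm


def post_process_v011(layer, cutlass_block_fp8_supported):
    """Stand-in for vllm 0.11.0's two-argument maybe_post_process_fp8_weight_block."""


def post_process_other(layer):
    """Stand-in for the one-argument signature on other vllm versions."""


def apply_fp8_post_process(layer, cutlass_block_fp8_supported):
    # Pattern 1: gate on the installed version when a function's signature
    # changed in exactly one release.
    if version.parse(vllm.__version__) == version.parse("0.11.0"):
        post_process_v011(layer, cutlass_block_fp8_supported)
    else:
        post_process_other(layer)


def detect_rocm_aiter_moe() -> bool:
    # Pattern 2: try the old import path first, and fall back to the new
    # location when upstream relocated the helper between releases.
    try:
        from vllm.model_executor.layers.fused_moe.rocm_aiter_fused_moe import (
            is_rocm_aiter_moe_enabled,
        )

        return is_rocm_aiter_moe_enabled()
    except ImportError:
        from vllm._aiter_ops import rocm_aiter_ops

        return rocm_aiter_ops.is_fused_moe_enabled()
```

The version gate is the right tool when both code paths import cleanly but the call signature differs; the `try`/`except ImportError` is preferable when the symbol's module path itself changed, since it keeps working for future versions without another version pin.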
