
Conversation

@chenjiaoAngel
Contributor

@chenjiaoAngel chenjiaoAngel commented Dec 26, 2025

What does this PR do?

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Use the trainer node to run the validation process when the run mode is fully-async. This saves validation compute time and reduces the perf/time_of_step peak.

  • add a new use_trainer_do_validate option to the fully_async async_training config to decide whether the trainer node runs the validation process
  • use_trainer_do_validate: defaults to false
  • it improves validation performance; for example, in dapo_7b_math_fsdp2_8_8.sh it gives about a 1x speedup
  • optimized process_validation_metrics() in the _validate() process: with an input dataset of length 1444, latency drops from 150+s to 40+s
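
The kind of aggregation change that produces this speedup can be sketched as a single grouping pass over the samples instead of repeated per-metric scans. This is a hypothetical illustration only; the function and field names are not verl's actual API:

```python
from collections import defaultdict

def aggregate_validation_metrics(data_sources, scores):
    """One O(n) grouping pass, then one reduction per group.

    Replacing repeated full-dataset scans with a single pass is the
    sort of change that can cut multi-minute aggregation to seconds.
    """
    groups = defaultdict(list)
    for source, score in zip(data_sources, scores):
        groups[source].append(score)
    # One mean per data source, computed from the already-grouped lists.
    return {src: sum(vals) / len(vals) for src, vals in groups.items()}

metrics = aggregate_validation_metrics(
    ["math", "math", "code", "code"],
    [1.0, 0.0, 1.0, 1.0],
)
# metrics == {"math": 0.5, "code": 1.0}
```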

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

  • on test scripts such as dapo_7b_math_fsdp2_8_8.sh, add async_training.use_trainer_do_validate=True to the command
  • the results of this feature on the Qwen2.5-Math-7B model are shown below
  • the baseline script is dapo_7b_math_fsdp2_8_8.sh
  • the optimized script is dapo_7b_math_fsdp2_8_8.sh + async_training.use_trainer_do_validate=True
  • the accuracy and performance are below:
  • green: optimized case (async_training.use_trainer_do_validate=True)
  • gray: baseline case (async_training.use_trainer_do_validate=False)

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this
async_training.use_trainer_do_validate=True \

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the functionality to perform validation on the trainer node, controlled by the use_trainer_do_validate flag. The changes span across configuration, worker logic, data handling, and synchronization. Key modifications include adding the use_trainer_do_validate flag, conditionally assigning the ActorRollout role to the trainer, splitting the validation dataset, and updating the trainer to merge validation metrics. The overall approach is sound, but I've identified a few critical issues related to robustness and potential runtime errors that should be addressed. Specifically, there are concerns with how unique actor names are generated, how validation results are merged, and how datasets are split. These are detailed in the comments.

@chenjiaoAngel chenjiaoAngel force-pushed the up_fully_validate branch 2 times, most recently from 4c116d2 to 5b5d5cd Compare December 26, 2025 12:58
@chenjiaoAngel chenjiaoAngel changed the title [fsdp, fully-async] feat: add validate process on trainer node when use_trainer_do_valida… [fsdp, fully-async] feat: add validate process on trainer node when use_trainer_do_validate=True … Dec 29, 2025
@ArronHZG ArronHZG self-requested a review December 29, 2025 06:51
@wuxibin89 wuxibin89 changed the title [fsdp, fully-async] feat: add validate process on trainer node when use_trainer_do_validate=True … [fsdp] feat: add validate process on trainer node when use_trainer_do_validate=True … Jan 8, 2026
@wuxibin89 wuxibin89 changed the title [fsdp] feat: add validate process on trainer node when use_trainer_do_validate=True … [fsdp] feat: add validate process on trainer node when use_trainer_do_validate=True Jan 8, 2026
@wuxibin89
Collaborator

Please format code according to: https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md

@chenjiaoAngel
Contributor Author

Please format code according to: https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md

Done~

images, videos = process_vision_info(messages, image_patch_size=image_patch_size, return_video_metadata=True)
return images, videos

def split(self, num_splits: int):
Collaborator


Why do we need to split the dataset?

Contributor Author


Why do we need to split the dataset?

We need to split the dataset between the rollout and trainer nodes according to the trainer:rollout ratio.
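
That proportional split can be sketched as follows (a toy illustration; the actual ratio handling in the PR may differ):

```python
def split_by_ratio(num_samples: int, trainer_gpus: int, rollout_gpus: int):
    """Split validation samples between trainer and rollout nodes
    proportionally to their GPU counts (the trainer:rollout ratio)."""
    total = trainer_gpus + rollout_gpus
    trainer_share = num_samples * trainer_gpus // total
    # The rollout side takes the remainder so no sample is dropped.
    return trainer_share, num_samples - trainer_share

# e.g. 1444 samples on an 8:8 setup -> an even 722/722 split
trainer_n, rollout_n = split_by_ratio(1444, 8, 8)
```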

Collaborator


Is it better to split the validation files directly instead of splitting the RLDataset?

Contributor Author


Is it better to split the validation files directly instead of splitting the RLDataset?

Splitting the validation files directly may need some study; I know the easy way is to split the dataset object. I will check whether the file-based approach is easy to make work.

Contributor Author


If we split the data directly from the file, we need to save two temporary files and then use rl_dataset to get the data class; this seems to involve redundant read/write operations. If we split the data directly from the dataset object, my understanding is that we only need one round of I/O read/write, and there is no need to save intermediate files.
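
A minimal sketch of such an in-memory `split(num_splits)` (a stand-in class; verl's actual RLDataset implementation may differ):

```python
class SimpleDataset:
    """Minimal stand-in for an RL dataset (illustrative, not verl's RLDataset)."""

    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def split(self, num_splits: int):
        """Partition the dataset object in memory, avoiding temp files.

        Contiguous chunks; the first shards absorb the remainder so
        every sample lands in exactly one shard.
        """
        base, extra = divmod(len(self.samples), num_splits)
        shards, start = [], 0
        for i in range(num_splits):
            size = base + (1 if i < extra else 0)
            shards.append(SimpleDataset(self.samples[start:start + size]))
            start += size
        return shards

ds = SimpleDataset(list(range(10)))
parts = ds.split(3)
# shard sizes: [4, 3, 3]
```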

@wuxibin89
Collaborator

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable feature to perform validation on the trainer node, which can significantly improve performance by overlapping validation with training. The implementation is well-structured, touching upon configuration, documentation, worker logic, and metric collection to support this new validation strategy. Additionally, the performance optimization in process_validation_metrics is a great improvement. I've identified one critical issue in the dataset splitting logic that could prevent this feature from being used in many common scenarios.

@wuxibin89 wuxibin89 merged commit bb64f42 into volcengine:main Jan 8, 2026
70 of 71 checks passed
zhtmike added a commit to zhtmike/verl that referenced this pull request Jan 9, 2026
* [recipe] feat: migrate `recipe` to the dedicated repo `verl-recipe` as a submodule (#4795)

### What does this PR do?

This PR

1. migrates most recipes from the `recipe` directory to the dedicated
repo [`verl-recipe`](https://github.com/verl-project/verl-recipe),
2. adds `verl-recipe` as a submodule,
3. adds instruction to update the submodule reference in the PR
template,
4. migrates [`transfer_queue`](verl/experimental/transfer_queue),
[`fully_async_policy`](verl/experimental/fully_async_policy),
[`one_step_off_policy`](verl/experimental/one_step_off_policy) and
[`vla`](verl/experimental/vla) to
[`verl/experimental`](verl/experimental) since they are planned to be
merged into the main library,
5. updates related CI and paths accordingly,
6. updates the README news and awesome projects about this migration,
7. squashes into a single commit and does its best to recognize renames
so the commit history stays trackable.

See the "conjugate" PR at
https://github.com/verl-project/verl-recipe/pull/7.

### Test

See the CI.

### Todo

- [ ] Ignore the final PR commit in git blame if it shows up too
frequently.

* [model] fix: fix temp dtype (#4813)

### What does this PR do?

- As title. Prevent temperature from being an int.

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [vllm, sglang, rollout] fix: Fix a mistake when running run_qwen3_vl-30b-megatron.sh with latest verl and vllm0.12 (#4810)

* [ckpt] feat: add checkpoint-engine abstraction (#4775)

### What does this PR do?

Add Checkpoint Engine abstraction.

#### Overview
Checkpoint Engine is a unified abstraction layer for synchronizing weights
between various training backends and inference backends. It provides
three unified APIs:
- send_weights: get named tensors from a generator and send them in a
streaming manner.
- receive_weights: return a tensor generator that yields named tensors in a
streaming manner.
- get_weights: return a tensor generator that yields named tensors in a
streaming manner; used when each inference instance updates weights
independently from a local cache (e.g. shared memory, disk).

For more detail, see `verl/checkpoint_engine/README.md`.

#### verl core
<img width="640" height="167" alt="image"
src="https://github.com/user-attachments/assets/fbd125d7-b461-4c89-9678-b95a2ef89c33"
/>

#### checkpoint engine
<img width="1004" height="409" alt="checkpoint-engine"
src="https://github.com/user-attachments/assets/fc263c1f-17b2-4579-9842-87b24e12abc7"
/>
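
The three APIs read like generator-based streaming; a toy sketch of that shape (only the API names come from the README above, everything else is assumed):

```python
from typing import Iterator, Tuple

Named = Tuple[str, list]  # (param name, tensor payload); lists stand in for tensors

class InMemoryCheckpointEngine:
    """Toy checkpoint engine: streams named tensors through an in-memory store.

    Illustrative only -- the real engine moves tensors across processes;
    here 'send' and 'receive' just share a buffer to show the API shape.
    """

    def __init__(self):
        self._store = []

    def send_weights(self, named_tensors: Iterator[Named]) -> None:
        # Consume the generator one tensor at a time (streaming),
        # never materializing the full checkpoint at once.
        for name, tensor in named_tensors:
            self._store.append((name, tensor))

    def receive_weights(self) -> Iterator[Named]:
        # Yield tensors back in the same streaming fashion.
        yield from self._store

engine = InMemoryCheckpointEngine()
engine.send_weights(iter([("layer.w", [1, 2]), ("layer.b", [3])]))
received = dict(engine.receive_weights())
```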

* [doc, ci] fix: Update Ascend doc and fix e2e_ascend CI (#4816)

### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference related GitHub issues and PRs that help with the
review.

As title.


* [trainer] feat: VeOmniEngine supports qwen3_vl ulysses (#4806)

### What does this PR do?

as title.


* [doc] chore: fix checkpoint engine image link (#4821)

### What does this PR do?

As title

* [hardware] fix: automatically set device for SFT case (#4828)

### What does this PR do?

auto_set_device does not cover SFT case.


* [data] feat: TransferQueue - Update TransferQueue version and docs (#4829)

### What does this PR do?

- Update TQ to formal release version.
- Fix the shallow copy bug for chunking `BatchMeta`
https://gitcode.com/Ascend/TransferQueue/pull/2
- Fix race condition for modifying torch num_threads
https://gitcode.com/Ascend/TransferQueue/pull/5
- More robust port binding
https://gitcode.com/Ascend/TransferQueue/pull/3
- Optimize memory usage for zero-copy transfer
https://github.com/TransferQueue/TransferQueue/pull/163
- add check_data_production_status and check_consumption_status and
support polling get metadata
https://github.com/TransferQueue/TransferQueue/pull/157 @NINGBENZHE
- (alpha) Support Mooncake Store backend
https://github.com/TransferQueue/TransferQueue/pull/162 @zhaohaidao
- (alpha) Support Ray RDT backend
https://github.com/TransferQueue/TransferQueue/pull/167

- Update docs.


---------

Signed-off-by: 0oshowero0 <o0shower0o@outlook.com>

* [doc] Update docs about fully_async_policy (#4826)

### What does this PR do?

Update documentation about fully_async_policy and adjust the formatting
of the table.

---------

Co-authored-by: jsfanfanfan <2981866535@qq.com>

* [ckpt] fix: FSDP save ckpt after validation (#4799)

### What does this PR do?

This PR fixes a bug in the `save_checkpoint` function for FSDPEngine.

In the original logic, if the model engine is used
(`use_legacy_worker_impl=disable`), the `wake_up` function in
`verl/workers/engine_workers.py` will be invoked during the rollout
phase of each step, which will offload the model to CPU.

Under normal circumstances, the `compute_log_prob` function called
during the training phase can load the model back to GPU. However, the
training process is not executed during the validation phase, leaving
the model on the CPU. If a checkpoint is saved immediately after
validation, it will trigger the following error: `AssertionError:
Expects tensor to be on the compute device cuda:0, was on cpu.`
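
The offload/reload interplay described above can be modeled with a toy stand-in (illustrative only; the method names below paraphrase the behavior of `wake_up`/`compute_log_prob` and are not verl's real signatures):

```python
class OffloadableModule:
    """Toy stand-in for an FSDP-wrapped module that can be offloaded to CPU."""

    def __init__(self):
        self.device = "cuda:0"

    def offload(self):
        # What wake_up() effectively does before the rollout phase.
        self.device = "cpu"

    def load_to_gpu(self):
        # What compute_log_prob() does during the training phase.
        self.device = "cuda:0"

    def state_dict(self):
        # Mirrors the failure mode: collecting state on CPU raises.
        assert self.device != "cpu", (
            "Expects tensor to be on the compute device cuda:0, was on cpu."
        )
        return {"w": 1.0}

def save_checkpoint(module):
    """Sketch of the fix: reload before saving if validation left us on CPU."""
    if module.device == "cpu":
        module.load_to_gpu()
    return module.state_dict()

m = OffloadableModule()
m.offload()            # validation path: offloaded, no training step reloads it
sd = save_checkpoint(m)  # guard reloads first, so state_dict succeeds
```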

<details>
<summary>Details</summary>

Script:
```
set -x
python examples/data_preprocess/geo3k.py --local_dir ~/data/geo3k
python -m verl.trainer.main_ppo \
  algorithm.adv_estimator=grpo \
  data.train_files=$HOME/data/geo3k/train.parquet \
  data.val_files=$HOME/data/geo3k/test.parquet \
  data.train_batch_size=512 \
  data.max_prompt_length=1024 \
  data.max_response_length=2048 \
  data.filter_overlong_prompts=True \
  data.truncation='error' \
  data.image_key=images \
  actor_rollout_ref.model.path=Qwen/Qwen2.5-VL-3B-Instruct \
  actor_rollout_ref.actor.optim.lr=1e-6 \
  actor_rollout_ref.model.use_remove_padding=True \
  actor_rollout_ref.actor.ppo_mini_batch_size=128 \
  actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 \
  actor_rollout_ref.actor.use_kl_loss=True \
  actor_rollout_ref.actor.kl_loss_coef=0.01 \
  actor_rollout_ref.actor.kl_loss_type=low_var_kl \
  actor_rollout_ref.actor.entropy_coeff=0 \
  actor_rollout_ref.model.enable_gradient_checkpointing=True \
  actor_rollout_ref.actor.fsdp_config.param_offload=False \
  actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
  actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 \
  actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
  actor_rollout_ref.rollout.name=vllm \
  actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
  actor_rollout_ref.rollout.enable_chunked_prefill=False \
  actor_rollout_ref.rollout.enforce_eager=False \
  actor_rollout_ref.rollout.free_cache_engine=False \
  actor_rollout_ref.rollout.n=5 \
  actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 \
  actor_rollout_ref.ref.fsdp_config.param_offload=False \
  algorithm.use_kl_in_reward=False \
  trainer.use_legacy_worker_impl=disable \
  trainer.critic_warmup=0 \
  trainer.logger=['console','wandb'] \
  trainer.project_name='verl_ci_grpo_example_geo3k' \
  trainer.experiment_name='qwen2_5_vl_3b_function_rm' \
  trainer.n_gpus_per_node=8 \
  trainer.nnodes=1 \
  trainer.log_val_generations=20 \
  trainer.save_freq=5 \
  trainer.test_freq=5 \
  trainer.total_epochs=15
```

Error:

```
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) ERROR:2026-01-05
07:35:49,128:Got error when executing task.
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) Traceback (most
recent call last):
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 1890, in ray._raylet.execute_task
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 1998, in ray._raylet.execute_task
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 1897, in ray._raylet.execute_task
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 1825, in
ray._raylet.execute_task.function_executor
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 4651, in
ray._raylet.CoreWorker.run_async_func_or_coro_in_event_loop
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) return
self.__get_result()
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])
^^^^^^^^^^^^^^^^^^^
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in
__get_result
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) raise
self._exception
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"python/ray/_raylet.pyx", line 4638, in async_func
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) File
"/usr/local/lib/python3.12/dist-packages/ray/_private/async_compat.py",
line 50, in wrapper
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) return func(*args,
**kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/ray/_private/function_manager.py", line 691, in actor_method_executor
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return method(__ray_actor, *args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/ray/util/tracing/tracing_helper.py", line 463, in _resume_span
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return method(self, *_args, **_kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/single_controller/ray/base.py", line 841, in func
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return getattr(self.worker_dict[key], name)(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/single_controller/base/decorator.py", line 456, in inner
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return func(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/utils/transferqueue_utils.py", line 314, in dummy_inner
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     output = func(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/workers/engine_workers.py", line 541, in save_checkpoint
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     self.actor.save_checkpoint(local_path, hdfs_path, global_step, max_ckpt_to_keep)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/single_controller/base/decorator.py", line 456, in inner
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return func(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/utils/transferqueue_utils.py", line 314, in dummy_inner
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     output = func(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/workers/engine_workers.py", line 343, in save_checkpoint
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return self.engine.save_checkpoint(local_path, hdfs_path, global_step, max_ckpt_to_keep)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/workers/engine/fsdp/transformer_impl.py", line 607, in save_checkpoint
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     self.checkpoint_manager.save_checkpoint(
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/opt/tiger/open_verl/verl/utils/checkpoint/fsdp_checkpoint_manager.py", line 238, in save_checkpoint
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     model_state_dict = self.model.state_dict()
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 2256, in state_dict
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     hook(self, prefix, keep_vars)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return func(*args, **kwargs)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 777, in _pre_state_dict_hook
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     _pre_state_dict_hook_fn[fsdp_state._state_dict_type](
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 517, in _sharded_pre_state_dict_hook
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     _common_unshard_pre_state_dict_hook(
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 161, in _common_unshard_pre_state_dict_hook
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     _enter_unshard_params_ctx(
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_state_dict_utils.py", line 125, in _enter_unshard_params_ctx
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     fsdp_state._unshard_params_ctx[module].__enter__()
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     return next(self.gen)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_unshard_param_utils.py", line 199, in _unshard_fsdp_state_params
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     _unshard(state, handle, computation_stream, computation_stream)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_runtime_utils.py", line 290, in _unshard
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     ran_pre_unshard = handle.pre_unshard()
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_flat_param.py", line 1303, in pre_unshard
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     self._check_on_compute_device(self.flat_param)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_flat_param.py", line 2582, in _check_on_compute_device
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     _p_assert(
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])   File "/usr/local/lib/python3.12/dist-packages/torch/distributed/utils.py", line 159, in _p_assert
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f])     raise AssertionError(s)
(WorkerDict pid=42417, ip=[fdbd:dccd:cdd2:2207::30f]) AssertionError: Expects tensor to be on the compute device cuda:0, was on cpu
  ```
</details>

To fix this bug, this PR checks whether the model is located on the CPU
before saving the checkpoint and loads it onto the GPU if that is the
case. The same bug also exists in Megatron, which requires further
fixes.
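A minimal sketch of this kind of device guard, assuming a hypothetical `ensure_on_compute_device` helper (not verl's actual implementation):

```python
import torch
import torch.nn as nn

def ensure_on_compute_device(model: nn.Module, device: str = "cuda:0") -> nn.Module:
    # FSDP's state_dict pre-hooks assert that the flat parameters live on the
    # compute device, so move the model back if it was offloaded to CPU
    # before calling model.state_dict() in save_checkpoint.
    if any(p.device.type == "cpu" for p in model.parameters()):
        model = model.to(torch.device(device))
    return model
```

In the actual fix, this check would run at the start of the checkpoint save path, before the `state_dict()` call that triggers the unshard hooks.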

---------

Co-authored-by: weidongliang.339 <weidongliang.339@bytedance.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [perf] feat: Add MFU for Qwen3-VL dense (#4753)

### What does this PR do?
Add the `_estimate_qwen3_vit_flop` and `_estimate_qwen3_vl_flops` functions
to calculate the FLOPs of Qwen3-VL dense models, and update the test cases
to verify the calculation accuracy for Qwen3-VL models.
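As a rough illustration of how such estimators are typically structured, here is a generic dense-transformer sketch (not verl's actual `_estimate_qwen3_vl_flops`; the counting convention is an assumption):

```python
def estimate_dense_flops(num_layers: int, hidden: int, seq_len: int) -> int:
    """Rough forward FLOPs for a stack of dense transformer layers."""
    # Per token, per layer: QKV + output projections (4*h^2 weights) and a
    # 4x-expansion MLP (8*h^2 weights); each weight is one multiply-add.
    proj_macs = 4 * hidden * hidden + 8 * hidden * hidden
    # Attention score and weighted-sum terms scale with sequence length.
    attn_macs = 2 * seq_len * hidden
    # 2 FLOPs per multiply-add, summed over all tokens and layers.
    return 2 * num_layers * seq_len * (proj_macs + attn_macs)
```

A VL-specific estimator additionally accounts for the vision tower (ViT) FLOPs over the image patch tokens before summing with the language-model term.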

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test
The following is the output result of running the test file.
<img width="1271" height="152" alt="image"
src="https://github.com/user-attachments/assets/2a3d426c-bd32-4369-9c07-c8a17c60e98b"
/>

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

* [tool] fix: avoid nested ToolResponse in SandboxFusionTool (#4833)

### What does this PR do?

Fix an incorrect double-wrapping of `ToolResponse` in
`SandboxFusionTool.execute()`.

- `execute_code()` already returns a `ToolResponse`, but `execute()`
previously wrapped it again as `ToolResponse(text=result)`.
- Since `ToolResponse.text` expects `str | None`, the old behavior could
produce an invalid/nested response (or confusing stringified output).
- This PR makes `execute()` return the `ToolResponse` directly when
appropriate, and only wraps once when the worker returns a
non-`ToolResponse` result.

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here:
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

- pre-commit:
  - `pre-commit install`
  - `pre-commit run --all-files --show-diff-on-failure --color=always`
- Result: **Passed**
(ruff/format/mypy/autogen-trainer-cfg/docstring/license/compileall)

### API and Usage Example

No API changes. `SandboxFusionTool.execute()` still returns
`tuple[ToolResponse, float, dict]`.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

- `verl/tools/sandbox_fusion_tools.py`
- If the execution worker returns a `ToolResponse`, return it directly.
- Otherwise, convert the result to `str` (or `None`) and wrap once as
`ToolResponse(text=...)`.
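The wrap-once rule can be sketched as follows (with a minimal stand-in for verl's `ToolResponse` and a hypothetical `normalize_result` helper):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolResponse:
    # Minimal stand-in for verl's ToolResponse; `text` expects str | None.
    text: Optional[str] = None

def normalize_result(result) -> ToolResponse:
    # Return ToolResponse instances as-is; wrap anything else exactly once,
    # converting to str (or keeping None) so `text` stays well-typed.
    if isinstance(result, ToolResponse):
        return result
    return ToolResponse(text=None if result is None else str(result))
```

This avoids the old behavior where `ToolResponse(text=result)` could receive another `ToolResponse` and produce a nested or stringified response.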

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the Contribute Guide:
https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md
- [x] Apply pre-commit checks: `pre-commit install && pre-commit run
--all-files --show-diff-on-failure --color=always`
- [ ] Add / Update the documentation:
https://github.com/volcengine/verl/tree/main/docs
  - Not needed for this small bug fix.
- [ ] Add unit or end-to-end test(s) to the CI workflow:
https://github.com/volcengine/verl/tree/main/.github/workflows
- Not added. This change is a small correctness fix and is covered by
existing type/validation expectations; pre-commit checks passed.
- [ ] Once your PR is ready for CI, send a message in the `ci-request`
channel:
  - https://verl-project.slack.com/archives/C091TCESWB1
- If not accessible:
https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a
- [ ] Recipe submodule update (if applicable).
  - Not applicable for this PR.

Co-authored-by: winston <email@example.com>

* [vllm] fix: fix error in vllm patch for diff vllm version and add ci for moe with fp8 rollout (#4824)

### What does this PR do?

fix error in vllm patch for diff vllm version and add ci for moe with
fp8 rollout

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

---------

Co-authored-by: Xue Huang <xueh@nvidia.com>

* [algo] feat: add optimal token baseline and variance proxy (#4678)

# Optimal Token Baseline

## Main feature
- Register `AdvantageEstimator.OPTIMAL_TOKEN_BASELINE`.
- Extend the DP actor to emit `sum_pi_squared`, expose
`calculate_sum_pi_squared` and checkpointing toggles across configs, and
add a reusable `calculate_sum_pi_squared_from_logits` function.
- Introduce `compute_variance_proxy_metrics` to surface signal/total
power/noise diagnostics during training.
- Document the method in `docs/algo/otb.md` and ship an executable
example at `examples/otb_trainer/run_qwen2_5-7b.sh`.

## Usage
- Enable OTB by overriding config keys (OmegaConf overlay):
  ```yaml
  algorithm.adv_estimator: optimal_token_baseline
  actor_rollout_ref:
    actor:
      calculate_sum_pi_squared: true
      sum_pi_squared_checkpointing: false  # optional for long contexts
    rollout:
      n: 8
  ```
- Run the example script (adjust dataset paths & WandB project as
needed):
  ```bash
  bash examples/otb_trainer/run_qwen2_5-7b.sh 
  ```
- Monitor the new variance proxies in trainer logs:
`variance_proxy/proxy1_signal_strength`, `proxy2_total_power`,
`proxy3_pure_noise`.

## Key notes
- `actor.calculate_sum_pi_squared` requires
`actor_rollout_ref.model.use_fused_kernels=False`; fused kernels must
surface logits before OTB can run there.
- Group sampling is mandatory (`rollout.n > 1`); with single-rollout
batches OTB collapses to vanilla returns.
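The `sum_pi_squared` quantity can be computed stably from logits in log space; a minimal sketch (the function name and signature here are illustrative, not necessarily verl's `calculate_sum_pi_squared_from_logits`):

```python
import torch
import torch.nn.functional as F

def calc_sum_pi_squared_from_logits(logits: torch.Tensor) -> torch.Tensor:
    # sum_v pi(v)^2 over the vocab dimension, computed in log space:
    # sum_v exp(2 * log_pi_v) = exp(logsumexp(2 * log_pi)).
    log_pi = F.log_softmax(logits.float(), dim=-1)
    return torch.exp(torch.logsumexp(2.0 * log_pi, dim=-1))
```

The result lies in (1/V, 1]: it equals 1/V for a uniform distribution over a vocab of size V and approaches 1 as the distribution concentrates on one token.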

---

UPDATE(@tongyx361 ): `compute_sum_pi_squared` is changed to
`calculate_sum_pi_squared` for consistency with `calculate_entropy`.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Shawn/Yuxuan Tong <tongyuxuan361@gmail.com>

* [megatron] fix: Fix error in megatron workers (#4832)

### What does this PR do?
There is a bug in `megatron_workers.py`: line 745 is redundant and
overwrites the `estimated_flops` and `promised_flops` calculated on lines
742-744.
Also, the condition `"vl" in func.__name__` is brittle because it relies on
a naming convention; it could lead to silent MFU miscalculations if a new
vision-language model's estimation function is named differently. A more
robust approach is to call the function with the extra arguments and fall
back on `TypeError` if it does not accept them.
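The suggested `TypeError` fallback can be sketched as follows (the helper name is hypothetical):

```python
def call_flops_func(func, batch_seqlens, delta_time, **extra):
    # Prefer calling with model-specific extra args (e.g. image grid info for
    # VL estimators); fall back for estimators that don't accept them, instead
    # of dispatching on a naming convention like `"vl" in func.__name__`.
    try:
        return func(batch_seqlens, delta_time, **extra)
    except TypeError:
        return func(batch_seqlens, delta_time)
```

One caveat of this pattern: a `TypeError` raised inside the estimator itself would also trigger the fallback, so inspecting the signature via `inspect.signature` is an alternative when that matters.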

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

* [misc] feat: delete unnecessary base class in agent loop worker and vLLMHttpServer (#4838)

* [misc] feat: consolidate tensordict before dispatch (#4830)

* [training_utils] fix: json encode error in filelogger (#4811)

### What does this PR do?

- fix: JSON encode error in the file logger
  (error message: "TypeError: Object of type int32 is not JSON
serializable")
- ensure values are not Tensor objects when logging metrics
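One common way to fix this class of error is a custom JSON encoder that converts numpy scalars and tensors to plain Python types; a hedged sketch (not necessarily the FileLogger's actual implementation):

```python
import json

class MetricEncoder(json.JSONEncoder):
    # Convert numpy scalars/arrays and torch tensors to plain Python types
    # so json.dumps does not fail with
    # "TypeError: Object of type int32 is not JSON serializable".
    def default(self, o):
        if hasattr(o, "item") and callable(o.item):  # 0-d tensor / np scalar
            try:
                return o.item()
            except (ValueError, RuntimeError):
                pass  # multi-element tensor: fall through to tolist
        if hasattr(o, "tolist"):                     # arrays / tensors
            return o.tolist()
        return super().default(o)
```

Usage would be `json.dumps(metrics, cls=MetricEncoder)`; alternatively, metrics can be converted to native types before they ever reach the logger.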

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [x] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [x] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [x] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

Signed-off-by: zhuangqh <zhuangqhc@gmail.com>

* [ckpt] chore: skip saving hf_checkpoint during megatron+lora training & add a separate lora merge script (#4839)

### What does this PR do?

When using LoRA, MegatronCheckpointManager.save_checkpoint not only
saves the adapter but also saves the huggingface checkpoint, which is
unnecessary. This PR skips saving the huggingface checkpoint, and
provides a separate script for merging the adapter.

Relating to #4063 

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here:
https://github.com/volcengine/verl/pulls?q=is%3Apr+megatron+lora+save
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```bash
python ./scripts/megatron_merge_lora.py \
    --config-name='ppo_megatron_trainer' \
    actor_rollout_ref.model.lora.adapter_path=$APAPTER_PATH \
    ... # same config as your training script
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.

* [rollout, vllm] fix: accuracy issue in verl serve mode + vllm-ascend + dp + ep + tp scenarios (#4783)

### What does this PR do?
Fix the accuracy issue in verl + vllm-ascend dp+ep+tp+server scenarios.
Issue: https://github.com/vllm-project/vllm-ascend/issues/5544

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test
Tested GRPO on local NPU host:
<img width="1047" height="117"
alt="58274edd-d0d3-454c-8e39-3188f6f19e71"
src="https://github.com/user-attachments/assets/dee7bf2f-6faf-4f44-a8b3-64670d5b1e10"
/>

### Design & Code Changes
Root cause analysis: currently, the version of Verl + Ascend NPU +
vllm-ascend is
[v0.11.0](https://verl.readthedocs.io/en/latest/ascend_tutorial/ascend_quick_start.html).
In the vllm-ascend v0.11.0 code, the all2all backend
(flashinfer_all2allv) is selected and updated to the vllm worker
environment. However, verl's ExternalZeroMQDistributedExecutor does not
pass this environment to the vllm worker processes like vllm's
[RayDistributedExecutor](https://github.com/vllm-project/vllm/blob/0d4044edd85de30d7d4558aeea4d1e95c7c556d6/vllm/v1/executor/ray_executor.py#L337)
backend does. As a result, vllm-ascend selects the wrong all2all backend,
which causes the precision issue.

Implementation:
1. In vLLMAsyncRollout, when initializing the vllm worker in an NPU
scenario, add the environment variables required by vllm-ascend.
2. Add a vllm engine environment-variable setting in rollout.yaml so users
can configure it.
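A sketch of forwarding driver-side environment variables to the worker processes (the variable name `VLLM_ASCEND_ALL2ALL_BACKEND` and the `collect_worker_env` helper are hypothetical, for illustration only):

```python
import os
from typing import Optional

# Backend-selection variables that vllm-ascend sets in the driver process
# and that must also be visible to each remote vllm worker (illustrative).
ASCEND_ENV_KEYS = ("VLLM_ASCEND_ALL2ALL_BACKEND",)

def collect_worker_env(extra_env: Optional[dict] = None) -> dict:
    # Gather the driver-side variables so the executor can forward them when
    # spawning workers, mirroring what vllm's RayDistributedExecutor does.
    env = {k: os.environ[k] for k in ASCEND_ENV_KEYS if k in os.environ}
    if extra_env:
        env.update(extra_env)
    return env
```

The returned dict would then be merged into each worker's environment (and `extra_env` corresponds to the user-facing setting added in rollout.yaml).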

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

Co-authored-by: FightingZhen

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>

* [fsdp] feat: add validate process on trainer node when use_trainer_do_validate=True (#4683)

### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference related GitHub issues and PRs that help with the
review.

Use the trainer node to run the validation process in fully-async mode.
This saves validation compute time and reduces the `perf/time_of_step`
peak.
- add a new `use_trainer_do_validate` option to the fully-async
`async_training` config to decide whether the trainer node runs validation
- `use_trainer_do_validate` defaults to false
- it improves validation performance; for example, in
`dapo_7b_math_fsdp2_8_8.sh` it gives roughly a 1x speedup

<img width="1440" height="608" alt="image"
src="https://github.com/user-attachments/assets/436e481e-4f51-4e8e-ad08-b038b3f0e89d"
/>
<img width="1030" height="762" alt="image"
src="https://github.com/user-attachments/assets/ed8e3237-d37d-4eff-b944-fb81ea63f87c"
/>

- optimized `process_validation_metrics()` in the `_validate()` process;
with an input dataset of length 1444, its latency drops from 150+s to
about 40s

<img width="2630" height="448" alt="image"
src="https://github.com/user-attachments/assets/b6fb50bc-5856-49c1-91dc-f845e9c410b4"
/>
<img width="2504" height="518" alt="image"
src="https://github.com/user-attachments/assets/b3b5f238-0c5e-4c63-9683-83f34d5a46fd"
/>


### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

- on test scripts such as `dapo_7b_math_fsdp2_8_8.sh`, add
`async_training.use_trainer_do_validate=True` to the command line
- results of this feature on the Qwen2.5-Math-7B model:
- the baseline script is `dapo_7b_math_fsdp2_8_8.sh`
- the optimized script is `dapo_7b_math_fsdp2_8_8.sh` plus
`async_training.use_trainer_do_validate=True`
- accuracy and performance are shown below:
<img width="1650" height="702" alt="image"
src="https://github.com/user-attachments/assets/3419d7bb-a64c-4fe9-b776-3312925f51ab"
/>
<img width="1580" height="522" alt="image"
src="https://github.com/user-attachments/assets/2c3a7e24-7421-4f12-8527-7b997f9c3b89"
/>

- green: optimized case (`async_training.use_trainer_do_validate=True` )
- gray: baseline case (`async_training.use_trainer_do_validate=False` )


### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```bash
# Enable trainer-node validation in fully-async mode (CLI override)
async_training.use_trainer_do_validate=True \
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [ ] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: Shangwei-Li <lishangwei@mail.ustc.edu.cn>

* [misc] fix: recipe submodule accidentally been removed (#4843)

### What does this PR do?

As title.

* [worker, training_utils] fix: Engine Metric Aggregation (#4778)

### What does this PR do?

Because some metrics are scaled by global_bsz/global_tokens in
`workers.utils.losses.ppo_loss`, the mean in `reduce_metrics` adds an
extra scaling of the metric by the number of gradient accumulation steps
(see examples in Test sec):

https://github.com/volcengine/verl/blob/c191c5eb5c9499dca6666a52bc5f360bfd4bbf4f/verl/utils/metric/utils.py#L53

Aggregation of the `loss` metric handles this by taking a sum:
https://github.com/volcengine/verl/blob/c191c5eb5c9499dca6666a52bc5f360bfd4bbf4f/verl/workers/engine_workers.py#L143-L144
Depending on how metrics are handled in `workers.utils.losses.ppo_loss`,
it may not be correct to aggregate all of them using sum (as in #4785).
For example, `actor/pg_loss` and `actor/kl_loss` are scaled by global
batch sizes/ token counts, and should be aggregated using sum, while the
`pg_metrics` `pg_clipfrac`, `ppo_kl`, and `pg_clipfrac_lower` are scaled
by local token counts and should be aggregated using mean.

This PR introduces a metric management class to allow flexibility in
deciding the aggregation type on a per-metric basis.

### Test

This test demonstrates the scaling of metrics with the number of
gradient accumulation steps, as well as how this is resolved on this
branch. The command for running is below.

<img width="980" height="638" alt="image"
src="https://github.com/user-attachments/assets/e65ab291-3125-4df4-a0e0-3473bf64cb2a"
/>

```bash
gsm8k_train_path=$DATA_DIR/gsm8k/train.parquet
gsm8k_test_path=$DATA_DIR/gsm8k/test.parquet

train_files="['$gsm8k_train_path']"
test_files="['$gsm8k_test_path']"

ppo_micro_batch_size_per_gpu=2
ppo_micro_batch_size_per_gpu=8

branch=main
branch=fixEngineMetrics

python3 -m verl.trainer.main_ppo \
    algorithm.adv_estimator=grpo \
    data.dataloader_num_workers=0 \
    data.return_full_prompt=True \
    data.train_files="$train_files" \
    data.val_files="$test_files" \
    data.train_batch_size=8 \
    data.max_prompt_length=512 \
    data.max_response_length=1024 \
    data.filter_overlong_prompts=True \
    data.truncation='error' \
    actor_rollout_ref.model.path=Qwen/Qwen3-0.6B \
    actor_rollout_ref.actor.optim.lr=1e-6 \
    actor_rollout_ref.model.use_remove_padding=True \
    actor_rollout_ref.actor.ppo_mini_batch_size=8 \
    actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=$ppo_micro_batch_size_per_gpu \
    actor_rollout_ref.actor.use_kl_loss=True \
    actor_rollout_ref.actor.fsdp_config.param_offload=True \
    actor_rollout_ref.actor.fsdp_config.optimizer_offload=True \
    actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
    actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=8 \
    actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=8 \
    actor_rollout_ref.rollout.name=vllm \
    actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
    actor_rollout_ref.rollout.n=5 \
    trainer.logger='["console","wandb"]' \
    trainer.project_name='fixEngineMetrics' \
    trainer.experiment_name="$branch/ppo_micro_batch_size_per_gpu$ppo_micro_batch_size_per_gpu" \
    trainer.n_gpus_per_node=2 \
    trainer.nnodes=1 \
    trainer.save_freq=400 \
    trainer.test_freq=40 \
    trainer.use_legacy_worker_impl=disable \
    trainer.total_epochs=2 \
    trainer.total_training_steps=10 \
    trainer.resume_mode=disable \
    actor_rollout_ref.actor.use_torch_compile=False \
    actor_rollout_ref.actor.fsdp_config.use_torch_compile=False \
    trainer.val_before_train=False \
    actor_rollout_ref.rollout.enforce_eager=True \
    actor_rollout_ref.ref.fsdp_config.use_torch_compile=False
```

### Design & Code Changes

Adds a `Metric` class which tracks metric values and aggregation type.
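A minimal sketch of such a class, assuming a per-metric aggregation tag (the actual class in the PR may carry more state and different names):

```python
from dataclasses import dataclass, field
from enum import Enum

class Agg(Enum):
    MEAN = "mean"
    SUM = "sum"

@dataclass
class Metric:
    """Tracks values for one metric plus how they should be reduced.

    Metrics already scaled by the global batch/token count (e.g. pg_loss,
    kl_loss) should be summed across micro-batches; metrics scaled by
    local token counts (e.g. pg_clipfrac, ppo_kl) should be averaged.
    """
    agg: Agg = Agg.MEAN
    values: list = field(default_factory=list)

    def append(self, v: float) -> None:
        self.values.append(v)

    def reduce(self) -> float:
        total = sum(self.values)
        return total if self.agg is Agg.SUM else total / len(self.values)
```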

* [rollout] fix: configurable agent loop + multimodal data for fully-async (#4842)

## Description

* **`verl/experimental/fully_async_policy/agent_loop/agent_loop.py`**

* Use `config.agent.default_agent_loop` as the default `agent_name` when
`agent_name` is not present in `batch.non_tensor_batch`.
* Pass `dataset_cls=self.dataset_cls` and
`dataset_config=self.config.data` into `hydra.utils.instantiate(...)`
when creating an agent loop instance.

*
**`verl/experimental/fully_async_policy/agent_loop/partial_tool_agent_loop.py`**

* Extract `video_data` from `multi_modal_data` and include `video_data`
in the created `AgentData` instance (in addition to existing
`image_data`).

* **`verl/experimental/fully_async_policy/detach_utils.py`**

* Stop popping original batch fields in
`prepare_single_generation_data`.
* Set `agent_name` to `async_partial_tool_agent` or
`partial_single_turn_agent` depending on
`config.actor_rollout_ref.rollout.multi_turn.enable`.
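The selection described above can be sketched as a single helper. This simplifies the PR's two code paths into one function for illustration; only the agent names and the `multi_turn.enable` config key come from the PR text, the rest is assumed plumbing:

```python
def select_agent_name(batch_meta: dict, config) -> str:
    """Pick the agent loop for a generation request (simplified sketch)."""
    # An explicit agent_name attached to the batch wins.
    if "agent_name" in batch_meta:
        return batch_meta["agent_name"]
    # Otherwise choose based on whether multi-turn rollout is enabled.
    if config.actor_rollout_ref.rollout.multi_turn.enable:
        return "async_partial_tool_agent"
    return "partial_single_turn_agent"
```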

## Testing

* Verified the fully async training entry can run successfully on 4 GPU
server setup (multi-turn enabled, partial rollout enabled, vLLM async
mode).


## Related

* Fixes and extends the scope of:
[4834](https://github.com/volcengine/verl/issues/4834)

* [ci] test: switch the vlm rl test case in the npu environment to use the model engine (#4844)

* [ckpt] fix: Megatron save ckpt after validation (#4841)

### What does this PR do?

This PR fixes a bug in the `save_checkpoint` function for
MegatronEngine. https://github.com/volcengine/verl/pull/4799 is a
similar PR, which modifies FSDPEngine.

In the original logic, if the model engine is used
(`use_legacy_worker_impl=disable`), the `wake_up` function in
`verl/workers/engine_workers.py` will be invoked during the rollout
phase of each step, which will offload the model to CPU.

Under normal circumstances, the `compute_log_prob` function called
during the training phase can load the model back to GPU. However, the
training process is not executed during the validation phase, leaving
the model on the CPU. If a checkpoint is saved immediately after
validation, it will trigger the following error: `AssertionError:
Expects tensor to be on the compute device cuda:0, was on cpu.`

To fix this bug, this PR checks whether the model is located on the CPU
before saving the checkpoint and loads it onto the GPU if that is the
case.
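The shape of the fix looks roughly like this. A standalone sketch only: the real change lives inside MegatronEngine's `save_checkpoint` path, not in a helper function like the one below:

```python
import torch

def save_checkpoint_safely(model: torch.nn.Module, path: str) -> None:
    """Ensure parameters are on the compute device before serializing.

    After a validation-only step in offloaded mode the weights may still
    sit on CPU, which trips the "Expects tensor to be on the compute
    device" assertion; moving them back first avoids that.
    """
    if any(p.device.type == "cpu" for p in model.parameters()):
        model.to("cuda" if torch.cuda.is_available() else "cpu")
    torch.save(model.state_dict(), path)
```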

---------

Co-authored-by: weidongliang.339 <weidongliang.339@bytedance.com>

* [megatron] feat: Share actor and ref in LoRA (#4673)

For `compute_ref_log_prob`, we can do that by temporarily disabling the
LoRA layers for the forward pass, since the base weights are frozen and
only the LoRA layers are trained.

This has already been supported in FSDP LoRA.
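The idea can be sketched as a context manager that switches LoRA contributions off and restores them afterwards. The `lora_enabled` flag is an assumption of this sketch; Megatron and peft expose this toggle differently (e.g. peft's `disable_adapter()`):

```python
from contextlib import contextmanager

@contextmanager
def lora_disabled(layers):
    """Temporarily disable LoRA so a forward pass yields the frozen base
    model's outputs (the reference policy) without keeping a second copy
    of the weights. `layers` is any iterable of objects exposing a
    boolean `lora_enabled` flag (an assumption of this sketch)."""
    prev = [layer.lora_enabled for layer in layers]
    try:
        for layer in layers:
            layer.lora_enabled = False
        yield
    finally:
        # Restore each layer's original state even if the forward raised.
        for layer, state in zip(layers, prev):
            layer.lora_enabled = state
```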

### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference …
vyomakesh0728 added a commit to vyomakesh0728/verl that referenced this pull request Jan 22, 2026
…_validate=True (volcengine#4683)

sophiayyya pushed a commit to sophiayyya/verl that referenced this pull request Jan 25, 2026
…_validate=True (volcengine#4683)
