
Commit d36edc9

update docs grpo vllm (#3831)

1 parent 6238a32

File tree: 12 files changed (+21, -4 lines)


README.md

Lines changed: 1 addition & 1 deletion
@@ -125,7 +125,7 @@ Running Environment:
 | peft | >=0.11,<0.16 | ||
 | trl | >=0.13,<0.17 | 0.16 |RLHF|
 | deepspeed | >=0.14 | 0.14.5 | Training |
-| vllm | >=0.5.1 | 0.8.3 | Inference/Deployment/Evaluation |
+| vllm | >=0.5.1 | 0.7.3/0.8.3 | Inference/Deployment/Evaluation |
 | lmdeploy | >=0.5 | 0.7.2.post1 | Inference/Deployment/Evaluation |
 | evalscope | >=0.11 | | Evaluation |

README_CN.md

Lines changed: 1 addition & 1 deletion
@@ -120,7 +120,7 @@ pip install -e .
 | peft | >=0.11,<0.16 | ||
 | trl | >=0.13,<0.17 | 0.16 |RLHF|
 | deepspeed | >=0.14 | 0.14.5 |Training|
-| vllm | >=0.5.1 | 0.8.3 |Inference/Deployment/Evaluation|
+| vllm | >=0.5.1 | 0.7.3/0.8.3 |Inference/Deployment/Evaluation|
 | lmdeploy | >=0.5 | 0.7.2.post1 |Inference/Deployment/Evaluation|
 | evalscope | >=0.11 | |Evaluation|

docs/source/GetStarted/SWIFT安装.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu2
 | peft | >=0.11,<0.16 | ||
 | trl | >=0.13,<0.17 | 0.16 |RLHF|
 | deepspeed | >=0.14 | 0.14.5 |Training|
-| vllm | >=0.5.1 | 0.8.3 |Inference/Deployment/Evaluation|
+| vllm | >=0.5.1 | 0.7.3/0.8.3 |Inference/Deployment/Evaluation|
 | lmdeploy | >=0.5 | 0.7.2.post1 |Inference/Deployment/Evaluation|
 | evalscope | >=0.11 | |Evaluation|

docs/source/Instruction/GRPO.md

Lines changed: 1 addition & 0 deletions
@@ -133,6 +133,7 @@ A conversation between User and Assistant. The user asks a question, and the Ass
 - move_model_batches: When moving model parameters to fast inference frameworks such as vLLM/LMDeploy, the number of batches to split the layers into. Defaults to None, meaning the whole model is not split; otherwise it is split into move_model_batches + 1 (non-layer parameters) + 1 (multi-modal parameters) batches.
 - offload_optimizer: Whether to offload optimizer parameters during vLLM/LMDeploy inference. Defaults to False.
 - offload_model: Whether to offload the model itself during vLLM/LMDeploy inference. Defaults to False.
+  - Note: If this parameter is set to True and grad_norm stays 0 during training, please install `vllm==0.7.3`.
 - gc_collect_after_offload: Whether to run garbage collection (Python GC and GPU GC) after offloading finishes. Defaults to False.
 - multi_turn_func: Multi-turn GRPO parameter; pass the corresponding plugin name and add the implementation in plugin/multi_turn.py.
 - mini_batch_size: Used to further split the per-device batch size (per_device_batch) into smaller sub-batches. For the split to be valid, per_device_batch must be divisible by mini_batch_size.

docs/source/Instruction/命令行参数.md

Lines changed: 1 addition & 0 deletions
@@ -413,6 +413,7 @@ Reward model parameters will be used in PPO and GRPO.
 - move_model_batches: When moving model parameters to fast inference frameworks such as vLLM/LMDeploy, the number of batches to split the layers into. Defaults to None, meaning the whole model is not split; otherwise it is split into move_model_batches + 1 (non-layer parameters) + 1 (multi-modal parameters) batches.
 - offload_optimizer: Whether to offload optimizer parameters during vLLM/LMDeploy inference. Defaults to False.
 - offload_model: Whether to offload the model itself during vLLM/LMDeploy inference. Defaults to False.
+  - Note: If this parameter is set to True and grad_norm stays 0 during training, please install `vllm==0.7.3`.
 - gc_collect_after_offload: Whether to run garbage collection (Python GC and GPU GC) after offloading finishes. Defaults to False.
 - multi_turn_func: Multi-turn GRPO parameter; pass the corresponding plugin name and add the implementation in plugin/multi_turn.py.
 - mini_batch_size: Used to further split the per-device batch size (per_device_batch) into smaller sub-batches. For the split to be valid, per_device_train_batch_size must be divisible by mini_batch_size.

docs/source_en/GetStarted/SWIFT-installation.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ More images can be found [here](https://modelscope.cn/docs/intro/environment-set
 | peft | >=0.11,<0.16 | | |
 | trl | >=0.13,<0.17 | 0.16 | RLHF |
 | deepspeed | >=0.14 | 0.14.5 | Training |
-| vllm | >=0.5.1 | 0.8.3 | Inference/Deployment/Evaluation |
+| vllm | >=0.5.1 | 0.7.3/0.8.3 | Inference/Deployment/Evaluation |
 | lmdeploy | >=0.5 | 0.7.2.post1 | Inference/Deployment/Evaluation |
 | evalscope | >=0.11 | | Evaluation |

docs/source_en/Instruction/Command-line-parameters.md

Lines changed: 1 addition & 0 deletions
@@ -424,6 +424,7 @@ The meanings of the following parameters can be referenced [here](https://huggin
 - move_model_batches: When moving model parameters to fast inference frameworks such as vLLM/LMDeploy, determines how many batches to divide the layers into. The default is `None`, which means the entire model is not split. Otherwise, the model is split into `move_model_batches + 1` (non-layer parameters) + `1` (multi-modal component parameters) batches.
 - offload_optimizer: Whether to offload optimizer parameters during inference with vLLM/LMDeploy. The default is `False`.
 - offload_model: Whether to offload the model itself during inference with vLLM/LMDeploy. The default is `False`.
+  - Note: If this parameter is set to `True` and the grad_norm remains zero during training, please install `vllm==0.7.3`.
 - gc_collect_after_offload: Whether to perform garbage collection (both Python GC and GPU GC) after offloading. The default is `False`.
 - multi_turn_func: The name of the multi-turn GRPO plugin. Add your multi-turn implementation in plugin/multi_turn.py.
 - mini_batch_size: Used to further split the batch size on each device (per_device_batch) into smaller sub-batches. To ensure the split is valid, per_device_train_batch_size needs to be divisible by mini_batch_size.
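The divisibility constraint on mini_batch_size can be sketched as a small shell check; the values 8 and 2 below are hypothetical examples, not defaults from the docs:

```shell
# Hypothetical values for illustration: a per-device batch of 8 split into
# mini-batches of 2. The split is valid only when per_device_train_batch_size
# is divisible by mini_batch_size, yielding (batch / mini_batch) sub-batches.
per_device_train_batch_size=8
mini_batch_size=2
if [ $((per_device_train_batch_size % mini_batch_size)) -ne 0 ]; then
    echo "invalid: per_device_train_batch_size not divisible by mini_batch_size" >&2
    exit 1
fi
echo "sub-batches per device: $((per_device_train_batch_size / mini_batch_size))"
```

With these example values the check passes and each optimizer step processes 4 sub-batches per device.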

docs/source_en/Instruction/GRPO.md

Lines changed: 1 addition & 0 deletions
@@ -136,6 +136,7 @@ Arguments
 - move_model_batches: When moving model parameters to fast inference frameworks such as vLLM/LMDeploy, determines how many batches to divide the layers into. The default is `None`, which means the entire model is not split. Otherwise, the model is split into `move_model_batches + 1` (non-layer parameters) + `1` (multi-modal component parameters) batches.
 - offload_optimizer: Whether to offload optimizer parameters during inference with vLLM/LMDeploy. The default is `False`.
 - offload_model: Whether to offload the model itself during inference with vLLM/LMDeploy. The default is `False`.
+  - Note: If this parameter is set to `True` and the grad_norm remains zero during training, please install `vllm==0.7.3`.
 - gc_collect_after_offload: Whether to perform garbage collection (both Python GC and GPU GC) after offloading. The default is `False`.
 - multi_turn_func: The name of the multi-turn GRPO plugin. Add your multi-turn implementation in plugin/multi_turn.py.
 - mini_batch_size: Used to further split the batch size on each device (per_device_batch) into smaller sub-batches. To ensure the split is valid, per_device_train_batch_size needs to be divisible by mini_batch_size.
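The batch count described for move_model_batches above can be illustrated with a little shell arithmetic; the value 4 is an arbitrary example, not a recommended setting:

```shell
# With move_model_batches=4, parameters move to the inference framework in
# 4 batches of layers, plus 1 batch of non-layer parameters, plus 1 batch
# of multi-modal component parameters: 6 batches in total.
move_model_batches=4
total_batches=$((move_model_batches + 1 + 1))
echo "total parameter batches: $total_batches"
```

Larger values trade peak memory during the weight transfer against more transfer steps.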

examples/train/grpo/lora_qwenvl72b.sh

Lines changed: 3 additions & 0 deletions
@@ -1,6 +1,9 @@
 # pip install math_verify # reward function
 # GPU memory: 8 * 80GiB
 
+# Note: If the grad_norm remains zero during training,
+# please remove the `--offload_model true` parameter, or use `vllm==0.7.3`.
+
 MAX_PIXELS=602112 \
 WANDB_API_KEY=xxx \
 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \

examples/train/grpo/train_72b_4gpu.sh

Lines changed: 4 additions & 0 deletions
@@ -1,4 +1,8 @@
 # 4*80G GPU
+
+# Note: If the grad_norm remains zero during training,
+# please remove the `--offload_model true` parameter, or use `vllm==0.7.3`.
+
 CUDA_VISIBLE_DEVICES=0,1,2,3 \
 NPROC_PER_NODE=4 \
 swift rlhf \
