
Commit 1c5116d

Merge remote-tracking branch 'origin/main' into fix-mega
2 parents: 6362eb0 + c14132c

File tree

25 files changed (+169, −99 lines)


.gitignore

Lines changed: 4 additions & 0 deletions
```diff
@@ -130,6 +130,7 @@ result.mp4
 output/
 outputs/
 wandb/
+swanlog/
 *.out
 benchmarks/
 eval_output/
@@ -142,6 +143,9 @@ result/
 images
 /custom/
 megatron_output/
+/*-mcore/
+/*-hf/
+/*_cached_dataset/
 
 # Pytorch
 *.pth
```
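The four new ignore rules above are simple glob patterns. A quick way to sanity-check which top-level names they cover — an approximation only, since real gitignore semantics add root anchoring via a leading `/` and directory-only matching via a trailing `/`, and the example directory names below are made up:

```python
from fnmatch import fnmatch

# Rough stand-ins for the new .gitignore entries; fnmatch only
# approximates gitignore matching, and the candidate names are invented.
patterns = ["swanlog", "*-mcore", "*-hf", "*_cached_dataset"]
candidates = [
    "swanlog",               # new log directory
    "Qwen2.5-7B-mcore",      # exported mcore weights
    "Qwen2.5-7B-hf",         # exported HF weights
    "train_cached_dataset",  # cached packed dataset
    "checkpoint-100",        # should NOT be matched by the new rules
]
ignored = {name: any(fnmatch(name, pat) for pat in patterns) for name in candidates}
```

For an authoritative answer against a real working tree, `git check-ignore -v <path>` reports which rule (if any) matches.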

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -136,7 +136,7 @@ Running Environment:
 | modelscope | >=1.23 | | |
 | peft | >=0.11,<0.19 | | |
 | flash_attn | | 2.8.3/3.0.0b1 | |
-| trl | >=0.15,<0.25 | 0.23.1 | RLHF |
+| trl | >=0.15,<0.25 | 0.24.0 | RLHF |
 | deepspeed | >=0.14 | 0.17.6 | Training |
 | vllm | >=0.5.1 | 0.11.0 | Inference/Deployment |
 | sglang | >=0.4.6 | 0.5.5.post3 | Inference/Deployment |
```

README_CN.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -131,7 +131,7 @@ pip install -e .
 | modelscope | >=1.23 | | |
 | peft | >=0.11,<0.19 | | |
 | flash_attn | | 2.8.3/3.0.0b1 | |
-| trl | >=0.15,<0.25 | 0.23.1 | RLHF |
+| trl | >=0.15,<0.25 | 0.24.0 | RLHF |
 | deepspeed | >=0.14 | 0.17.6 | Training |
 | vllm | >=0.5.1 | 0.11.0 | Inference/Deployment |
 | sglang | >=0.4.6 | 0.5.5.post3 | Inference/Deployment |
```

docs/source/GetStarted/SWIFT-installation.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -33,10 +33,10 @@ pip install -e .
 
 Docker images can be found [here](https://github.com/modelscope/modelscope/blob/build_swift_image/docker/build_image.py#L347).
 ```
-# swift3.11.1
-modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
+# swift3.11.3
+modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
 
 # swift3.10.3
 modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.8.1-py311-torch2.8.0-vllm0.11.0-modelscope1.31.0-swift3.10.3
@@ -111,7 +111,7 @@ modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu2
 | modelscope | >=1.23 | | |
 | peft | >=0.11,<0.19 | | |
 | flash_attn | | 2.8.3/3.0.0b1 | |
-| trl | >=0.15,<0.25 | 0.23.1 | RLHF |
+| trl | >=0.15,<0.25 | 0.24.0 | RLHF |
 | deepspeed | >=0.14 | 0.17.6 | Training |
 | vllm | >=0.5.1 | 0.11.0 | Inference/Deployment |
 | sglang | >=0.4.6 | 0.5.5.post3 | Inference/Deployment |
````

docs/source/Instruction/Command-line-parameters.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -466,7 +466,7 @@ Vera uses the three parameters `target_modules`, `target_regex`, and `modules_to_save`,
 - add_version: Adds an extra `'<version>-<timestamp>'` directory under output_dir to prevent weight overwriting. Default is True.
 - check_model: Checks local model files for corruption or modification and gives a prompt. Default is True. **If in an offline environment, please set this to False.**
 - 🔥create_checkpoint_symlink: Creates additional checkpoint symlinks to facilitate writing automated training scripts. The symlink paths for best_model and last_model are f'{output_dir}/best' and f'{output_dir}/last' respectively.
-- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
+- 🔥packing: Use the `padding_free` method to pack data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
 - "ms-swift>=3.12" has newly added support for packing in embedding/reranker/seq_cls tasks.
 - packing_length: The length to use for packing. Defaults to None, in which case it is set to max_length.
 - packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing.) Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
```

docs/source/Megatron-SWIFT/Command-line-parameters.md

Lines changed: 5 additions & 4 deletions
```diff
@@ -139,9 +139,10 @@
 - log_validation_ppl_to_tensorboard: Writes validation perplexity to TensorBoard. Default is True.
 - log_memory_to_tensorboard: Writes memory logs to TensorBoard. Default is True.
 - logging_level: Logging level. Default is None.
-- wandb_project: The name of the wandb project. Defaults to '', which means ignoring wandb.
-- wandb_exp_name: The name of the wandb experiment. Defaults to ''.
-- wandb_save_dir: The local path to save wandb results. Defaults to ''.
+- report_to: (ms-swift>=3.12) The logging backend to enable. Defaults to None. Options are 'wandb' and 'swanlab'. (TensorBoard is always started.) Login can be done using the `WANDB_API_KEY` or `SWANLAB_API_KEY` environment variables.
+- wandb_project: The wandb/swanlab project name, depending on `report_to`. Defaults to 'megatron-swift'.
+- wandb_exp_name: The wandb/swanlab experiment name. Defaults to the value of `--save`.
+- wandb_save_dir: The local path to save wandb/swanlab results. Defaults to None, meaning they are stored in `f'{args.save}/wandb'` or `f'{args.save}/swanlab'`.
 
 **Evaluation Parameters**:
 - 🔥eval_iters: Number of evaluation iterations. Default is `-1`; set an appropriate value based on the size of the validation dataset. **If the validation set contains fewer samples than global_batch_size, evaluation is skipped.** When using a streaming dataset, this value must be set manually.
@@ -299,7 +300,7 @@ Megatron training parameters are inherited from the Megatron parameters and basic parameters (**shared with ms-swift
 - Note: The "learning rate" printed in the logs is the learning rate of the LLM.
 - aligner_lr: When training multimodal models, this parameter specifies the learning rate of the aligner. Default is None, equal to learning_rate.
 - gradient_checkpointing_kwargs: Arguments passed to `torch.utils.checkpoint`. For example: set `--gradient_checkpointing_kwargs '{"use_reentrant": false}'`. Defaults to None. This parameter only takes effect for `vit_gradient_checkpointing`.
-- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
+- 🔥packing: Use the `padding_free` method to pack data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
 - packing_length: The length to use for packing. Defaults to None, in which case it is set to max_length.
 - packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing.) Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
 - streaming: Stream data loading and processing, default is False. (The shuffling of streaming datasets is not thorough, which may lead to severe loss fluctuations.)
```

docs/source/Megatron-SWIFT/Quick-start.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -53,9 +53,9 @@ MAX_JOBS=8 pip install "flash-attn==2.8.3" --no-build-isolation
 
 Alternatively, you can use a Docker image (historical images are listed [here](../GetStarted/SWIFT-installation.md#镜像)):
 ```
-modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
+modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
 ```
 
 Recommended running environment:
````

docs/source_en/GetStarted/SWIFT-installation.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -33,10 +33,10 @@ pip install -e .
 
 You can check Docker [here](https://github.com/modelscope/modelscope/blob/build_swift_image/docker/build_image.py#L347).
 ```
-# swift3.11.1
-modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
-modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.1
+# swift3.11.3
+modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
+modelscope-registry.us-west-1.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.9.1-py311-torch2.8.0-vllm0.11.0-modelscope1.32.0-swift3.11.3
 
 # swift3.10.3
 modelscope-registry.cn-hangzhou.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.8.1-py311-torch2.8.0-vllm0.11.0-modelscope1.31.0-swift3.10.3
@@ -111,7 +111,7 @@ More images can be found [here](https://modelscope.cn/docs/intro/environment-set
 | modelscope | >=1.23 | | |
 | peft | >=0.11,<0.19 | | |
 | flash_attn | | 2.8.3/3.0.0b1 | |
-| trl | >=0.15,<0.25 | 0.23.1 | RLHF |
+| trl | >=0.15,<0.25 | 0.24.0 | RLHF |
 | deepspeed | >=0.14 | 0.17.6 | Training |
 | vllm | >=0.5.1 | 0.11.0 | Inference/Deployment |
 | sglang | >=0.4.6 | 0.5.5.post3 | Inference/Deployment |
````

docs/source_en/Instruction/Command-line-parameters.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -476,7 +476,7 @@ Training arguments include the [base arguments](#base-arguments), [Seq2SeqTraine
 - add_version: Add directory to output_dir with `'<version>-<timestamp>'` to prevent weight overwrite, default is True.
 - check_model: Check local model files for corruption or modification and give a prompt, default is True. **If in an offline environment, please set to False.**
 - 🔥create_checkpoint_symlink: Creates additional checkpoint symlinks to facilitate writing automated training scripts. The symlink paths for `best_model` and `last_model` are `f'{output_dir}/best'` and `f'{output_dir}/last'` respectively.
-- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
+- 🔥packing: Use the `padding_free` method to pack data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attn_impl flash_attn`, it ensures that different sequences within packed samples remain independent and invisible to each other. This parameter defaults to `False` and currently supports CPT/SFT/DPO/KTO/GKD. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
 - "ms-swift>=3.12" has newly added support for packing in embedding/reranker/seq_cls tasks.
 - packing_length: the length to use for packing. Defaults to None, in which case it is set to max_length.
 - packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing). Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
```
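The packing behavior described above — whole sequences grouped into approximately uniform-length samples, never split, with the sample count shrinking as a result — can be sketched as a toy first-fit-decreasing bin packer. This is an illustration under those stated properties only, not ms-swift's actual implementation (which also handles load balancing across ranks):

```python
# Toy first-fit-decreasing sketch of sample packing: whole sequences are
# grouped into bins of at most `packing_length` tokens and never split.
def pack(lengths, packing_length):
    bins = []  # each entry: [remaining_capacity, [sequence lengths]]
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if b[0] >= n:  # sequence fits into an existing bin
                b[0] -= n
                b[1].append(n)
                break
        else:  # no bin has room; open a new one
            bins.append([packing_length - n, [n]])
    return [b[1] for b in bins]

# Six samples collapse into three packed samples of <= 1024 tokens each:
packed = pack([900, 700, 400, 300, 200, 100], packing_length=1024)
# -> [[900, 100], [700, 300], [400, 200]]
```

Because six samples become three here, effective batch composition changes — which is why the note above advises adjusting gradient accumulation steps and learning rate.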

docs/source_en/Megatron-SWIFT/Command-line-parameters.md

Lines changed: 5 additions & 4 deletions
```diff
@@ -145,9 +145,10 @@ For guidance on selecting parallelization strategies, please refer to the [Train
 - log_validation_ppl_to_tensorboard: Writes validation perplexity to TensorBoard. Default is True.
 - log_memory_to_tensorboard: Writes memory logs to TensorBoard. Default is True.
 - logging_level: Logging level. Default is None.
-- wandb_project: The name of the wandb project. Defaults to '', which means ignoring wandb.
-- wandb_exp_name: The name of the wandb experiment. Defaults to ''.
-- wandb_save_dir: The local path to save wandb results. Defaults to ''.
+- report_to: (ms-swift>=3.12) The logging backend to enable. Defaults to None. Options are 'wandb' and 'swanlab'. (TensorBoard will always be started). Login can be done using the `WANDB_API_KEY` or `SWANLAB_API_KEY` environment variables.
+- wandb_project: The wandb/swanlab project name, depending on `report_to`. Defaults to 'megatron-swift'.
+- wandb_exp_name: The wandb/swanlab experiment name. Defaults to the value of `--save`.
+- wandb_save_dir: The path to save wandb/swanlab results locally. Default is None, which means it will be stored in `f'{args.save}/wandb'` or `f'{args.save}/swanlab'`.
 
 **Evaluation Parameters**:
 
@@ -318,7 +319,7 @@ Megatron training parameters are inherited from Megatron parameters and basic pa
 - Note: The "learning rate" printed in the logs is the learning rate of the LLM.
 - aligner_lr: Specifies the learning rate for the aligner module in multimodal models. Default is `None`, same as `learning_rate`.
 - gradient_checkpointing_kwargs: Arguments passed to `torch.utils.checkpoint`. For example: set `--gradient_checkpointing_kwargs '{"use_reentrant": false}'`. Defaults to `None`. This parameter only takes effect when `vit_gradient_checkpointing` is enabled.
-- 🔥packing: Packs data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
+- 🔥packing: Use the `padding_free` method to pack data samples of different lengths into samples of **approximately** uniform length (packing ensures that complete sequences are not split), achieving load balancing across nodes and processes during training (preventing long texts from slowing down short text training), thereby improving GPU utilization and maintaining stable memory usage. When using `--attention_backend flash`, it ensures that different sequences within packed samples remain independent and invisible to each other (except for Qwen3-Next, which contains linear-attention). This parameter defaults to `False`. All training tasks in Megatron-SWIFT support this parameter. Note: **packing will reduce the number of dataset samples, please adjust gradient accumulation steps and learning rate accordingly**.
 - packing_length: the length to use for packing. Defaults to None, in which case it is set to max_length.
 - packing_num_proc: Number of processes for packing, default is 1. Note that different values of `packing_num_proc` will result in different packed datasets. (This parameter does not take effect during streaming packing). Usually there is no need to modify this value, as packing speed is much faster than tokenization speed.
 - streaming: Stream data loading and processing, default is False. (The shuffling of streaming datasets is not thorough, which may lead to severe loss fluctuations.)
```
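The new `report_to` defaults introduced in this diff can be summarized with a small resolver. The function below is a hypothetical illustration, not ms-swift's actual code; only the documented defaults (project `'megatron-swift'`, experiment name from `--save`, save dir `f'{args.save}/wandb'` or `f'{args.save}/swanlab'`) come from the docs:

```python
import os

# Hypothetical resolver mirroring the documented Megatron-SWIFT defaults.
def resolve_logger(report_to, save, wandb_project=None, wandb_exp_name=None, wandb_save_dir=None):
    if report_to not in ("wandb", "swanlab"):
        return None  # only TensorBoard logging is active
    return {
        "backend": report_to,
        "project": wandb_project or "megatron-swift",        # documented default
        "exp_name": wandb_exp_name or save,                  # defaults to the value of --save
        "save_dir": wandb_save_dir or os.path.join(save, report_to),
    }

cfg = resolve_logger("swanlab", "output/qwen3")
```

Authentication, per the diff, goes through the `WANDB_API_KEY` or `SWANLAB_API_KEY` environment variables rather than function arguments.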
