
Conversation

JYMiracle305 (Contributor) commented on Nov 4, 2025

1. Main changes:

Adds support for the Pipeline Parallel (PP) feature: the original model is split evenly by its outermost structure across multiple ranks, with each rank holding a subset of layers and the corresponding parameters. During training, the forward pass proceeds along the computation graph from rank 0 to rank n, the backward pass runs from rank n back to rank 0, and ranks exchange tensors through point-to-point communication.

net.cc: based on PP_size and pp_rank, constructs only the sub-blocks and parameters belonging to the current rank when the model is built.
pipeline_parallel.cc: PipelineParallel wraps the model; each rank owns one PipelineParallel, which ties together the PipelineSchedule, PipelineStage, and Optimizers and exposes TrainStep as the training entry point, which in turn calls the scheduler's Step method.
pipeline_schedule.cc: PipelineSchedule is the scheduler base class; its Step function runs one complete training iteration. ScheduleGPipe is the GPipe subclass (diagram below), with StepMicrobatches as the concrete scheduling implementation; a hedged sketch of this loop is given after this list.
(GPipe schedule diagram)
pipeline_stage.cc: PipelineStage represents the sub-graph held by the current rank and provides ForwardOneChunk to run the forward computation inside that sub-graph.
send_recv.cc: ISend and IRecv are two autograd nodes for directed tensor transfer between ranks, built on the autograd mechanism. The last step of rank x's backward sends the gradient to rank x-1, which triggers ISend::Backward on rank x-1 to receive the gradient and start rank x-1's backward pass.
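
For orientation, the following is a hedged sketch of the GPipe micro-batch loop described for pipeline_schedule.cc. ForwardOneChunk, IsFirstStage/IsLastStage and num_micro_batches_ come from this PR; every other name (ReceiveFromPrev, SendToNext, BackwardOneChunk, loss_fn_) is an assumption introduced only to illustrate how data moves between stages, not the actual interface.

```cpp
// Sketch only: conceptual shape of the GPipe schedule, not the PR's exact code.
void ScheduleGPipe::StepMicrobatches(
    const std::vector<std::shared_ptr<Tensor>>& input_mbs,
    const std::vector<std::shared_ptr<Tensor>>& target_mbs) {
    std::vector<std::shared_ptr<Tensor>> outputs(num_micro_batches_);

    // Forward phase: every micro-batch flows through this rank's sub-graph in
    // order; activations leave through ISend towards the next rank.
    for (int i = 0; i < num_micro_batches_; ++i) {
        auto input = stage_->IsFirstStage()
                         ? input_mbs[i]
                         : stage_->ReceiveFromPrev();      // IRecv from rank x-1 (assumed helper)
        outputs[i] = stage_->ForwardOneChunk({input})[0];
        if (!stage_->IsLastStage()) {
            stage_->SendToNext(outputs[i]);                // ISend to rank x+1 (assumed helper)
        }
    }

    // Backward phase: the last stage computes the loss and starts backward for
    // each micro-batch; earlier ranks resume when the gradient arrives through
    // ISend::Backward, as described for send_recv.cc above.
    for (int i = num_micro_batches_ - 1; i >= 0; --i) {
        if (stage_->IsLastStage()) {
            auto loss = loss_fn_(outputs[i], target_mbs[i]);   // assumed loss callable
            loss->Backward();
        } else {
            stage_->BackwardOneChunk(outputs[i]);          // assumed: wait for incoming grad, then run backward
        }
    }
}
```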

2. Command-line parameters:

--pipeline_parallel  # uint32; enables pipeline parallelism, with the value giving the number of parallel devices, i.e. the number of stages

Example:
./llama3 --input_bin <input_path> --llmc_filepath <model_path> --device cuda --nthread_per_process 8 --batch_size 10 --total_batch_size 5120 --num_iteration 10 --pipeline_parallel 8

JYMiracle305 force-pushed the add_pp branch 2 times, most recently from 7552d8e to 1083190 on November 5, 2025 09:20
JYMiracle305 force-pushed the add_pp branch 6 times, most recently from 09735f2 to c21cd75 on November 17, 2025 02:16
JYMiracle305 (Contributor, Author) commented on Nov 17, 2025

GPT2:
Compared with a single GPU, the PP configuration is on par in throughput without scheduling (num_micro_batch == 1); throughput is about 1.45x with num_micro_batch == 2, about 1.92x with num_micro_batch == 4, and about 2.17x with num_micro_batch == 8.

Single GPU: (screenshot)
PP Num_micro_batch 1: (screenshot)
PP Num_micro_batch 2: (screenshot)
PP Num_micro_batch 4: (screenshot)
PP Num_micro_batch 8: (screenshot)

JYMiracle305 (Contributor, Author) commented on Nov 17, 2025

LLaMA3:
Compared with a single GPU, the PP configuration is on par in throughput without scheduling (num_micro_batch == 1); throughput is about 1.55x with num_micro_batch == 2, about 2.04x with num_micro_batch == 4, and about 2.35x with num_micro_batch == 8.

Single GPU: (screenshot)
PP Num_micro_batch 1: (screenshot)
PP Num_micro_batch 2: (screenshot)
PP Num_micro_batch 4: (screenshot upload did not complete)
PP Num_micro_batch 8: (screenshot)

JYMiracle305 force-pushed the add_pp branch 3 times, most recently from 9354e7a to 47e0573 on November 17, 2025 06:42

void SGD::Step() {
    for (auto param : params_) {
        if (!param->grad()) {
Can this case actually occur? If the null check is needed, shouldn't Adam::Step further down get the same check?
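
If the check is kept, a minimal sketch of the symmetric change in Adam::Step might look like this; apart from the params_ loop and the grad() check quoted above, the body is a placeholder, since Adam's actual members are not shown in this diff:

```cpp
// Sketch only: mirrors the null-gradient guard from SGD::Step quoted above.
void Adam::Step() {
    for (auto param : params_) {
        if (!param->grad()) {   // skip parameters that never received a gradient
            continue;
        }
        // ... update first/second moment estimates and apply the Adam update ...
    }
}
```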

std::vector<std::shared_ptr<Tensor>> target_mbs(num_micro_batches_);
if (stage_->IsFirstStage()) {
    {
        autograd::NoGradGuard no_grad;
Check whether no_grad is actually necessary here: the model-level input/target tensors have requires_grad == false, so these operations won't be recorded in the graph anyway.
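
For context, a hypothetical rendering of the pattern being questioned (SplitIntoMicrobatches is an assumed helper name, not necessarily this PR's API): if the input arrives with requires_grad == false, the chunking would record no autograd nodes even without the guard.

```cpp
// Hypothetical illustration of the review comment, not the PR's code.
std::vector<std::shared_ptr<Tensor>> input_mbs;
if (stage_->IsFirstStage()) {
    autograd::NoGradGuard no_grad;   // guard under discussion; with requires_grad == false
                                     // on the inputs, these ops build no graph either way
    input_mbs = SplitIntoMicrobatches(input, num_micro_batches_);   // assumed helper
}
```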


if (stage_->IsLastStage()) {
    {
        autograd::NoGradGuard no_grad;
Same as above.


std::vector<std::shared_ptr<Tensor>> outputs;
for (auto t : input_tensors) { outputs.push_back(t); }
return outputs;
Could this just return input_tensors directly? There seems to be no need to build a separate outputs vector.
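
As a sketch of the suggestion (the enclosing function's name and signature are assumed here, not taken from the PR), the copy can be dropped and the vector returned directly; a by-value parameter is implicitly moved when returned by name:

```cpp
// Sketch of the reviewer's suggestion: no intermediate `outputs` vector.
std::vector<std::shared_ptr<Tensor>> PassThrough(
    std::vector<std::shared_ptr<Tensor>> input_tensors) {   // assumed signature
    return input_tensors;   // implicitly moved on return; no element-by-element copy
}
```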

JYMiracle305 force-pushed the add_pp branch 2 times, most recently from 3391267 to b851306 on November 17, 2025 14:50