
Commit d66d844

Author: Yancey
Refine async update design doc (#10065)

* refine async update design doc
* update by comments

1 parent: ded2153

File tree

1 file changed: +18 −15 lines


doc/fluid/design/dist_train/async_update.md

Lines changed: 18 additions & 15 deletions
```diff
@@ -4,10 +4,10 @@
 
 For the typical synchronous distributed training, some significant steps are as follows:
 
-1. A Trainer will compute the gradients and SEND them to the Parameter Server(PServer) nodes.
-1. After the PServer node received gradients came from all the Trainers, It will aggregate the
+1. A trainer process will compute the gradients and **send** them to the parameter server (PS) nodes.
+1. After the PS node received gradients came from all the Trainers, It will aggregate the
 gradient variables for the same parameter into one gradient variable and then apply the aggregated
 gradient to the respective parameter, finally using an optimize algorithms(SGD, Monument...)
 to update the parameters.
-1. The Trainer would wait for the PServers finished the optimize stage, and GET the parameters from PServer,
+1. The Trainer would wait for the PS finished the optimize stage, and GET the parameters from PS,
 so all the Trainers would get the same parameters.
```
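The synchronous flow described above — a barrier on the PS until every trainer has reported, then one aggregated optimize step — can be sketched as a toy parameter-server loop. This is only an illustrative sketch with hypothetical names (`SyncPServer`, `recv_grad`), not PaddlePaddle Fluid's actual implementation:

```python
import numpy as np

# Toy synchronous parameter server (a sketch, not Fluid's code):
# it waits for gradients from ALL trainers, aggregates them per
# parameter, then applies a single SGD update.
class SyncPServer:
    def __init__(self, param, lr=0.1):
        self.param = param
        self.lr = lr
        self.pending = []  # gradients received in the current mini-batch

    def recv_grad(self, grad, num_trainers):
        self.pending.append(grad)
        # Barrier: optimize only after every trainer has sent its gradient.
        if len(self.pending) == num_trainers:
            agg = np.mean(self.pending, axis=0)  # aggregate into one gradient
            self.param -= self.lr * agg          # SGD optimize step
            self.pending.clear()

    def get_param(self):
        # Every trainer GETs the same parameters after the optimize stage.
        return self.param

ps = SyncPServer(param=np.zeros(4))
for trainer_grad in [np.ones(4), 3 * np.ones(4)]:  # two trainers report
    ps.recv_grad(trainer_grad, num_trainers=2)
print(ps.get_param())  # -> [-0.2 -0.2 -0.2 -0.2]
```

Because the update fires only when `len(self.pending) == num_trainers`, the slowest trainer gates every other one — which is exactly the performance issue the async design below addresses.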

```diff
@@ -15,23 +15,26 @@
-In the synchronously distributed training, there should be a `Barrier` to synchronise the
-parameters after the optimizing stage. The performance of a distributed training job would
-depend on the slowest node if there were hundreds or thousands of training nodes in a
-Job, the performance of synchronously distributed training might be very poor because of
-the slow node. So this design doc would introduce an approach to implement
-*asynchronously* distributed training in PaddlePaddle Fluid.
+In Synchronous Distributed Training, there is a **barrier** on each PS to wait until all trainers processes
+have completed running current mini-batch. After that, all trainers can continue to run the next
+mini-batch. So, we can find that the overall performance of Synchronous Distributed Training depends
+on the slowest node.
+
+In Asynchronous Distributed Training, we don't need to wait for a global mini-bach, the optimizer on
+the PS will run immediately when the gradient is uploaded to the PS from one trainer. This mode would
+train such models that achieve scaling, better throughput. In this design doc, we will introduce how to
+implement the Asynchronous Distributed Training base on PaddlePaddle Fluid.
 
 ## Design
 
 <img src="./src/async_update.png" width="600"/>
 
-As the figure above, we describe a global view of asynchronously update process and use
+As the figure above, we describe a global view of the asynchronous update process and use
 the parameter `w1` as an example to introduce the steps:
 1. For each gradient variables, they may distribute on different GPU card and aggregate
 them while they are all calculated.
-1. Split the gradient variable into multiple blocks according to the number of PServer
+1. Split the gradient variable into multiple blocks according to the number of PS
 instances and then send them.
-1. PServer would run an `Optimize Block` using a specified optimize algorithm to update
+1. PS would run an `Optimize Block` using a specified optimize algorithm to update
 the specified parameter.
-1. The trainer will fetch latest parameter from PServer before running forward Op which depends
+1. The trainer will fetch the latest parameter from PS before running forward Op which depends
 on the specified parameter.
 1. Broadcast the received variable into multiple GPU cards and continue to run the next
 mini-batch.
```
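The asynchronous steps in this hunk — split the gradient into one block per PS instance, run the optimize step immediately when any single trainer's gradient arrives, and fetch the latest parameter before the forward op — can be sketched as follows. All names (`send_grad_async`, `fetch_param`) are hypothetical and the code is a toy illustration of the scheme, not Fluid's `Optimize Block`:

```python
import numpy as np

NUM_PS = 2   # number of parameter-server instances (assumed)
LR = 0.1     # SGD learning rate (assumed)

# Each PS instance owns one block of the parameter w1.
ps_blocks = np.split(np.zeros(6), NUM_PS)

def send_grad_async(grad):
    """Split a trainer's gradient into per-PS blocks and apply each at once."""
    for ps_id, g in enumerate(np.split(grad, NUM_PS)):
        # No barrier: the optimize step runs per received gradient,
        # without waiting for the other trainers' mini-batches.
        ps_blocks[ps_id] -= LR * g

def fetch_param():
    """Trainer fetches the latest parameter before running its forward op."""
    return np.concatenate(ps_blocks)

send_grad_async(np.ones(6))      # trainer 0 uploads; update applied immediately
send_grad_async(2 * np.ones(6))  # trainer 1 uploads later; applied independently
print(fetch_param())             # -> [-0.3 -0.3 -0.3 -0.3 -0.3 -0.3]
```

Note the consequence the design doc implies: a trainer that fetches between the two `send_grad_async` calls would see a parameter that reflects only trainer 0's update, which is the staleness trade-off of asynchronous training.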
```diff
@@ -40,8 +43,8 @@ mini-batch.
 
 - For the multiple devices distributed training, we need to aggregate the gradient
 variables which placed on different devices firstly and then schedule a `SendVars` Operator to
-send the gradient variables to the multiple PServer instances.
-- Schedule `FetchVars` operator to fetch the latest parameter from PServer before running
+send the gradient variables to the multiple PS instances.
+- Schedule `FetchVars` operator to fetch the latest parameter from PS before running
 the forward ops.
 - There could be a large number of gradient variables to be sent, so we need to use another
 thread pool(IO Threadpool) whose a number of the schedulable threads is larger than the
```
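The "IO Threadpool" idea in the last bullet — queueing many gradient-variable sends on a dedicated pool so compute threads never block on network I/O — can be sketched with Python's standard `ThreadPoolExecutor`. The variable names and the `send_var` stand-in are hypothetical; a real `SendVars` operator would issue RPCs to the PS instances:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

sent = []                 # records which variables were "sent"
lock = threading.Lock()   # protects the shared list across IO threads

def send_var(name):
    # Stand-in for a SendVars RPC to a PS instance (hypothetical).
    with lock:
        sent.append(name)
    return name

# Example gradient-variable blocks, one per destination PS instance.
grad_vars = ["w1@GRAD.block0", "w1@GRAD.block1", "w2@GRAD.block0"]

# Dedicated IO pool, sized larger than the number of compute threads,
# so pending sends queue here instead of stalling computation.
with ThreadPoolExecutor(max_workers=8) as io_pool:
    futures = [io_pool.submit(send_var, v) for v in grad_vars]
    results = [f.result() for f in futures]

print(sorted(results))
```

The key design point survives the toy: `submit` returns immediately, so the caller (the compute side) keeps running while the IO pool drains the send queue concurrently.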
