
Commit 93276fd

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_parallel_executor_tests

2 parents ab86fb1 + b53f7e2


51 files changed: +1884 −49 lines

CMakeLists.txt

Lines changed: 6 additions & 0 deletions

@@ -39,6 +39,7 @@ option(WITH_GPU "Compile PaddlePaddle with NVIDIA GPU" ${CUDA_F
 option(WITH_AMD_GPU "Compile PaddlePaddle with AMD GPU" OFF)
 option(WITH_AVX "Compile PaddlePaddle with AVX intrinsics" ${AVX_FOUND})
 option(WITH_MKL "Compile PaddlePaddle with MKL support." ${AVX_FOUND})
+option(WITH_TENSORRT "Compile PaddlePaddle with TensorRT support." OFF)
 option(WITH_DSO "Compile PaddlePaddle with dynamic linked CUDA" ON)
 option(WITH_TESTING "Compile PaddlePaddle with unit testing" OFF)
 option(WITH_SWIG_PY "Compile PaddlePaddle with inference api" ON)
@@ -181,6 +182,11 @@ if(WITH_GPU)
   include(cuda)
 endif(WITH_GPU)
 
+# TensorRT depends on GPU.
+if (NOT WITH_GPU)
+  set(WITH_TENSORRT OFF)
+endif()
+
 if(WITH_AMD_GPU)
   find_package(HIP)
   include(hip)

Dockerfile

Lines changed: 7 additions & 0 deletions

@@ -45,6 +45,13 @@ ENV PATH=${PATH}:${GOROOT}/bin:${GOPATH}/bin
 # install glide
 RUN curl -s -q https://glide.sh/get | sh
 
+# Install TensorRT
+# The unnecessary files has been removed to make the library small.
+RUN wget -qO- http://paddlepaddledeps.bj.bcebos.com/TensorRT-4.0.0.3.Ubuntu-16.04.4.x86_64-gnu.cuda-8.0.cudnn7.0.tar.gz | \
+    tar -xz -C /usr/local && \
+    cp -rf /usr/local/TensorRT/include /usr && \
+    cp -rf /usr/local/TensorRT/lib /usr
+
 # git credential to skip password typing
 RUN git config --global credential.helper store

Dockerfile.android

Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ RUN git config --global credential.helper store
 # Fix locales to en_US.UTF-8
 RUN localedef -i en_US -f UTF-8 en_US.UTF-8
 
-RUN pip install --upgrade pip && \
+RUN pip install --upgrade pip==9.0.3 && \
     pip install -U 'protobuf==3.1.0' && \
     pip install -U wheel sphinx && \
     pip install pre-commit

cmake/external/grpc.cmake

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ ExternalProject_Add(
     extern_grpc
     DEPENDS protobuf zlib
     GIT_REPOSITORY "https://github.com/grpc/grpc.git"
-    GIT_TAG "v1.11.x"
+    GIT_TAG "v1.10.x"
     PREFIX ${GRPC_SOURCES_DIR}
     UPDATE_COMMAND ""
     CONFIGURE_COMMAND ""

doc/CMakeLists.txt

Lines changed: 3 additions & 1 deletion

@@ -3,7 +3,9 @@ add_custom_target(paddle_apis ALL
 
 add_custom_target(paddle_docs ALL
   DEPENDS paddle_v2_docs paddle_v2_docs_cn
-          paddle_fluid_docs paddle_fluid_docs_cn)
+          paddle_fluid_docs paddle_fluid_docs_cn
+          paddle_mobile_docs paddle_mobile_docs_cn)
 
 add_subdirectory(v2)
 add_subdirectory(fluid)
+add_subdirectory(mobile)

doc/fluid/api/layers.rst

Lines changed: 6 additions & 0 deletions

@@ -473,6 +473,12 @@ multiplex
 .. autofunction:: paddle.fluid.layers.multiplex
     :noindex:
 
+label_smooth
+------------
+
+.. autofunction:: paddle.fluid.layers.label_smooth
+    :noindex:
+
 ops
 ===

doc/fluid/design/concepts/parallel_executor.md

Lines changed: 1 addition & 1 deletion

@@ -84,7 +84,7 @@ Running an operator can be asynchronized. There is a thread pool to execute an `
 
 ## Synchronize GPU Kernels
 
-The GPU is a non-blocking device. The different streams need be synchronized when switing streams. In current implementation, the synchronization based on the following algorithm:
+The GPU is a non-blocking device. The different streams need be synchronized when switching streams. In current implementation, the synchronization based on the following algorithm:
 
 1. `OpHandle` will record `DeviceContext` that it is used.
 2. In `OpHandle::Run`, if the `DeviceContext` of current operator is different from `DeviceContext` of any input variable, just wait the generate operator of this input variable.
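The two-rule algorithm in the hunk above can be sketched in a few lines. This is a minimal illustration, not Paddle's implementation: `DeviceContext` stands in for a CUDA stream, a `threading.Event` stands in for a stream event, and the dict keys `generated_on`/`event` are invented for the sketch.

```python
import threading

class DeviceContext:
    """Stands in for one stream; remembers the event of the last op it ran."""
    def __init__(self, name):
        self.name = name
        self.last_event = None

class OpHandle:
    def __init__(self, ctx, inputs, outputs):
        self.ctx = ctx            # rule 1: the op records its DeviceContext
        self.inputs = inputs      # each var: {"generated_on": ctx, "event": Event}
        self.outputs = outputs

    def run(self, compute):
        # Rule 2: if an input was produced on a different DeviceContext,
        # wait on its generating op's event before launching our own work.
        for var in self.inputs:
            if var["generated_on"] is not None and var["generated_on"] is not self.ctx:
                var["event"].wait()
        compute()
        done = threading.Event()
        done.set()                # a real stream event is recorded asynchronously
        self.ctx.last_event = done
        for var in self.outputs:
            var["generated_on"] = self.ctx
            var["event"] = done
```

The key point of the design is that an op only pays the synchronization cost when an input crosses a stream boundary; ops on the same `DeviceContext` run in stream order for free.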

doc/fluid/design/dist_train/README.md (new file)

Lines changed: 57 additions & 0 deletions

## Distributed training overview doc

Currently Paddle Fluid uses a parameter server architecture to support distributed training.

Synchronous and asynchronous training differ mostly in the parameter server logic. Synchronous training is already supported.

### Synchronous training

The training process of synchronous training is:

![synchronous distributed training](./src/sync_distributed_training.png)

1. Pserver
   1. Set `barrier_condition_` to 0 and wait for trainers to send gradients.
1. Trainer
   1. The trainer reads a minibatch of data, runs forward-backward with its local parameter copy, and gets the gradients for the parameters.
   1. The trainer uses the split op to split all the gradients into blocks. The split method is determined at compile time.
   1. The trainer uses send_op to send all the split gradients to the corresponding parameter servers.
   1. After the trainer has sent all the gradients, it sends a `BATCH_BARRIER_MESSAGE` to all pservers.
   1. The trainer calls GetVariable on each pserver and waits for `barrier_condition_` on the pserver to become 1.
1. Pserver
   1. The pserver counts the received `BATCH_BARRIER_MESSAGE`s.
   1. When the count of `BATCH_BARRIER_MESSAGE` equals the number of trainers, the pserver knows it has received all gradients from all trainers.
   1. The pserver runs the optimization block to optimize the parameters.
   1. After optimization, the pserver sets `barrier_condition_` to 1.
   1. The pserver waits for `FETCH_BARRIER_MESSAGE`.
1. Trainer
   1. The trainer uses GetVariable to get all the parameters from the pservers.
   1. The trainer sends a `FETCH_BARRIER_MESSAGE` to each pserver.
1. Pserver
   1. When the number of `FETCH_BARRIER_MESSAGE`s reaches the number of trainers, the pserver knows all the parameters have been fetched, and it goes back to step 1 to set `barrier_condition_` to 0.

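The pserver-side bookkeeping for the two barriers above can be sketched as follows. The message names come from the doc; the class and method names are illustrative, and the real implementation lives in the gRPC server, not in Python.

```python
import threading

BATCH_BARRIER_MESSAGE = "BATCH_BARRIER"
FETCH_BARRIER_MESSAGE = "FETCH_BARRIER"

class PServerBarrier:
    """Counts barrier messages and gates GetVariable on barrier_condition_."""
    def __init__(self, num_trainers):
        self.num_trainers = num_trainers
        self.batch_count = 0
        self.fetch_count = 0
        self.barrier_condition = 0          # the doc's barrier_condition_
        self.cv = threading.Condition()

    def on_message(self, msg, optimize_block=None):
        with self.cv:
            if msg == BATCH_BARRIER_MESSAGE:
                self.batch_count += 1
                if self.batch_count == self.num_trainers:
                    optimize_block()        # all gradients received: optimize
                    self.batch_count = 0
                    self.barrier_condition = 1   # lets GetVariable return
                    self.cv.notify_all()
            elif msg == FETCH_BARRIER_MESSAGE:
                self.fetch_count += 1
                if self.fetch_count == self.num_trainers:
                    self.fetch_count = 0
                    self.barrier_condition = 0   # back to step 1
    def wait_for_params(self):
        # what a trainer's GetVariable call blocks on
        with self.cv:
            self.cv.wait_for(lambda: self.barrier_condition == 1)
```

Note how the optimize block runs exactly once per mini-batch, triggered by the last `BATCH_BARRIER_MESSAGE`, and the fetch barrier resets the cycle.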
### Asynchronous training

In the above process there are two barriers where all trainers synchronize with each other. In asynchronous training these two barriers are not needed; a trainer can simply send gradients to the pservers and then get the parameters back.

The training process of asynchronous training can be:

![asynchronous distributed training](./src/async_distributed_training.png)

1. Pserver:
   1. Each parameter has a queue to receive its gradients from the trainers.
   1. Each parameter has a thread that reads gradients from the queue and runs the optimize block, using the gradient to optimize the parameter.
   1. An independent thread handles the RPC call `GetVariable`, which trainers use to fetch parameters back. (Maybe a thread pool should be used here to speed up fetching the parameters.)
1. Trainer:
   1. The trainer reads a batch of data, runs forward and backward with its local parameter copy, and gets the gradients for the parameters.
   1. The trainer splits all gradients into blocks and sends these gradient blocks to the pservers (a pserver puts them into the queues).
   1. The trainer gets all parameters back from the pservers.

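The per-parameter queue-plus-thread design in the pserver steps above can be sketched like this. It is a toy under stated assumptions: the parameter is a single float, the optimize block is plain SGD, and all names are invented for the sketch.

```python
import queue
import threading

class AsyncParamServer:
    """One parameter: a gradient queue plus a thread running its optimize block."""
    def __init__(self, init_value, lr=0.1):
        self.value = init_value
        self.lr = lr
        self.grads = queue.Queue()      # the per-parameter queue from the doc
        self.lock = threading.Lock()    # guards value against concurrent GetVariable
        self.worker = threading.Thread(target=self._optimize_loop, daemon=True)
        self.worker.start()

    def send_gradient(self, grad):      # what a trainer's send delivers
        self.grads.put(grad)

    def _optimize_loop(self):
        while True:
            g = self.grads.get()
            if g is None:               # shutdown sentinel
                self.grads.task_done()
                break
            with self.lock:
                self.value -= self.lr * g   # the "optimize block": plain SGD here
            self.grads.task_done()

    def get_variable(self):             # what the GetVariable RPC returns
        with self.lock:
            return self.value

    def shutdown(self):
        self.grads.put(None)
        self.worker.join()
```

Because each parameter owns its queue and its optimizer thread, gradients from racing trainers are applied one at a time without a global barrier, which is exactly the property the asynchronous design relies on.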
### Note:

There are also some conditions that need to be considered. For example:

1. Whether the trainer needs to wait for the pserver to apply its gradients before getting the parameters back.
1. Whether we need a lock between the parameter update and the parameter fetch.
1. Whether one parameter must live on one server, or can be split and sent to multiple parameter servers.

The above architecture of asynchronous training can support different modes; we can run detailed tests for these problems in the future.
Lines changed: 58 additions & 0 deletions

# Design Doc: Asynchronous Update With Distributed Training

## Background

For the typical synchronous distributed training, the main steps are as follows:

1. A trainer computes the gradients and SENDs them to the parameter server (PServer) nodes.
1. After a PServer node has received the gradients from all trainers, it aggregates the gradient variables for the same parameter into one gradient variable, applies the aggregated gradient to the respective parameter, and finally uses an optimization algorithm (SGD, Momentum, ...) to update the parameters.
1. The trainers wait for the PServers to finish the optimize stage and GET the parameters from the PServers, so all trainers end up with the same parameters.

In synchronous distributed training there has to be a `Barrier` to synchronize the parameters after the optimize stage, so the performance of a distributed training job depends on the slowest node. With hundreds or thousands of training nodes in a job, the performance of synchronous distributed training can be very poor because of slow nodes. This design doc therefore introduces an approach to implement *asynchronous* distributed training in PaddlePaddle Fluid.

## Design

<img src="./src/async_update.png" width="600"/>

The figure above gives a global view of the asynchronous update process; we use the parameter `w1` as an example to introduce the steps:

1. The gradient variables may be distributed across different GPU cards; aggregate them once they are all calculated.
1. Split the gradient variable into multiple blocks according to the number of PServer instances and then send them.
1. A PServer runs an `Optimize Block` using a specified optimization algorithm to update the specified parameter.
1. The trainer fetches the latest parameter from the PServer before running a forward op that depends on that parameter.
1. Broadcast the received variable to the multiple GPU cards and continue with the next mini-batch.

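Step 2 above, splitting one gradient variable into one block per PServer instance, can be sketched as follows. The even-split layout is an assumption for illustration; Fluid's actual split op may choose block sizes differently.

```python
def split_gradient(grad, num_pservers):
    """Split a flat list of gradient values into num_pservers contiguous blocks."""
    n = len(grad)
    base, extra = divmod(n, num_pservers)
    blocks, start = [], 0
    for i in range(num_pservers):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        blocks.append(grad[start:start + size])
        start += size
    return blocks
```

Each block is then sent to its own PServer, so a large parameter's update work is sharded across instances.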
### Trainer

- For multi-device distributed training, we first need to aggregate the gradient variables placed on the different devices, and then schedule a `SendVars` operator to send the gradient variables to the multiple PServer instances.
- Schedule a `FetchVars` operator to fetch the latest parameters from the PServers before running the forward ops.
- There could be a large number of gradient variables to send, so we need a separate thread pool (IO threadpool) whose number of schedulable threads is larger than that of the computing thread pool, to avoid competing with computation for thread resources.

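The separate IO threadpool idea above can be sketched with two executors: sends block on the network, so they get their own, larger pool. The pool sizes and the stand-in `send_vars` RPC are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, wait

compute_pool = ThreadPoolExecutor(max_workers=4)   # runs ops
io_pool = ThreadPoolExecutor(max_workers=16)       # larger: threads mostly block on RPCs

def send_vars(pserver, block):
    # stand-in for the SendVars operator's RPC; returns a fake ack
    return (pserver, len(block))

def send_all(blocks, pservers):
    # dispatch every gradient block on the IO pool, then wait for all acks
    futures = [io_pool.submit(send_vars, ps, blk)
               for ps, blk in zip(pservers, blocks)]
    wait(futures)
    return [f.result() for f in futures]
```

The design choice here is isolation: even if all 16 IO threads are parked on slow network calls, the compute pool's 4 workers stay free to run kernels.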
### Parameter Server

<img src="./src/async_pserver.png" width="750"/>

- Multiple trainer instances may try to optimize the same parameter at the same time; to avoid racing, we need one `BlockingQueue` for each gradient variable, so its gradients are processed one by one.
- We need a `Map` structure to map a gradient variable name to the `OptimizeBlock` that can optimize the respective parameter.
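The `BlockingQueue`-plus-`Map` structure in the two bullets above can be sketched like this; the class, method, and variable names (including the `w1@GRAD` naming style) are illustrative only.

```python
import queue

class GradDispatcher:
    """Routes an incoming gradient to its parameter's queue and optimize block."""
    def __init__(self):
        self.routes = {}    # Map: grad var name -> (BlockingQueue, OptimizeBlock)

    def register(self, grad_name, optimize_block):
        self.routes[grad_name] = (queue.Queue(), optimize_block)

    def on_recv(self, grad_name, grad):
        q, _ = self.routes[grad_name]
        q.put(grad)         # racing senders serialize through this queue

    def drain(self, grad_name):
        # run the parameter's OptimizeBlock once per queued gradient, in order
        q, optimize_block = self.routes[grad_name]
        while not q.empty():
            optimize_block(q.get())
```

In the real server a dedicated thread would drain each queue continuously; the map lookup is what lets one RPC endpoint serve every parameter without a global lock.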