
Commit 86ed4ef

Merge branch 'develop' into mklml
2 parents 329655d + 046405e commit 86ed4ef

File tree

243 files changed: +1618 -1660 lines


.copyright.hook

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@ import subprocess
 import platform
 
 COPYRIGHT = '''
-Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
```

contrib/inference/README.md

Lines changed: 27 additions & 0 deletions

# Embed Paddle Inference in Your Application

Paddle inference offers APIs in the `C` and `C++` languages.

One can easily deploy a model trained by Paddle by following the steps below:

1. Optimize the native model;
2. Write some code for deployment.

Let's explain the steps in detail.

## Optimize the native Fluid Model

The native model obtained from the training phase needs to be optimized for inference:

- Clean out noise such as the cost operators, which are not needed for inference;
- Prune unnecessary computation branches that have nothing to do with the output;
- Remove extraneous variables;
- Reuse memory for the native Fluid executor;
- Translate the model storage format to a third-party engine's, so that the inference API can utilize the engine for acceleration.

We have an official tool for the optimization; call `paddle_inference_optimize --help` for more information.

## Write some code

Read `paddle_inference_api.h` for more information.

contrib/inference/paddle_inference_api.h

Lines changed: 69 additions & 0 deletions

```cpp
/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#include <string>
#include <vector>

namespace paddle {

class Predictor {
 public:
  struct Attr;
  Predictor() = default;

  // Build the network before inference.
  bool Init(const Attr& attr);

  // Predict a record.
  // Arguments:
  //   inputs: the names of the input variables.
  //   outputs: the names of the output variables.
  //   input_shapes: the shapes of the input variables.
  //   output_shapes: the shapes of the output variables.
  //   input_data: the data of the input variables.
  //   output_data: the data of the output variables.
  bool Run(const std::vector<std::string>& inputs,
           const std::vector<std::string>& outputs,
           const std::vector<std::vector<int>>& input_shapes,
           const std::vector<std::vector<int>>& output_shapes,
           const std::vector<std::vector<float>>& input_data,
           std::vector<std::vector<float>>* output_data);

  // Clone a predictor that shares the model weights.
  Predictor* Clone();

  // Destroy the Predictor.
  ~Predictor();

  struct Attr {
    enum class EngineKind;

    std::string model_dir;      // path to the model directory.
    bool enable_engine{false};  // Enable executing (part of) the model on
                                // third-party engines.
    EngineKind engine_kind{Attr::EngineKind::kNone};

    enum class EngineKind {
      kNone = -1,          // Use the native Fluid facility.
      kAnakin,             // Use Anakin for inference.
      kTensorRT,           // Use TensorRT for inference.
      kAutoMixedAnakin,    // Automatically mix Fluid with Anakin.
      kAutoMixedTensorRT,  // Automatically mix Fluid with TensorRT.
    };
  };
};

}  // namespace paddle
```
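
To show how this API is meant to be driven, here is a minimal usage sketch of the `Predictor` class declared above. It is not part of the commit: it assumes the header is included as `paddle_inference_api.h` and the program is linked against the inference library, and the model directory `./fluid_inference_model` plus the variable names `x` and `y` are hypothetical placeholders.

```cpp
#include <iostream>
#include <string>
#include <vector>

#include "paddle_inference_api.h"  // the header added in this commit

int main() {
  // Describe where the optimized model lives; stay on the native Fluid
  // executor rather than a third-party engine.
  paddle::Predictor::Attr attr;
  attr.model_dir = "./fluid_inference_model";  // hypothetical path
  attr.enable_engine = false;

  paddle::Predictor predictor;
  if (!predictor.Init(attr)) {
    std::cerr << "failed to initialize the predictor\n";
    return 1;
  }

  // Run one record through a single-input, single-output network.
  // "x"/"y" and the shapes are placeholders for a real model's variables.
  std::vector<std::vector<float>> output_data;
  bool ok = predictor.Run({"x"}, {"y"},
                          {{1, 3}},              // input shape
                          {{1, 1}},              // expected output shape
                          {{0.1f, 0.2f, 0.3f}},  // input data
                          &output_data);
  if (ok && !output_data.empty() && !output_data[0].empty()) {
    std::cout << "first output value: " << output_data[0][0] << "\n";
  }
  return 0;
}
```

Per the header comment, `Clone()` can then produce additional predictors that share the same model weights, e.g. one per serving thread.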

doc/fluid/design/motivation/api.md

Lines changed: 1 addition & 2 deletions

```diff
@@ -77,8 +77,7 @@ print "The sematic-vector of testA: ", paddle.infer(fA, parameters, testA)
 
 ### Example 2. Sharing Parameters between "Models"
 
-We use [GAN](https://github.com/PaddlePaddle/book/tree/develop/gan) in
-this example. In the following example program, `d0` and `d1`
+We use GAN in this example. In the following example program, `d0` and `d1`
 correspond to the two networks in the following figure:
 
 <img src="https://github.com/wangyang59/book/raw/00036f4b0da5225041a6824587c1a01cf20159b1/gan/image/gan_ig.png" width=400 />
```

doc/fluid/design/multi_devices/operator_kernel_type.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -75,7 +75,7 @@ Different layout leads to different implementation of the operator kernel. There
 
 - The inference of Layout is at run-time, not at compile-time.
 
-- Every operator has to implement different kernels for different layouts. Let's take MKLDNN as an example. If we want to implement an MKLDNN convolution operator, we have to implement all the kernels for different layouts, which are listed [here](http://01org.github.io/mkl-dnn/structmkldnn_1_1memory.html). And we will have a special macro to register kernels for MKLDNN operators.
+- Every operator has to implement different kernels for different layouts. Let's take MKLDNN as an example. If we want to implement an MKLDNN convolution operator, we have to implement all the kernels for different layouts, which are listed [here](http://intel.github.io/mkl-dnn/structmkldnn_1_1memory.html). And we will have a special macro to register kernels for MKLDNN operators.
 
 `Layout` is also defined as a enum variable:
```

Lines changed: 110 additions & 0 deletions

# Distributed Training with NCCL2 and RDMA

When doing distributed multi-GPU training, network bandwidth often becomes the
bottleneck. We introduce a way to use NCCL2 for such training jobs to achieve
the best performance.

## Prepare Hardware with RDMA and Multiple GPUs

I'm using two Linux servers, each of them installed with 8 GPUs and
one 100Gb RDMA card.
The base environment is:

* OS: CentOS 7.4
* RDMA device: "Mellanox Technologies MT27700 Family [ConnectX-4]"
* Kernel version: `4.4.88-1.el7.elrepo.x86_64`
* Docker version: `1.12.6`
* Docker storage driver: `overlay2`
* IP addresses: 192.168.16.30, 192.168.16.34

In general, the steps include:

1. Install GPU drivers
1. Install RDMA drivers
1. Install "InfiniBand Support"
1. Use docker to run tests and make sure GPUs and RDMA can work inside
   the container.

I'll omit the section "Install GPU drivers" because it can easily be found
elsewhere.

### Install RDMA drivers

In my case, I've got two machines with the device
"Mellanox Technologies MT27700 Family [ConnectX-4]" installed. The OS was
"CentOS 7.4" and I updated the kernel to version 4.4 so that Docker can
work with the latest overlay2 filesystem.

***NOTE: before you start, make sure you have a way to get a console
of the server other than ssh, because we may need to re-configure the
network device.***

1. Go to http://www.mellanox.com/page/products_dyn?product_family=26,
   download the `MLNX_OFED` software at the bottom of the page, and upload it
   onto the server.
1. Run `./mlnxofedinstall --add-kernel-support` in the software package.
1. Run `/etc/init.d/openibd restart` to make everything work; note that
   this operation may cause the network to go down if you are using this
   RDMA device as the default network device and are using ssh to log in to the server.
1. Re-configure the network interface, for example:
   `ifconfig eth2 192.168.16.30/20 up`, then add routes if needed:
   `ip route add default via 192.168.16.1 dev eth2`.
1. Do the same thing on the other node.
1. Use `ping` to test whether the two nodes have a typical ICMP connection.
1. Use either `udaddy` or `ib_write_bw` to test that the network connection is
   ready and has the desired bandwidth.

### Prepare Docker Image to Run RDMA Programs

1. Build a Docker image using a CUDA base image like `nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04` and install the paddlepaddle whl
   package in it.
1. Start a Docker container and mount the GPU driver libs into it (you can
   skip this step if you are using nvidia-docker).
1. Mount RDMA drivers and libs into the Docker container (see the section below),
   also `udaddy` and `ib_write_bw` if needed.
1. Mount GPU devices and RDMA devices into the container using `--device`,
   or just use privileged mode `--privileged`.
1. Start the container using host network mode: `--net=host`

### RDMA Library Files Needed

Usually, `MLNX_OFED` installs the latest supported libs under
`/usr/lib64/mlnx_ofed/valgrind`. Other libs needed to run RDMA programs
are listed below. These libs must be mounted into the Docker container.

* Libs under `/usr/lib64/mlnx_ofed/valgrind`
  * libibcm.so
  * libibverbs.so
  * libmlx4.so
  * libmlx5.so
  * libmlx5-rdmav2.so
  * librdmacm.so
* Other libs:
  * libnl-3.so.200
  * libnl-route-3.so.200
  * libnuma.so.1

## Start to Run the Training Job

Set NCCL environment variables to turn NCCL switches on and off:

| Env Name | Description |
| --- | --- |
| NCCL_SOCKET_IFNAME | The RDMA device, e.g. eth2 |
| NCCL_P2P_DISABLE | Set to 1 to disable P2P transfer between GPUs |
| NCCL_IB_DISABLE | Set to 1 to disable using RDMA |
| NCCL_IB_CUDA_SUPPORT | Set to 1 to enable GPU Direct if supported |
| NCCL_DEBUG | Set debug level: VERSION, WARN, INFO |

My two servers are `192.168.16.30` and `192.168.16.34`. On node 1, run:

```bash
PADDLE_TRAINER_ID=0 PADDLE_PORT=48372 PADDLE_WORKERS=192.168.16.30,192.168.16.34 POD_IP=192.168.16.30 stdbuf -oL python vgg16.py
```

On node 2, run:

```bash
PADDLE_TRAINER_ID=1 PADDLE_PORT=48372 PADDLE_WORKERS=192.168.16.30,192.168.16.34 POD_IP=192.168.16.34 stdbuf -oL python vgg16.py
```

paddle/fluid/framework/CMakeLists.txt

Lines changed: 1 addition & 1 deletion

```diff
@@ -57,7 +57,7 @@ cc_library(data_transform SRCS data_transform.cc DEPS math_function tensor
 cc_library(attribute SRCS attribute.cc DEPS framework_proto boost)
 cc_test(program_desc_test SRCS program_desc_test.cc DEPS proto_desc
         device_context)
-cc_library(op_proto_maker SRCS op_proto_maker.cc DEPS framework_proto attribute)
+cc_library(op_proto_maker SRCS op_proto_maker.cc DEPS framework_proto attribute glog)
 cc_test(op_proto_maker_test SRCS op_proto_maker_test.cc DEPS op_proto_maker)
 cc_library(op_info SRCS op_info.cc DEPS attribute framework_proto)
 cc_library(shape_inference SRCS shape_inference.cc DEPS ddim attribute device_context)
```

paddle/fluid/framework/block_desc.cc

Lines changed: 5 additions & 0 deletions

```diff
@@ -134,6 +134,11 @@ OpDesc *BlockDesc::PrependOp() {
   return ops_.front().get();
 }
 
+void BlockDesc::PrependAllocatedOp(std::unique_ptr<OpDesc> &&op_desc) {
+  need_update_ = true;
+  ops_.emplace_front(std::move(op_desc));
+}
+
 OpDesc *BlockDesc::InsertOp(size_t index) {
   need_update_ = true;
   auto it = ops_.begin() + index;
```

paddle/fluid/framework/block_desc.h

Lines changed: 2 additions & 0 deletions

```diff
@@ -88,6 +88,8 @@ class BlockDesc {
 
   OpDesc *PrependOp();
 
+  void PrependAllocatedOp(std::unique_ptr<OpDesc> &&op_desc);
+
   OpDesc *InsertOp(size_t index);
 
   /*
```
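
As context for the new `PrependAllocatedOp` method, the sketch below is a standalone analogue (mock types, not Paddle code) of the ownership pattern it uses: the caller allocates a descriptor, hands it over as an rvalue `std::unique_ptr`, and the block moves it to the front of its op list. `FakeOpDesc`, `FakeBlockDesc`, and the op type string are hypothetical stand-ins.

```cpp
#include <deque>
#include <iostream>
#include <memory>
#include <string>
#include <utility>

// Mock descriptor standing in for paddle::framework::OpDesc.
struct FakeOpDesc {
  std::string type;
};

// Mock block standing in for paddle::framework::BlockDesc.
class FakeBlockDesc {
 public:
  // Take ownership of an already-allocated descriptor and put it first.
  void PrependAllocatedOp(std::unique_ptr<FakeOpDesc>&& op_desc) {
    need_update_ = true;
    ops_.emplace_front(std::move(op_desc));
  }
  const FakeOpDesc& front() const { return *ops_.front(); }

 private:
  bool need_update_{false};
  std::deque<std::unique_ptr<FakeOpDesc>> ops_;
};

int main() {
  FakeBlockDesc block;
  auto op = std::make_unique<FakeOpDesc>();
  op->type = "some_op";  // hypothetical op type
  block.PrependAllocatedOp(std::move(op));
  std::cout << "first op: " << block.front().type << "\n";
  return 0;
}
```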

paddle/fluid/framework/data_device_transform_test.cu

Lines changed: 1 addition & 2 deletions

```diff
@@ -32,8 +32,7 @@ struct AddFunctor {
 
 class OpKernelTestProtoAndCheckerMaker : public OpProtoAndCheckerMaker {
  public:
-  OpKernelTestProtoAndCheckerMaker(OpProto* proto, OpAttrChecker* op_checker)
-      : OpProtoAndCheckerMaker(proto, op_checker) {
+  void Make() {
     AddInput("input", "input1 of test op");
     AddOutput("output", "output of test op");
     AddAttr<bool>("use_gpu", "force to use gpu kernel").SetDefault(false);
```
