Commit 9fda5c9

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_FLAGS_use_deterministic_algo

2 parents: c5774e3 + 494554f

130 files changed: +2975 -698 lines


benchmark/fluid/machine_translation.py

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@
 import time
 import distutils.util
 
-import paddle.v2 as paddle
+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
 import paddle.fluid.framework as framework

benchmark/fluid/mnist.py

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
 import argparse
 import time
 
-import paddle.v2 as paddle
+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.profiler as profiler
 

benchmark/fluid/resnet.py

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@
 
 import cProfile, pstats, StringIO
 
-import paddle.v2 as paddle
+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
 import paddle.fluid.profiler as profiler

benchmark/fluid/stacked_dynamic_lstm.py

Lines changed: 3 additions & 3 deletions
@@ -23,10 +23,10 @@
 import time
 
 import numpy
-import paddle.v2 as paddle
-import paddle.v2.dataset.imdb as imdb
+import paddle
+import paddle.dataset.imdb as imdb
 import paddle.fluid as fluid
-from paddle.v2 import batch
+import paddle.batch as batch
 import paddle.fluid.profiler as profiler
 
 
benchmark/fluid/vgg.py

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 import sys
 import time
 import numpy as np
-import paddle.v2 as paddle
+import paddle
 import paddle.fluid as fluid
 import paddle.fluid.core as core
 import argparse

doc/fluid/api/data/data_reader.rst

Lines changed: 2 additions & 2 deletions
@@ -56,11 +56,11 @@ DataFeeder
 Reader
 ======
 
-.. automodule:: paddle.v2.reader
+.. automodule:: paddle.reader
     :members:
     :noindex:
 
-.. automodule:: paddle.v2.reader.creator
+.. automodule:: paddle.reader.creator
     :members:
     :noindex:
 

doc/fluid/api/layers.rst

Lines changed: 9 additions & 0 deletions
@@ -479,6 +479,13 @@ label_smooth
 .. autofunction:: paddle.fluid.layers.label_smooth
     :noindex:
 
+roi_pool
+---------
+
+.. autofunction:: paddle.fluid.layers.roi_pool
+    :noindex:
+
+
 ops
 ===
 
@@ -820,3 +827,5 @@ topk
 
 .. autofunction:: paddle.fluid.layers.topk
     :noindex:
+
+

doc/fluid/design/data_type/float16.md

Lines changed: 85 additions & 7 deletions
@@ -3,7 +3,7 @@
 ## Why float16
 Half precision (float16) is a binary floating-point format that occupies 16 bits in memory. float16 is half the size of the traditional 32-bit single precision format (float) and has lower precision and a smaller range.
 
-When high precision computation is not required, using float16 data type could potentially
+When high precision computation is not required (which is usually the case at least in the deep learning inference stage), using float16 data type could potentially
 
 - reduce storage space, memory bandwidth, and power usages;
 - increase the chance of data fitting into a smaller cache of lower latency;
@@ -12,7 +12,7 @@ When high precision computation is not required, using float16 data type could p
 ## Survey of current float16 support
 A brief survey of float16 support on different compilers, hardware, and libraries can be found below. Interested readers can refer to [link1](https://github.com/PaddlePaddle/Paddle/issues/4853) and [link2](https://github.com/Xreki/Xreki.github.io/blob/master/multi_data_types_in_dl_framework/ppt/float16_and_quantized_type.md) for more info.
 
-The goal of float16 is to serve as a key for the executor to find and run the correct version of compute method specialized for float16 in operator kernel. It should be compatible with various natively supported float16 implementations including `__half` for cuda, `float16_t` for ARM, and `Eigen::half` for Eigen to make writing customized float16 kernels easier.
+The goal of float16 is to serve as a key for the executor to find and run the correct version of the compute method specialized for float16 in operator kernels. It should be compatible with various natively supported float16 implementations including `__half` for CUDA, `float16_t` for ARM, and `Eigen::half` for Eigen to make writing customized float16 kernels easier.
 
 ### Compiler
 - nvcc supports `__half` data type after CUDA 7.5.
@@ -95,11 +95,89 @@ float half_to_float(float16 h);
 ```
 which provides one-to-one conversion between float32 and float16. These two functions will do different conversion routines based on the current hardware. CUDA/ARM intrinsics will be used when the corresponding hardware is available. If the hardware or compiler level does not support float32 to float16 conversion, software emulation will be performed to do the conversion.
 
-## To do
-After float16 class is available, some of the future items are below:
+## float16 inference
+In Fluid, a neural network is represented as a protobuf message called [ProgramDesc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/program.md), whose Python wrapper is a [Program](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#program). The basic structure of a program is some nested [blocks](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#block), where each block consists of some [variable](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#variable) definitions and a sequence of [operators](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#operator). An [executor](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/executor.md) will run a given program desc by executing the sequence of operators in the entrance block of the program one by one.
 
-- Update pybind/tensor_py.h to bind c++ float16 with numpy float16.
+### Operator level requirement
+Each operator has many kernels for different data types, devices, and library types. The operator will select the appropriate kernel to run based on, among other things, the data type of the input variables. By default, every Fluid operator has a float data type kernel that takes float variables as input and generates float output.
 
-- Modify `GetKernelType()` method in `framework/operator.h` to make it compatible with float16.
+This means that if we provide float input to the first operator in a program, then each operator will use the float kernel to compute float output and send it as input to the next operator, triggering the float kernel there as well. Overall, the program will run in float mode and give us a final output of float data type.
 
-- Create a type-casting operator that can convert the data type in tensor between float16 and other types.
+The same principle applies if we want a program to run in float16 mode. We provide an input variable of float16 data type to the first operator, and then one by one, each operator in the program will run the float16 kernel (provided that each operator in this program has float16 kernels registered) until we finally obtain a float16 output variable.
+
+So the preliminary requirement for float16 inference is to add float16 kernels to the operators that are needed in a specific kind of program. For example, float16 inference on an image classification neural network like Vgg or Resnet typically requires the following operators to have float16 kernels: convolution, pooling, multiplication, addition, batch norm, dropout, relu, and softmax. Please refer to [new_op_en](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/dev/new_op_en.md) for details of how to add new kernels to an operator.
+
+### Variable level requirement
+Operators including convolution and multiplication (used in fully-connected layers) take as input not only the variables generated by the preceding operators but also [parameter](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/modules/python_api.md#parameter) variables, which contain the trained weights to apply to the input data. These weights are obtained in the Fluid training process and are by default of float data type.
+
+When these operators are running in float16 mode, the float16 kernel requires those parameter variables to contain weights of the Fluid float16 data type. Thus, we need a convenient way to convert the original float weights to float16 weights.
+
+In Fluid, we use tensors to hold the actual data of a variable on the c++ end. [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h) is used to bind c++ tensors of a certain data type with numpy arrays of the corresponding numpy data type on the Python end. Each common c++ built-in data type has a corresponding numpy data type of the same name. However, since there is no built-in float16 type in c++, we cannot directly bind the numpy float16 data type with the Fluid float16 class. Since both Fluid float16 and numpy float16 use uint16 as the internal data storage type, we use the c++ built-in type `uint16_t` and the corresponding numpy uint16 data type to bridge the gap via [Pybind](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/pybind/tensor_py.h).
+
+The following code demonstrates how to do the tensor conversion.
+```Python
+# var is the variable of float weights
+# tensor is a numpy array of data copied from the tensor data in var
+# fp16_var is the variable that will contain float16 weights converted from var
+tensor = numpy.array(var.get_tensor())
+fp16_tensor = fp16_var.get_tensor()
+
+# After the original tensor data is converted to numpy float16 data type,
+# view(numpy.uint16) is used so that the internal memory of the numpy array
+# will be reinterpreted to be of uint16 data type, which is bound to the
+# Fluid float16 class via pybind with the help of the uint16_t built-in c++ type
+fp16_tensor.set(tensor.astype(numpy.float16).view(numpy.uint16), GPUPlace)
+```
+
+### Consistent API requirement
+The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and difficult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have a consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
+
+This problem can be solved by introducing a type-casting operator which takes an input variable of a certain data type, casts it to another specified data type, and puts the cast data into the output variable. Inserting cast operators where needed can make a program internally run in float16 mode.
+
+### float16 transpiler
+With all the above requirements in mind, we designed a float16 inference transpiler that can transpile a float32 mode inference program desc into a float16 mode one.
+
+Given a float inference program and the corresponding variables of float32 weights in the [scope](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/scope.md),
+this transpiler mainly does the following modifications:
+
+1. Insert cast operators at the beginning of the program so that the input float data will be converted to float16 data type before being fed to subsequent operators to invoke the float16 kernel.
+
+2. Insert cast operators at the end of the program so that the output float16 data will be converted back to float data type before users obtain the result.
+
+3. For each parameter variable of float weights, create in the scope a corresponding variable of float16 weights converted from the corresponding float weights and add this new float16 variable to the program.
+
+4. Update the operator information in the program so that each relevant operator uses the newly created float16 variable instead of its float counterpart.
+
+Below is an example of usage:
+```Python
+# Get the float inference program
+[float_inference_program, feed_target_names,
+ fetch_targets] = fluid.io.load_inference_model(save_dirname, exe)
+
+# Prepare the float input data
+tensor_img = numpy.random.rand(1, 3, 32, 32).astype(numpy.float32)
+
+# Running inference_program in float mode
+float_results = exe.run(float_inference_program,
+                        feed={feed_target_names[0]: tensor_img},
+                        fetch_list=fetch_targets)
+
+# Use the float16 transpiler to speed up inference
+float16_inference_program = float_inference_program.clone()
+t = fluid.InferenceTranspiler()
+t.float16_transpile(float16_inference_program, GPUPlace)
+
+# Running the transpiled program in float16 mode
+float16_results = exe.run(float16_inference_program,
+                          feed={feed_target_names[0]: tensor_img},
+                          fetch_list=fetch_targets)
+```
+
+As we can see from the example above, users can simply use the `float16_transpile` method provided by the inference transpiler class on an existing float inference program to run inference in float16 mode.
+
+### Speedup on GPU
+Currently, Fluid inference in float16 mode is only supported on Nvidia GPU devices. There is no motivation to support float16 inference on non-ARM CPUs because float16 is not natively supported there and float16 calculation will only be slower than its float counterpart.
+
+Nvidia started to support its native float16 data type (which has the same internal memory representation as the Fluid float16 class) on CUDA 7.5. Moreover, float16 speedups on common computationally intensive tasks including GEMM (general matrix-matrix multiplication) and convolution are supported since cuBLAS 7.5 and cuDNN 5.0.
+
+Recently, the introduction of [tensor cores](https://devblogs.nvidia.com/programming-tensor-cores-cuda-9/) in Volta architecture GPUs and the support of tensor core calculation in CUDA 9.0 and cuDNN 7.0 make float16 truly superior to float in certain deep learning applications. Please refer to this [benchmark report](https://github.com/kexinzhao/Paddle_benchmark/blob/master/float16_benchmark.md) for more details.
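The `view(numpy.uint16)` call in the weight-conversion snippet of the design doc above only reinterprets bytes; it does not change them. A minimal sketch with plain numpy (no Paddle needed; the array values are chosen here purely for illustration) showing that the reinterpretation is lossless:

```Python
import numpy

# Hypothetical float32 weights, converted the same way as in the snippet above.
weights = numpy.array([0.5, 1.0, -2.0], dtype=numpy.float32)

fp16_weights = weights.astype(numpy.float16)  # numeric conversion to half precision
raw_bits = fp16_weights.view(numpy.uint16)    # same bytes, reinterpreted as uint16

# Viewing the uint16 buffer back as float16 recovers the exact same values,
# which is why uint16 can safely bridge numpy float16 and the Fluid float16 class.
assert numpy.array_equal(raw_bits.view(numpy.float16), fp16_weights)
```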
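For intuition about the cast operators the transpiler inserts (steps 1 and 2 above), here is a rough hand-written sketch of the same idea using `fluid.layers.cast`, the Python wrapper around a cast operator. The variable names are made up, the middle of the network is elided, and this is not the transpiler's actual implementation, which rewrites an existing program desc rather than building a new one; running the resulting program also assumes float16-capable kernels on the device.

```Python
import paddle.fluid as fluid

main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    # Users keep feeding float32 data; the program casts internally.
    image = fluid.layers.data(name='image', shape=[3, 32, 32], dtype='float32')
    # Step 1: cast the float input to float16 at the beginning of the program.
    image_fp16 = fluid.layers.cast(x=image, dtype='float16')
    # ... the network's float16 operators would run here ...
    # Step 2: cast the float16 result back to float at the end of the program.
    output = fluid.layers.cast(x=image_fp16, dtype='float32')

# The generated program desc now begins and ends with cast operators.
print([op.type for op in main.global_block().ops])
```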

doc/v2/api/data/data_reader.rst

Lines changed: 2 additions & 2 deletions
@@ -56,11 +56,11 @@ DataFeeder
 Reader
 ======
 
-.. automodule:: paddle.v2.reader
+.. automodule:: paddle.reader
     :members:
     :noindex:
 
-.. automodule:: paddle.v2.reader.creator
+.. automodule:: paddle.reader.creator
    :members:
     :noindex:
 

doc/v2/api/data/dataset.rst

Lines changed: 14 additions & 14 deletions
@@ -1,82 +1,82 @@
 Dataset
 =======
 
-.. automodule:: paddle.v2.dataset
+.. automodule:: paddle.dataset
     :members:
     :noindex:
 
 mnist
 +++++
 
-.. automodule:: paddle.v2.dataset.mnist
+.. automodule:: paddle.dataset.mnist
     :members:
     :noindex:
 
 cifar
 +++++
 
-.. automodule:: paddle.v2.dataset.cifar
+.. automodule:: paddle.dataset.cifar
     :members:
     :noindex:
 
 conll05
 +++++++
 
-.. automodule:: paddle.v2.dataset.conll05
+.. automodule:: paddle.dataset.conll05
     :members: get_dict,get_embedding,test
     :noindex:
 
 imdb
 ++++
 
-.. automodule:: paddle.v2.dataset.imdb
+.. automodule:: paddle.dataset.imdb
    :members:
     :noindex:
 
 imikolov
 ++++++++
 
-.. automodule:: paddle.v2.dataset.imikolov
+.. automodule:: paddle.dataset.imikolov
     :members:
     :noindex:
 
 movielens
 +++++++++
 
-.. automodule:: paddle.v2.dataset.movielens
+.. automodule:: paddle.dataset.movielens
     :members:
     :noindex:
 
-.. autoclass:: paddle.v2.dataset.movielens.MovieInfo
+.. autoclass:: paddle.dataset.movielens.MovieInfo
     :noindex:
-
-.. autoclass:: paddle.v2.dataset.movielens.UserInfo
+
+.. autoclass:: paddle.dataset.movielens.UserInfo
     :noindex:
 
 sentiment
 +++++++++
 
-.. automodule:: paddle.v2.dataset.sentiment
+.. automodule:: paddle.dataset.sentiment
     :members:
     :noindex:
 
 uci_housing
 +++++++++++
 
-.. automodule:: paddle.v2.dataset.uci_housing
+.. automodule:: paddle.dataset.uci_housing
     :members:
     :noindex:
 
 wmt14
 +++++
 
-.. automodule:: paddle.v2.dataset.wmt14
+.. automodule:: paddle.dataset.wmt14
     :members:
     :noindex:
 
 wmt16
 +++++
 
-.. automodule:: paddle.v2.dataset.wmt16
+.. automodule:: paddle.dataset.wmt16
     :members:
     :noindex:
