Commit 67a3277

Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_crop

2 parents 6b29904 + e1b2651

File tree

16 files changed: +473 -112 lines


benchmark/IntelOptimizedPaddle.md

Lines changed: 9 additions & 0 deletions

@@ -53,6 +53,15 @@ TBD
 
 - GoogLeNet
 
+| BatchSize | 64     | 128    | 256    |
+|-----------|--------|--------|--------|
+| OpenBLAS  | 89.52  | 96.97  | 108.25 |
+| MKLML     | 128.46 | 137.89 | 158.63 |
+| MKL-DNN   | 250.46 | 264.83 | 269.50 |
+
+chart on batch size 128
+TBD
+
 ### Laptop
 TBD
 ### Desktop
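The table added above reads most naturally as relative speedups. A small sketch computing the per-batch-size ratios from the figures in the table (the diff does not state the metric's unit — likely throughput, where higher is better — but the ratios hold either way):

```python
# GoogLeNet figures copied from the table above, keyed by batch size.
openblas = {64: 89.52, 128: 96.97, 256: 108.25}
mklml = {64: 128.46, 128: 137.89, 256: 158.63}
mkldnn = {64: 250.46, 128: 264.83, 256: 269.50}

def speedup(impl, baseline):
    """Ratio of impl to baseline for each batch size, rounded to 2 places."""
    return {bs: round(impl[bs] / baseline[bs], 2) for bs in baseline}

print(speedup(mkldnn, openblas))  # MKL-DNN is roughly 2.5x-2.8x OpenBLAS
print(speedup(mklml, openblas))
```

Note the speedup narrows as batch size grows, since OpenBLAS throughput scales better with batch size here than MKL-DNN's does.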

doc/design/reader/README.md

Lines changed: 37 additions & 33 deletions
@@ -1,25 +1,25 @@
 # Python Data Reader Design Doc
 
-At training and testing time, PaddlePaddle programs need to read data. To ease the users' work to write data reading code, we define that
+During the training and testing phases, PaddlePaddle programs need to read data. To help the users write code that performs reading input data, we define the following:
 
-- A *reader* is a function that reads data (from file, network, random number generator, etc) and yields data items.
-- A *reader creator* is a function that returns a reader function.
-- A *reader decorator* is a function, which accepts one or more readers, and returns a reader.
-- A *batch reader* is a function that reads data (from *reader*, file, network, random number generator, etc) and yields a batch of data items.
+- A *reader*: A function that reads data (from file, network, random number generator, etc) and yields the data items.
+- A *reader creator*: A function that returns a reader function.
+- A *reader decorator*: A function, which takes in one or more readers, and returns a reader.
+- A *batch reader*: A function that reads data (from *reader*, file, network, random number generator, etc) and yields a batch of data items.
 
-and provide function which converts reader to batch reader, frequently used reader creators and reader decorators.
+and also provide a function which can convert a reader to a batch reader, frequently used reader creators and reader decorators.
 
 ## Data Reader Interface
 
-Indeed, *data reader* doesn't have to be a function that reads and yields data items. It can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`):
+*Data reader* doesn't have to be a function that reads and yields data items. It can just be any function without any parameters that creates an iterable (anything can be used in `for x in iterable`) as follows:
 
 ```
 iterable = data_reader()
 ```
 
-Element produced from the iterable should be a **single** entry of data, **not** a mini batch. That entry of data could be a single item, or a tuple of items. Item should be of [supported type](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int)
+The item produced from the iterable should be a **single** entry of data and **not** a mini batch. The entry of data could be a single item or a tuple of items. Item should be of one of the [supported types](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int etc.)
 
-An example implementation for single item data reader creator:
+An example implementation for single item data reader creator is as follows:
 
 ```python
 def reader_creator_random_image(width, height):
@@ -29,7 +29,7 @@ def reader_creator_random_image(width, height):
     return reader
 ```
 
-An example implementation for multiple item data reader creator:
+An example implementation for multiple item data reader creator is as follows:
 ```python
 def reader_creator_random_image_and_label(width, height, label):
     def reader():
@@ -40,9 +40,10 @@ def reader_creator_random_image_and_label(width, height, label):
 
 ## Batch Reader Interface
 
-*batch reader* can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list must be a tuple.
+*Batch reader* can be any function without any parameters that creates an iterable (anything can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list should be a tuple.
+
+Here are some valid outputs:
 
-Here are valid outputs:
 ```python
 # a mini batch of three data items. Each data item consist three columns of data, each of which is 1.
 [(1, 1, 1),
@@ -58,20 +59,22 @@ Here are valid outputs:
 Please note that each item inside the list must be a tuple, below is an invalid output:
 ```python
 # wrong, [1,1,1] needs to be inside a tuple: ([1,1,1],).
-# Otherwise it's ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
-# or three column of datas, each of which is 1.
+# Otherwise it is ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
+# or three columns of data, each of which is 1.
 [[1,1,1],
 [2,2,2],
 [3,3,3]]
 ```
 
-It's easy to convert from reader to batch reader:
+It is easy to convert from a reader to a batch reader:
+
 ```python
 mnist_train = paddle.dataset.mnist.train()
 mnist_train_batch_reader = paddle.batch(mnist_train, 128)
 ```
 
-Also easy to create custom batch reader:
+It is also straight forward to create a custom batch reader:
+
 ```python
 def custom_batch_reader():
     while True:
@@ -85,7 +88,8 @@ mnist_random_image_batch_reader = custom_batch_reader
 
 ## Usage
 
-batch reader, mapping from item(s) read to data layer, batch size and number of total pass will be passed into `paddle.train`:
+Following is how we can use the reader with PaddlePaddle:
+The batch reader, a mapping from item(s) to data layer, the batch size and the number of total passes will be passed into `paddle.train` as follows:
 
 ```python
 # two data layer is created:
@@ -99,13 +103,13 @@ paddle.train(batch_reader, {"image":0, "label":1}, 128, 10, ...)
 
 ## Data Reader Decorator
 
-*Data reader decorator* takes a single or multiple data reader, returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use `@` syntax.
+The *Data reader decorator* takes in a single reader or multiple data readers and returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use `@` in the syntax.
 
-Since we have a strict interface for data readers (no parameter, return a single data item). Data reader can be used flexiable via data reader decorators. Following are a few examples:
+Since we have a strict interface for data readers (no parameters and return a single data item), a data reader can be used in a flexible way using data reader decorators. Following are a few examples:
 
 ### Prefetch Data
 
-Since reading data may take time and training can not proceed without data. It is generally a good idea to prefetch data.
+Since reading data may take some time and training can not proceed without data, it is generally a good idea to prefetch the data.
 
 Use `paddle.reader.buffered` to prefetch data:
 
@@ -117,9 +121,9 @@ buffered_reader = paddle.reader.buffered(paddle.dataset.mnist.train(), 100)
 
 ### Compose Multiple Data Readers
 
-For example, we want to use a source of real images (reusing mnist dataset), and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
+For example, if we want to use a source of real images (say reusing mnist dataset), and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
 
-We can do:
+We can do the following :
 
 ```python
 def reader_creator_random_image(width, height):
@@ -139,13 +143,13 @@ false_reader = reader_creator_bool(False)
 
 reader = paddle.reader.compose(paddle.dataset.mnist.train(), data_reader_creator_random_image(20, 20), true_reader, false_reader)
 # Skipped 1 because paddle.dataset.mnist.train() produces two items per data entry.
-# And we don't care second item at this time.
+# And we don't care about the second item at this time.
 paddle.train(paddle.batch(reader, 128), {"true_image":0, "fake_image": 2, "true_label": 3, "false_label": 4}, ...)
 ```
 
 ### Shuffle
 
-Given shuffle buffer size `n`, `paddle.reader.shuffle` will return a data reader that buffers `n` data entries and shuffle them before a data entry is read.
+Given the shuffle buffer size `n`, `paddle.reader.shuffle` returns a data reader that buffers `n` data entries and shuffles them before a data entry is read.
 
 Example:
 ```python
@@ -154,21 +158,21 @@ reader = paddle.reader.shuffle(paddle.dataset.mnist.train(), 512)
 
 ## Q & A
 
-### Why reader return only a single entry, but not a mini batch?
+### Why does a reader return only a single entry, and not a mini batch?
 
-Always returning a single entry make reusing existing data readers much easier (e.g., if existing reader return not a single entry but 3 entries, training code will be more complex because it need to handle cases like batch size 2).
+Returning a single entry makes reusing existing data readers much easier (for example, if an existing reader returns 3 entries instead if a single entry, the training code will be more complicated because it need to handle cases like a batch size 2).
 
-We provide function `paddle.batch` to turn (single entry) reader into batch reader.
+We provide a function: `paddle.batch` to turn (a single entry) reader into a batch reader.
 
-### Why do we need batch reader, isn't train take reader and batch_size as arguments sufficient?
+### Why do we need a batch reader, isn't is sufficient to give the reader and batch_size as arguments during training ?
 
-In most of the case, train taking reader and batch_size as arguments would be sufficent. However sometimes user want to customize order of data entries inside a mini batch. Or even change batch size dynamically.
+In most of the cases, it would be sufficient to give the reader and batch_size as arguments to the train method. However sometimes the user wants to customize the order of data entries inside a mini batch, or even change the batch size dynamically. For these cases using a batch reader is very efficient and helpful.
 
-### Why use a dictionary but not a list to provide mapping?
+### Why use a dictionary instead of a list to provide mapping?
 
-We decided to use dictionary (`{"image":0, "label":1}`) instead of list (`["image", "label"]`) is because that user can easily resue item (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or skip item (e.g., using `{"image_a":0, "label":2}`).
+Using a dictionary (`{"image":0, "label":1}`) instead of a list (`["image", "label"]`) gives the advantage that the user can easily reuse the items (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or even skip an item (e.g., using `{"image_a":0, "label":2}`).
 
-### How to create custom data reader creator
+### How to create a custom data reader creator ?
 
 ```python
 def image_reader_creator(image_path, label_path, n):
@@ -192,7 +196,7 @@ paddle.train(paddle.batch(reader, 128), {"image":0, "label":1}, ...)
 
 ### How is `paddle.train` implemented
 
-An example implementation of paddle.train could be:
+An example implementation of paddle.train is:
 
 ```python
 def train(batch_reader, mapping, batch_size, total_pass):
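The reader / batch-reader contract described in this design doc can be sketched in plain Python, with no PaddlePaddle dependency. `simple_batch` below is a hypothetical stand-in for `paddle.batch`, shown only to illustrate the interface:

```python
import random

def reader_creator_random_image(width, height):
    # A reader creator: returns a parameterless reader whose iterable
    # yields SINGLE entries. Per the design doc, each entry is a tuple;
    # here each entry holds one flat "image" as a list of floats.
    def reader():
        while True:
            yield ([random.random() for _ in range(width * height)],)
    return reader

def simple_batch(reader, batch_size):
    # Hypothetical stand-in for paddle.batch: wraps a single-entry
    # reader into a batch reader yielding lists of batch_size tuples.
    def batch_reader():
        batch = []
        for entry in reader():
            batch.append(entry)
            if len(batch) == batch_size:
                yield batch
                batch = []
    return batch_reader

batch_reader = simple_batch(reader_creator_random_image(2, 2), 128)
first_batch = next(iter(batch_reader()))
print(len(first_batch))      # 128
print(type(first_batch[0]))  # <class 'tuple'>
```

Because both sides of the contract are just parameterless callables returning iterables, decorators such as shuffle or compose can wrap any reader without knowing where its data comes from.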

paddle/operators/conv_op.h

Lines changed: 27 additions & 35 deletions
@@ -38,7 +38,7 @@ inline bool IsExpand(std::vector<int64_t>& filter_dim,
                      std::vector<int>& dilations) {
   bool filter_1 = true, strides_1 = true, padding_0 = true, dilation_1 = true;
   for (size_t j = 0; j < strides.size(); ++j) {
-    filter_1 = filter_1 && (static_cast<int>(filter_dim[j]) == 1);
+    filter_1 = filter_1 && (static_cast<int>(filter_dim[j + 2]) == 1);
     strides_1 = strides_1 && (strides[j] == 1);
     padding_0 = padding_0 && (paddings[j] == 0);
     dilation_1 = dilation_1 && (dilations[j] == 1);
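The hunk above fixes an indexing bug: `filter_dim` is now the full filter shape `{k_o, k_i, k_h, k_w}`, so the spatial kernel sizes live at index `j + 2`; the old `filter_dim[j]` tested the output/input channel counts instead. A minimal Python sketch of the corrected check (not the actual Paddle code):

```python
def is_expand(filter_dim, strides, paddings, dilations):
    # A convolution reduces to a plain GEMM (no im2col/vol2col expansion)
    # only for a 1x1 filter with stride 1, padding 0 and dilation 1.
    # filter_dim is the FULL shape {k_o, k_i, k_h, k_w}; spatial sizes
    # start at index j + 2, which is what the fix above accounts for.
    filter_1 = strides_1 = padding_0 = dilation_1 = True
    for j in range(len(strides)):
        filter_1 &= filter_dim[j + 2] == 1
        strides_1 &= strides[j] == 1
        padding_0 &= paddings[j] == 0
        dilation_1 &= dilations[j] == 1
    return not (filter_1 and strides_1 and padding_0 and dilation_1)

# 1x1 conv, stride 1, no padding, no dilation: no expansion needed
print(is_expand([64, 3, 1, 1], [1, 1], [0, 0], [1, 1]))  # False
# 3x3 conv: im2col expansion required
print(is_expand([64, 3, 3, 3], [1, 1], [0, 0], [1, 1]))  # True
```

With the old `filter_dim[j]` indexing, the first example would have tested the channel counts (64 and 3) and wrongly concluded expansion was needed.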
@@ -91,32 +91,28 @@ class GemmConvKernel : public framework::OpKernel<T> {
 
     const int batch_size = static_cast<int>(input->dims()[0]);
 
-    // filter_shape_vec: {k_h, k_w} or {k_d, k_h, k_w}
+    // filter_shape_vec: {k_o, k_i, k_h, k_w} or {k_o, k_i, k_d, k_h, k_w}
     std::vector<int64_t> filter_shape_vec(framework::vectorize(filter.dims()));
-    filter_shape_vec.erase(filter_shape_vec.begin(),
-                           filter_shape_vec.begin() + 2);
-
-    // output_shape_vec: {o_h, o_w} or {o_d, o_h, o_w}
+    // output_shape_vec: {o_n, o_c, o_h, o_w} or {o_n, o_c, o_d, o_h, o_w}
     std::vector<int64_t> output_shape_vec(framework::vectorize(output->dims()));
-    output_shape_vec.erase(output_shape_vec.begin(),
-                           output_shape_vec.begin() + 2);
 
     // use col_shape in the im2col calculation
     // col_shape_vec: {i_c/g, k_h, k_w, o_h, o_w} or {i_c/g, k_d, k_h, k_w, o_d,
     // o_h, o_w}
-    std::vector<int64_t> col_shape_vec;
-    col_shape_vec.push_back(input->dims()[1] / groups);
-    col_shape_vec.insert(col_shape_vec.end(), filter_shape_vec.begin(),
-                         filter_shape_vec.end());
-    col_shape_vec.insert(col_shape_vec.end(), output_shape_vec.begin(),
-                         output_shape_vec.end());
+    size_t data_dim = filter_shape_vec.size() - 2;
+    std::vector<int64_t> col_shape_vec(1 + 2 * data_dim);
+    col_shape_vec[0] = input->dims()[1] / groups;
+    for (size_t j = 0; j < data_dim; ++j) {
+      col_shape_vec[j + 1] = filter_shape_vec[j + 2];
+      col_shape_vec[j + 1 + data_dim] = output_shape_vec[j + 2];
+    }
     framework::DDim col_shape(framework::make_ddim(col_shape_vec));
 
     // use col_matrix_shape in the gemm calculation
     // size: (i_c/g * k_h * k_w, o_h * o_w) or (i_c/g * k_d * k_h * k_w, o_d *
     // o_h * o_w)
     framework::DDim col_matrix_shape =
-        framework::flatten_to_2d(col_shape, filter_shape_vec.size() + 1);
+        framework::flatten_to_2d(col_shape, data_dim + 1);
 
     bool is_expand = IsExpand(filter_shape_vec, strides, paddings, dilations);
     Tensor col;
@@ -159,13 +155,13 @@ class GemmConvKernel : public framework::OpKernel<T> {
         col.ShareDataWith(in_slice);
         col_matrix.ShareDataWith(col);
         col_matrix.Resize(col_matrix_shape);
-      } else if (filter_shape_vec.size() == 2) {
+      } else if (data_dim == 2U) {
         // im2col
         im2col(context.device_context(), in_slice, dilations, strides,
                std::vector<int>{paddings[0], paddings[1], paddings[0],
                                 paddings[1]},
               &col);
-      } else if (filter_shape_vec.size() == 3) {
+      } else if (data_dim == 3U) {
         // vol2col
         vol2col(context.device_context(), in_slice, dilations, strides,
                 paddings, &col);
@@ -206,34 +202,30 @@ class GemmConvGradKernel : public framework::OpKernel<T> {
 
     const int batch_size = static_cast<int>(input->dims()[0]);
 
-    // filter_shape_vec: {k_h, k_w} or {k_d, k_h, k_w}
+    // filter_shape_vec: {k_o, k_i, k_h, k_w} or {k_o, k_i, k_d, k_h, k_w}
     std::vector<int64_t> filter_shape_vec(framework::vectorize(filter.dims()));
-    filter_shape_vec.erase(filter_shape_vec.begin(),
-                           filter_shape_vec.begin() + 2);
-
-    // output_shape_vec: {o_h, o_w} or {o_d, o_h, o_w}
+    // output_shape_vec: {o_n, o_c, o_h, o_w} or {o_n, o_c, o_d, o_h, o_w}
     std::vector<int64_t> output_shape_vec(
         framework::vectorize(output_grad->dims()));
-    output_shape_vec.erase(output_shape_vec.begin(),
-                           output_shape_vec.begin() + 2);
 
     // use col_shape in the im2col calculation
     // col_shape_vec: {i_c/g, k_h, k_w, o_h, o_w} or {i_c/g, k_d, k_h, k_w, o_d,
     // o_h, o_w}
-    std::vector<int64_t> col_shape_vec;
-    col_shape_vec.push_back(input->dims()[1] / groups);
-    col_shape_vec.insert(col_shape_vec.end(), filter_shape_vec.begin(),
-                         filter_shape_vec.end());
-    col_shape_vec.insert(col_shape_vec.end(), output_shape_vec.begin(),
-                         output_shape_vec.end());
+    size_t data_dim = filter_shape_vec.size() - 2;
+    std::vector<int64_t> col_shape_vec(1 + 2 * data_dim);
+    col_shape_vec[0] = input->dims()[1] / groups;
+    for (size_t j = 0; j < data_dim; ++j) {
+      col_shape_vec[j + 1] = filter_shape_vec[j + 2];
+      col_shape_vec[j + 1 + data_dim] = output_shape_vec[j + 2];
+    }
     framework::DDim col_shape(framework::make_ddim(col_shape_vec));
 
     // use col_matrix_shape in the gemm calculation
     // size: (i_c/g * k_h * k_w, o_h * o_w)
     // or
     // (i_c/g * k_d * k_h * k_w, o_d * o_h * o_w)
     framework::DDim col_matrix_shape =
-        framework::flatten_to_2d(col_shape, filter_shape_vec.size() + 1);
+        framework::flatten_to_2d(col_shape, data_dim + 1);
 
     framework::DDim input_shape = framework::slice_ddim(
         input->dims(), 1, static_cast<int>(input->dims().size()));
@@ -294,12 +286,12 @@ class GemmConvGradKernel : public framework::OpKernel<T> {
                                out_grad_slice, false, T(1.0), &col_matrix,
                                T(0.0));
 
-        if (is_expand && filter_shape_vec.size() == 2) {
+        if (is_expand && data_dim == 2U) {
           col2im(context.device_context(), col, dilations, strides,
                  std::vector<int>{paddings[0], paddings[1], paddings[0],
                                   paddings[1]},
                  &in_grad_slice);
-        } else if (is_expand && filter_shape_vec.size() == 3) {
+        } else if (is_expand && data_dim == 3U) {
           col2vol(context.device_context(), col, dilations, strides, paddings,
                   &in_grad_slice);
         }
@@ -328,12 +320,12 @@ class GemmConvGradKernel : public framework::OpKernel<T> {
         col.ShareDataWith(in_slice);
         col_matrix.ShareDataWith(col);
         col_matrix.Resize(col_matrix_shape);
-      } else if (filter_shape_vec.size() == 2) {
+      } else if (data_dim == 2U) {
         im2col(context.device_context(), in_slice, dilations, strides,
                std::vector<int>{paddings[0], paddings[1], paddings[0],
                                 paddings[1]},
               &col);
-      } else if (filter_shape_vec.size() == 3) {
+      } else if (data_dim == 3U) {
         vol2col(context.device_context(), in_slice, dilations, strides,
                 paddings, &col);
       }
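The conv_op.h refactor keeps the full filter and output shape vectors (instead of erasing the leading channel/batch dimensions) and builds `col_shape_vec` by index arithmetic, with `data_dim` counting only the spatial dimensions. A Python sketch of the same shape computation, with illustrative shapes (not Paddle code):

```python
def conv_col_shape(filter_shape, output_shape, in_channels, groups):
    # Mirrors the refactored C++ logic. filter_shape is the full
    # {k_o, k_i, k_h, k_w} (or its 3D variant), output_shape is the full
    # {o_n, o_c, o_h, o_w}; spatial dimensions start at index 2.
    data_dim = len(filter_shape) - 2
    col_shape = [0] * (1 + 2 * data_dim)
    col_shape[0] = in_channels // groups          # i_c/g
    for j in range(data_dim):
        col_shape[j + 1] = filter_shape[j + 2]            # k_h, k_w, ...
        col_shape[j + 1 + data_dim] = output_shape[j + 2]  # o_h, o_w, ...
    return col_shape

# 2D conv: 64 output channels, 3x3 kernel, 3 input channels, 32x32 output
print(conv_col_shape([64, 3, 3, 3], [8, 64, 32, 32], 3, 1))
# [3, 3, 3, 32, 32]  i.e. {i_c/g, k_h, k_w, o_h, o_w}
```

Flattening this shape to 2D at axis `data_dim + 1` then yields the `(i_c/g * k_h * k_w, o_h * o_w)` matrix used in the GEMM, which is why the diff replaces `filter_shape_vec.size() + 1` (now off by two) with `data_dim + 1`.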
