
Commit 89e63b1

Merge remote-tracking branch 'upstream/develop' into factorization_machine_layer

2 parents: b80cdce + d883547

File tree

83 files changed: +2102 −7566 lines


benchmark/IntelOptimizedPaddle.md

Lines changed: 28 additions & 8 deletions

@@ -12,11 +12,11 @@ Machine:
 
 System: CentOS release 6.3 (Final), Docker 1.12.1.
 
-PaddlePaddle: paddlepaddle/paddle:latest (TODO: will rerun after 0.11.0)
-
-- MKL-DNN tag v0.10
-- MKLML 2018.0.20170720
+PaddlePaddle: paddlepaddle/paddle:latest (for MKLML and MKL-DNN), paddlepaddle/paddle:latest-openblas (for OpenBLAS)
+- MKL-DNN tag v0.11
+- MKLML 2018.0.1.20171007
 - OpenBLAS v0.2.20
+(TODO: will rerun after 0.11.0)
 
 On each machine, we will test and compare the performance of training on single node using MKL-DNN / MKLML / OpenBLAS respectively.
@@ -31,17 +31,37 @@ Input image size - 3 * 224 * 224, Time: images/second
 
 | BatchSize | 64 | 128 | 256 |
 |--------------|-------| -----| --------|
-| OpenBLAS | 7.82 | 8.62 | 10.34 |
-| MKLML | 11.02 | 12.86 | 15.33 |
-| MKL-DNN | 27.69 | 28.8 | 29.27 |
+| OpenBLAS | 7.80 | 9.00 | 10.80 |
+| MKLML | 12.12 | 13.70 | 16.18 |
+| MKL-DNN | 28.46 | 29.83 | 30.44 |
+
+chart on batch size 128
+TBD
+
+- ResNet-50
+
+| BatchSize | 64 | 128 | 256 |
+|--------------|-------| ------| -------|
+| OpenBLAS | 25.22 | 25.68 | 27.12 |
+| MKLML | 32.52 | 31.89 | 33.12 |
+| MKL-DNN | 81.69 | 82.35 | 84.08 |
 
 
 chart on batch size 128
 TBD
 
-- ResNet
 - GoogLeNet
 
+| BatchSize | 64 | 128 | 256 |
+|--------------|-------| ------| -------|
+| OpenBLAS | 89.52 | 96.97 | 108.25 |
+| MKLML | 128.46| 137.89| 158.63 |
+| MKL-DNN | 250.46| 264.83| 269.50 |
+
+chart on batch size 128
+TBD
+
 ### Laptop
 TBD
 ### Desktop
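Read together, the updated tables show roughly a 3x throughput gain for MKL-DNN over OpenBLAS at batch size 128. A quick sanity check recomputing the ratios from the table values above (a sketch; the first table's model is unnamed in this hunk):

```python
# Images/second at batch size 128, copied from the tables above.
throughput = {
    "(first table)": {"OpenBLAS": 9.00,  "MKLML": 13.70,  "MKL-DNN": 29.83},
    "ResNet-50":     {"OpenBLAS": 25.68, "MKLML": 31.89,  "MKL-DNN": 82.35},
    "GoogLeNet":     {"OpenBLAS": 96.97, "MKLML": 137.89, "MKL-DNN": 264.83},
}

for model, t in throughput.items():
    # Speedup of the MKL-DNN build relative to the OpenBLAS build.
    speedup = t["MKL-DNN"] / t["OpenBLAS"]
    print("%-14s MKL-DNN vs OpenBLAS: %.2fx" % (model, speedup))
```

The ratios come out to about 3.31x, 3.21x, and 2.73x respectively, so the relative ranking MKL-DNN > MKLML > OpenBLAS holds across all three networks.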

doc/design/reader/README.md

Lines changed: 37 additions & 33 deletions

@@ -1,25 +1,25 @@
 # Python Data Reader Design Doc
 
-At training and testing time, PaddlePaddle programs need to read data. To ease the users' work to write data reading code, we define that
+During the training and testing phases, PaddlePaddle programs need to read data. To help the users write code that reads input data, we define the following:
 
-- A *reader* is a function that reads data (from file, network, random number generator, etc) and yields data items.
-- A *reader creator* is a function that returns a reader function.
-- A *reader decorator* is a function, which accepts one or more readers, and returns a reader.
-- A *batch reader* is a function that reads data (from *reader*, file, network, random number generator, etc) and yields a batch of data items.
+- A *reader*: A function that reads data (from file, network, random number generator, etc.) and yields the data items.
+- A *reader creator*: A function that returns a reader function.
+- A *reader decorator*: A function that takes in one or more readers and returns a reader.
+- A *batch reader*: A function that reads data (from *reader*, file, network, random number generator, etc.) and yields a batch of data items.
 
-and provide function which converts reader to batch reader, frequently used reader creators and reader decorators.
+and also provide a function which can convert a reader to a batch reader, as well as frequently used reader creators and reader decorators.
 
 ## Data Reader Interface
 
-Indeed, *data reader* doesn't have to be a function that reads and yields data items. It can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`):
+A *data reader* doesn't have to be a function that reads and yields data items. It can be any function without parameters that creates an iterable (anything that can be used in `for x in iterable`), as follows:
 
 ```
 iterable = data_reader()
 ```
 
-Element produced from the iterable should be a **single** entry of data, **not** a mini batch. That entry of data could be a single item, or a tuple of items. Item should be of [supported type](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int)
+The item produced from the iterable should be a **single** entry of data and **not** a mini batch. The entry of data could be a single item or a tuple of items. Each item should be one of the [supported types](http://www.paddlepaddle.org/doc/ui/data_provider/pydataprovider2.html?highlight=dense_vector#input-types) (e.g., numpy 1d array of float32, int, list of int, etc.)
 
-An example implementation for single item data reader creator:
+An example implementation for a single item data reader creator is as follows:
 
 ```python
 def reader_creator_random_image(width, height):
@@ -29,7 +29,7 @@ def reader_creator_random_image(width, height):
     return reader
 ```
 
-An example implementation for multiple item data reader creator:
+An example implementation for a multiple item data reader creator is as follows:
 ```python
 def reader_creator_random_image_and_label(width, height, label):
     def reader():
@@ -40,9 +40,10 @@ def reader_creator_random_image_and_label(width, height, label):
 
 ## Batch Reader Interface
 
-*batch reader* can be any function with no parameter that creates a iterable (anything can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list must be a tuple.
+A *batch reader* can be any function without parameters that creates an iterable (anything that can be used in `for x in iterable`). The output of the iterable should be a batch (list) of data items. Each item inside the list should be a tuple.
+
+Here are some valid outputs:
 
-Here are valid outputs:
 ```python
 # a mini batch of three data items. Each data item consists of three columns of data, each of which is 1.
 [(1, 1, 1),
@@ -58,20 +59,22 @@ Here are some valid outputs:
 Please note that each item inside the list must be a tuple. Below is an invalid output:
 ```python
 # wrong, [1,1,1] needs to be inside a tuple: ([1,1,1],).
-# Otherwise it's ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
-# or three column of datas, each of which is 1.
+# Otherwise it is ambiguous whether [1,1,1] means a single column of data [1, 1, 1],
+# or three columns of data, each of which is 1.
 [[1,1,1],
  [2,2,2],
  [3,3,3]]
 ```
 
-It's easy to convert from reader to batch reader:
+It is easy to convert from a reader to a batch reader:
+
 ```python
 mnist_train = paddle.dataset.mnist.train()
 mnist_train_batch_reader = paddle.batch(mnist_train, 128)
 ```
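A plain-Python sketch of what such a conversion does — the `batch` helper below is a hypothetical stand-in for `paddle.batch`, not its actual implementation:

```python
def batch(reader, batch_size):
    """Turn a single-entry reader into a batch reader (sketch)."""
    def batch_reader():
        buf = []
        for entry in reader():
            buf.append(entry)
            if len(buf) == batch_size:
                yield buf
                buf = []
        if buf:  # trailing, partially filled batch
            yield buf
    return batch_reader

# Usage with a toy reader that yields single (feature, label) tuples:
toy_reader = lambda: iter([(i, i % 2) for i in range(5)])
batches = list(batch(toy_reader, 2)())
# batches == [[(0, 0), (1, 1)], [(2, 0), (3, 1)], [(4, 0)]]
```

Note that the result keeps the reader contract: the returned `batch_reader` is itself a parameterless function that creates an iterable, which is what lets decorators compose.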

-Also easy to create custom batch reader:
+It is also straightforward to create a custom batch reader:
+
 ```python
 def custom_batch_reader():
     while True:
@@ -85,7 +88,8 @@ mnist_random_image_batch_reader = custom_batch_reader
 
 ## Usage
 
-batch reader, mapping from item(s) read to data layer, batch size and number of total pass will be passed into `paddle.train`:
+Following is how we can use the reader with PaddlePaddle:
+The batch reader, a mapping from the item(s) read to the data layers, the batch size and the number of total passes will be passed into `paddle.train` as follows:
 
 ```python
 # two data layers are created:
@@ -99,13 +103,13 @@ paddle.train(batch_reader, {"image":0, "label":1}, 128, 10, ...)
 
 ## Data Reader Decorator
 
-*Data reader decorator* takes a single or multiple data reader, returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use `@` syntax.
+A *data reader decorator* takes in one or more data readers and returns a new data reader. It is similar to a [python decorator](https://wiki.python.org/moin/PythonDecorators), but it does not use the `@` syntax.
 
-Since we have a strict interface for data readers (no parameter, return a single data item). Data reader can be used flexiable via data reader decorators. Following are a few examples:
+Since we have a strict interface for data readers (no parameters, return a single data item), a data reader can be used in a flexible way using data reader decorators. Following are a few examples:
 
 ### Prefetch Data
 
-Since reading data may take time and training can not proceed without data. It is generally a good idea to prefetch data.
+Since reading data may take some time and training cannot proceed without data, it is generally a good idea to prefetch the data.
 
 Use `paddle.reader.buffered` to prefetch data:
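The idea behind such a buffered reader can be sketched with a background thread that eagerly pulls entries into a bounded queue while the consumer trains. This is an illustrative sketch only, not PaddlePaddle's implementation of `paddle.reader.buffered`:

```python
import queue
import threading

def buffered(reader, size):
    """Reader decorator sketch: prefetch up to `size` entries in a background thread."""
    end = object()  # sentinel marking reader exhaustion

    def buffered_reader():
        q = queue.Queue(maxsize=size)  # bounds memory used by prefetched entries

        def fill():
            for entry in reader():
                q.put(entry)   # blocks when the buffer is full
            q.put(end)

        threading.Thread(target=fill, daemon=True).start()
        while True:
            entry = q.get()
            if entry is end:
                return
            yield entry
    return buffered_reader

prefetched = buffered(lambda: iter(range(10)), 4)
assert list(prefetched()) == list(range(10))
```

The decorator again returns a parameterless reader, so it can be stacked with other decorators such as shuffle or compose.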

@@ -117,9 +121,9 @@ buffered_reader = paddle.reader.buffered(paddle.dataset.mnist.train(), 100)
 
 ### Compose Multiple Data Readers
 
-For example, we want to use a source of real images (reusing mnist dataset), and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
+For example, say we want to use a source of real images (reusing the mnist dataset) and a source of random images as input for [Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
 
-We can do:
+We can do the following:
 
 ```python
 def reader_creator_random_image(width, height):
@@ -139,13 +143,13 @@ false_reader = reader_creator_bool(False)
 
 reader = paddle.reader.compose(paddle.dataset.mnist.train(), data_reader_creator_random_image(20, 20), true_reader, false_reader)
 # Skipped 1 because paddle.dataset.mnist.train() produces two items per data entry.
-# And we don't care second item at this time.
+# And we don't care about the second item at this time.
 paddle.train(paddle.batch(reader, 128), {"true_image":0, "fake_image": 2, "true_label": 3, "false_label": 4}, ...)
 ```
 
 ### Shuffle
 
-Given shuffle buffer size `n`, `paddle.reader.shuffle` will return a data reader that buffers `n` data entries and shuffle them before a data entry is read.
+Given the shuffle buffer size `n`, `paddle.reader.shuffle` returns a data reader that buffers `n` data entries and shuffles them before a data entry is read.
 
 Example:
 ```python
@@ -154,21 +158,21 @@ reader = paddle.reader.shuffle(paddle.dataset.mnist.train(), 512)
 
 ## Q & A
 
-### Why reader return only a single entry, but not a mini batch?
+### Why does a reader return only a single entry, and not a mini batch?
 
-Always returning a single entry make reusing existing data readers much easier (e.g., if existing reader return not a single entry but 3 entries, training code will be more complex because it need to handle cases like batch size 2).
+Returning a single entry makes reusing existing data readers much easier (for example, if an existing reader returns 3 entries instead of a single entry, the training code will be more complicated because it needs to handle cases like a batch size of 2).
 
-We provide function `paddle.batch` to turn (single entry) reader into batch reader.
+We provide a function, `paddle.batch`, to turn a (single entry) reader into a batch reader.
 
-### Why do we need batch reader, isn't train take reader and batch_size as arguments sufficient?
+### Why do we need a batch reader, isn't it sufficient to give the reader and batch_size as arguments during training?
 
-In most of the case, train taking reader and batch_size as arguments would be sufficent. However sometimes user want to customize order of data entries inside a mini batch. Or even change batch size dynamically.
+In most cases, it would be sufficient to give the reader and batch_size as arguments to the train method. However, sometimes the user wants to customize the order of data entries inside a mini batch, or even change the batch size dynamically. For these cases, using a batch reader is very helpful.
 
-### Why use a dictionary but not a list to provide mapping?
+### Why use a dictionary instead of a list to provide mapping?
 
-We decided to use dictionary (`{"image":0, "label":1}`) instead of list (`["image", "label"]`) is because that user can easily resue item (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or skip item (e.g., using `{"image_a":0, "label":2}`).
+Using a dictionary (`{"image":0, "label":1}`) instead of a list (`["image", "label"]`) gives the advantage that the user can easily reuse items (e.g., using `{"image_a":0, "image_b":0, "label":1}`) or even skip an item (e.g., using `{"image_a":0, "label":2}`).
 
-### How to create custom data reader creator
+### How to create a custom data reader creator?
 
 ```python
 def image_reader_creator(image_path, label_path, n):
@@ -192,7 +196,7 @@ paddle.train(paddle.batch(reader, 128), {"image":0, "label":1}, ...)
 
 ### How is `paddle.train` implemented
 
-An example implementation of paddle.train could be:
+An example implementation of `paddle.train` is:
 
 ```python
 def train(batch_reader, mapping, batch_size, total_pass):
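The hunk above cuts off at the `train` signature. A minimal sketch of how such a loop could consume a batch reader over multiple passes (illustrative only, with a hypothetical `train_a_batch` stub standing in for the real forward/backward step):

```python
def train(batch_reader, mapping, batch_size, total_pass):
    processed = []

    def train_a_batch(batch):
        # Stub: a real implementation would run forward/backward here,
        # using `mapping` to route tuple items to data layers.
        processed.append(len(batch))

    for pass_idx in range(total_pass):
        for mini_batch in batch_reader():  # this loop is one pass
            train_a_batch(mini_batch)
    return processed

# Two passes over a reader yielding two batches of sizes 4 and 3.
counts = train(lambda: iter([[(1, 0)] * 4, [(2, 1)] * 3]),
               {"image": 0, "label": 1}, 4, 2)
assert counts == [4, 3, 4, 3]
```

The key point mirrored from the design doc: `batch_reader()` is called once per pass, so each pass re-creates the iterable from the start.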

paddle/capi/examples/model_inference/dense/main.c

Lines changed: 17 additions & 16 deletions

@@ -1,5 +1,6 @@
 #include <paddle/capi.h>
 #include <time.h>
+
 #include "../common/common.h"
 
 #define CONFIG_BIN "./trainer_config.bin"
@@ -27,20 +28,19 @@ int main() {
   CHECK(paddle_arguments_resize(in_args, 1));
 
   // Create input matrix.
-  paddle_matrix mat = paddle_matrix_create(/* sample_num */ 10,
+  paddle_matrix mat = paddle_matrix_create(/* sample_num */ 1,
                                            /* size */ 784,
                                            /* useGPU */ false);
   srand(time(0));
 
-  std::vector<paddle_real> input;
-  input.resize(784 * 10);
+  paddle_real* array;
+
+  // Get first row.
+  CHECK(paddle_matrix_get_row(mat, 0, &array));
 
-  for (int i = 0; i < input.size(); ++i) {
-    input[i] = rand() / ((float)RAND_MAX);
+  for (int i = 0; i < 784; ++i) {
+    array[i] = rand() / ((float)RAND_MAX);
   }
-
-  // Set value for the input matrix
-  CHECK(paddle_matrix_set_value(mat, input.data()));
 
   CHECK(paddle_arguments_set_value(in_args, 0, mat));
 
@@ -53,17 +53,18 @@ int main() {
 
   CHECK(paddle_arguments_get_value(out_args, 0, prob));
 
-  std::std::vector<paddle_real> result;
-  int height;
-  int width;
+  uint64_t height;
+  uint64_t width;
 
-  CHECK(paddle_matrix_get_shape(prob, &height, &width);
-  result.resize(height * width);
-  CHECK(paddle_matrix_get_value(prob, result.data()));
+  CHECK(paddle_matrix_get_shape(prob, &height, &width));
+  CHECK(paddle_matrix_get_row(prob, 0, &array));
 
-  printf("Prob: ");
+  printf("Prob: \n");
   for (int i = 0; i < height * width; ++i) {
-    printf("%.2f ", result[i]);
+    printf("%.4f ", array[i]);
+    if ((i + 1) % width == 0) {
+      printf("\n");
+    }
   }
   printf("\n");
 
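The change above drops the intermediate `std::vector` buffer (which also would not compile in a `.c` file) and instead writes directly through the row pointer returned by `paddle_matrix_get_row`. In numpy terms, the difference is roughly between copying a separately built buffer in and mutating a row view in place — an illustrative analogy only:

```python
import numpy as np

mat = np.zeros((1, 784), dtype=np.float32)

# Old approach: build a separate buffer, then copy it into the matrix.
buf = np.random.rand(784).astype(np.float32)
mat[0, :] = buf  # extra allocation plus a copy

# New approach: obtain a view of row 0 and fill it in place.
row = mat[0]  # a view into mat's storage, like paddle_matrix_get_row
row[:] = np.random.rand(784).astype(np.float32)

assert row.base is mat  # the view shares memory with the matrix
```

Either way the matrix ends up filled with random values in [0, 1); the in-place version simply avoids the temporary buffer.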

paddle/gserver/layers/BatchNormBaseLayer.cpp

Lines changed: 1 addition & 0 deletions

@@ -41,6 +41,7 @@ bool BatchNormBaseLayer::init(const LayerMap& layerMap,
     useGlobalStats_ = config_.use_global_stats();
   }
   movingAvgFraction_ = config_.moving_average_fraction();
+  epsilon_ = config_.epsilon();
 
   weight_.reset(new Weight(1, channels_, parameters_[0]));
   movingMean_.reset(new Weight(1, channels_, parameters_[1]));

paddle/gserver/layers/BatchNormBaseLayer.h

Lines changed: 2 additions & 0 deletions

@@ -94,6 +94,8 @@ class BatchNormBaseLayer : public Layer {
   bool useGlobalStats_;
   // used to compute moving mean and variance.
   real movingAvgFraction_;
+  // Epsilon is a small constant added to the variance in batch normalization for numerical stability.
+  real epsilon_;
 };
 
 }  // namespace paddle

paddle/gserver/layers/BatchNormalizationLayer.cpp

Lines changed: 2 additions & 4 deletions

@@ -22,8 +22,6 @@ namespace paddle {
 
 REGISTER_LAYER(batch_norm, BatchNormalizationLayer);
 
-const real BatchNormalizationLayer::EPS = 1E-5;
-
 bool BatchNormalizationLayer::init(const LayerMap& layerMap,
                                    const ParameterMap& parameterMap) {
   /* Initialize the basic parent class */
@@ -53,7 +51,7 @@ void BatchNormalizationLayer::calMeanAndStd(const MatrixPtr& mat) {
 
   calMovingMeanAndVar();
 
-  savedInvVar_->subScalar(-EPS);
+  savedInvVar_->subScalar(-epsilon_);
   savedInvVar_->sqrt2(*savedInvVar_);
 }
 
@@ -74,7 +72,7 @@ void BatchNormalizationLayer::setMeanAndStd() {
   savedInvVar_->copyFrom(*(movingVar_->getW()));
   savedInvVar_->downClip(real(0.0));
 
-  savedInvVar_->subScalar(-EPS);
+  savedInvVar_->subScalar(-epsilon_);
   savedInvVar_->sqrt2(*savedInvVar_);
 }
 
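The deleted `EPS` constant (1E-5) and the new configurable `epsilon_` member play the same numerical role: `subScalar(-epsilon_)` adds epsilon to the variance before `sqrt2`, so the layer effectively computes `sqrt(var + epsilon)`, which keeps the standard deviation away from zero when a channel's variance vanishes. A numpy sketch of why the term matters, using the old default value:

```python
import numpy as np

x = np.full(8, 3.0, dtype=np.float32)  # a constant activation channel
var = x.var()                          # exactly 0 -> 1/sqrt(var) would blow up
epsilon = 1e-5                         # the old EPS default; now read from config

# Mirrors subScalar(-epsilon) followed by sqrt2: std = sqrt(var + epsilon).
inv_std = 1.0 / np.sqrt(var + epsilon)
normalized = (x - x.mean()) * inv_std

assert np.isfinite(inv_std)            # finite even for a zero-variance channel
assert np.allclose(normalized, 0.0)    # a constant channel normalizes to zeros
```

Making epsilon a config field (rather than a compile-time constant) lets users trade stability against normalization sharpness per layer.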

paddle/gserver/layers/BatchNormalizationLayer.h

Lines changed: 0 additions & 3 deletions

@@ -39,9 +39,6 @@ class BatchNormalizationLayer : public BatchNormBaseLayer {
   void backward(const UpdateCallback& callback = nullptr) override;
 
 protected:
-  /// Epsilon value used in the batch normalization formula.
-  static const real EPS;
-
   /// Load pre-calculated mean and std.
   void setMeanAndStd();
 
