During training or serving, we provide an evaluation function to measure the model's performance, e.g., accuracy or precision. In the operator-based framework design, data go through the network pipeline batch by batch, so inside an operator we can only calculate the metrics for one mini-batch. We need to provide a mechanism to calculate the metrics over every N passes/batches the user wants.

### Evaluator Design

Currently, every operation is expressed in the graph. We divide the evaluator process into three steps:

1. Initialize the metric state and add it to the block.
2. Calculate the statistics of the metric state for every mini-batch. A single operator is only responsible for calculating the necessary statistics for one mini-batch. For example, the accuracy operator only processes one mini-batch of data each time it is run.
3. Merge the mini-batch statistics to form the evaluation result over multiple mini-batches. For distributed or multi-GPU training, also aggregate the values from the different devices.
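
To make these three steps concrete, here is a minimal, framework-independent sketch using accuracy as the example metric; the counter names and the per-batch statistics are made up for illustration.

```python
# Step 1: initialize the metric states.
num_correct, num_samples = 0, 0

# Step 2: each mini-batch contributes only its own statistics
# (here: number of correct predictions and batch size).
minibatch_stats = [(48, 64), (60, 64)]  # illustrative values only
for batch_correct, batch_size in minibatch_stats:
    num_correct += batch_correct
    num_samples += batch_size

# Step 3: merge the accumulated statistics into the multi-batch result.
accuracy = num_correct / num_samples
print(accuracy)  # 108 / 128 = 0.84375
```
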
### Implementation

This design is shown below as a Python API.
Each metric operator needs to calculate the metric statistics and return the batch-aware states; the Python side is responsible for accumulating the states across each pass.

```python
class Evaluator(object):
    """
    Evaluator base class.
    """

    def __init__(self, name, **kwargs):
        """
        Different evaluators may have different metric states. E.g., Accuracy
        needs two variables, the total and correct sample counts; Auc needs
        four variables, `true_positives`, `true_negatives`, `false_positives`
        and `false_negatives`. So every evaluator should create the variables
        it needs and append them to the main_program.

        The initialization of an Evaluator is responsible for creating the
        metric states and appending them to the main_program.
        """
        pass

    def _update_ops(self, input, label, **kwargs):
        """
        Add the mini-batch metric calculation operators to the main_program.
        Add increment operators to accumulate the metric states.
        """
        pass

    def reset(self, executor, reset_program=None):
        """
        Reset the metric states at the beginning of each pass or
        user-specified batch interval.
        Execute the reset_program to reset the states.
        """
        pass

    def eval(self, executor, eval_program=None):
        """
        Merge the mini-batch statistics to form the evaluation result for
        multiple mini-batches.
        """
        pass
```
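
As a usage illustration only, the toy subclass below shows the intended life cycle of an evaluator: reset the states at the start of a pass, update them once per mini-batch, and merge them with `eval` at the end. It assumes the `Evaluator` base class sketched above, keeps its states as plain Python counters instead of `main_program` variables, and the name `SimpleAccuracy` is not part of the proposal.

```python
# Toy illustration of the Evaluator life cycle (reset -> per-batch update -> eval).
# It relies on the Evaluator base class sketched above and keeps its metric
# states as plain Python counters instead of main_program variables.
class SimpleAccuracy(Evaluator):
    def __init__(self, name="accuracy", **kwargs):
        self.total = 0    # metric state: samples seen so far in this pass
        self.correct = 0  # metric state: correct predictions so far in this pass

    def _update_ops(self, input, label, **kwargs):
        # In the real design this would append operators to the main_program;
        # here we accumulate the per-mini-batch statistics directly.
        self.correct += sum(int(p == l) for p, l in zip(input, label))
        self.total += len(label)

    def reset(self, executor=None, reset_program=None):
        self.total, self.correct = 0, 0

    def eval(self, executor=None, eval_program=None):
        return self.correct / self.total if self.total else 0.0


acc = SimpleAccuracy()
acc.reset()
acc._update_ops(input=[1, 0, 1, 1], label=[1, 0, 0, 1])  # mini-batch 1: 3/4 correct
acc._update_ops(input=[0, 0, 1, 0], label=[0, 1, 1, 0])  # mini-batch 2: 3/4 correct
print(acc.eval())  # 0.75 over both mini-batches
```
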
# RNNOp design

This document describes the RNN (Recurrent Neural Network) operator and how it is implemented in PaddlePaddle. The RNN op requires that all instances in a mini-batch have the same length. We will have a more flexible dynamic RNN operator in the future.

## RNN Algorithm Implementation

<p align="center">
<img src="./images/rnn.jpg"/>
</p>

The above diagram shows an RNN unrolled into a full network.

There are several important concepts here:

- *step-net*: the sub-graph that runs at each step.
- *memory*, $h_t$, the state of the current step.
- *ex-memory*, $h_{t-1}$, the state of the previous step.
- *initial memory value*, the memory of the first (initial) step.

### Step-scope

There could be local variables defined in each step-net. The PaddlePaddle runtime realizes these variables in *step-scopes*, which are created for each step.

<p align="center">
<img src="./images/rnn.png"/><br/>
Figure 2 illustrates the RNN's data flow
</p>

Please be aware that every step runs the same step-net. Each step does the following:

1. Creates the step-scope.
2. Initializes the local variables, including step-outputs, in the step-scope.
3. Runs the step-net, which uses the variables mentioned above.

The RNN operator will compose its output from the step outputs in each of the step scopes.

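The following toy sketch (plain Python dictionaries, not the actual PaddlePaddle scope implementation) illustrates this per-step flow: one scope per step, local variables realized in that scope, and the same step-net run against each scope.

```python
# A toy illustration of step-scopes: each step gets its own scope (a dict),
# local variables are realized in it, and the same step-net runs on each scope.
def step_net(scope):
    scope["out"] = scope["x"] * 2          # some local per-step computation

inputs = [1.0, 2.0, 3.0]                   # one step-input per time step
step_scopes = []
for x_t in inputs:
    scope = {"x": x_t}                     # 1. create the step-scope, 2. realize locals
    step_net(scope)                        # 3. run the same step-net on this scope
    step_scopes.append(scope)

# The RNN op composes its output from the step outputs in each step scope.
output = [s["out"] for s in step_scopes]
print(output)                              # [2.0, 4.0, 6.0]
```
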
### Memory and Ex-memory

Let's give more details about memory and ex-memory using a simple example:

$$
h_t = U h_{t-1} + W x_t
$$

where $h_t$ and $h_{t-1}$ are the memory and ex-memory (previous memory) of step $t$, respectively.

In the implementation, we can make an ex-memory variable either "refer to" the memory variable of the previous step, or copy the memory value of the previous step to the current ex-memory variable.
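
For concreteness, here is a small NumPy sketch of this recurrence (plain NumPy rather than the RNN operator itself) that copies the previous memory into the ex-memory at every step; the tensor shapes are arbitrary.

```python
import numpy as np

T, input_dim, hidden_dim = 4, 3, 5
W = np.random.randn(hidden_dim, input_dim)
U = np.random.randn(hidden_dim, hidden_dim)
x = np.random.randn(T, input_dim)   # one instance, T time steps
h = np.zeros(hidden_dim)            # initial memory value

for t in range(T):
    ex_memory = h.copy()            # ex-memory h_{t-1}: copied from the previous step
    h = U @ ex_memory + W @ x[t]    # memory h_t: state of the current step
    print(t, h)
```
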
### Usage in Python
For more information on Block, please refer to the [design doc](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md).

We can define an RNN's step-net using a Block:

```python
import paddle as pd

X = some_op()  # X is some operator's output and is a LoDTensor
```

- `rnn.add_input`: indicates that the parameter is a variable that will be segmented into step-inputs.
- `rnn.add_memory`: creates a variable used as the memory.
- `rnn.add_outputs`: marks the variables that will be concatenated across steps into the RNN output.
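
Putting these calls together, a step-net for the recurrence $h_t = U h_{t-1} + W x_t$ could look roughly like the sketch below. Since this is a design proposal, every name other than `rnn.add_input`, `rnn.add_memory`, `rnn.add_outputs`, and `import paddle as pd` (e.g. `pd.create_rnn_op`, `rnn.stepnet`, `h.pre_state`, `h.update`, `pd.fc`, `some_op`) is an illustrative assumption, not a confirmed API.

```python
import paddle as pd

X = some_op()           # a LoDTensor produced by an earlier operator
boot_state = some_op()  # initial memory value

rnn = pd.create_rnn_op()                  # assumed constructor for the RNN operator
with rnn.stepnet():
    x = rnn.add_input(X)                  # X is segmented into step-inputs
    h = rnn.add_memory(init=boot_state)   # per-step memory h_t
    # h.pre_state() stands for the ex-memory h_{t-1} from the previous step
    h.update(pd.fc(input=[x, h.pre_state()], size=32))
    rnn.add_outputs(h)                    # h across steps forms the RNN output

out = rnn()  # output composed from the step outputs
```
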
### Nested RNN and LoDTensor

An RNN whose step-net includes other RNN operators is known as a *nested RNN*.

For example, we could have a 2-level RNN, where the top level corresponds to paragraphs and the lower level corresponds to sentences. Each step of the higher-level RNN also receives an input from the corresponding step of the lower level, in addition to the output from the previous time step at the same level.

The following figure illustrates feeding text into the lower level, one sentence per step, and feeding the step outputs to the top level. The final top-level output is about the whole text.

In the above example, the construction of the `top_level_rnn` calls `lower_level_rnn`. The input is an LoD Tensor. The top-level RNN segments the input text data into paragraphs, and the lower-level RNN segments each paragraph into sentences.

By default, the `RNNOp` will concatenate the outputs from all the time steps. If `output_all_steps` is set to False, it will only output the final time step.