
Commit 210790d

Merge pull request #11521 from luotao1/inference_doc
add doc for inference_transpiler
2 parents 49f23e6 + 8c2a834 commit 210790d

File tree

1 file changed, +41 -20 lines changed


python/paddle/fluid/transpiler/inference_transpiler.py

Lines changed: 41 additions & 20 deletions
@@ -19,16 +19,30 @@
 
 
 class InferenceTranspiler:
+    '''
+    Convert the fluid program to optimized inference program.
+
+    There are several optimizations, only fuse batch normalization is supported now.
+
+    Examples:
+
+    .. code-block:: python
+
+        # As InferenceTranspiler will modify the original program,
+        # please clone before use it.
+        inference_transpiler_program = program.clone()
+        t = fluid.InferenceTranspiler()
+        t.transpile(inference_transpiler_program, place)
+    '''
+
     def transpile(self, program, place, scope=None):
         '''
-        Transpile the program. Support only fuse batch normalization now.
-
-        :param program: program to transpile
-        :type program: Program
-        :param place: inference place
-        :type place: Place
-        :param scope: inference scope
-        :type scope: Scope or None
+        Run the transpiler.
+
+        Args:
+            program (Program): program to transpile
+            place (Place): inference place
+            scope (Scope|None): inference Scope
        '''
        if not isinstance(program, Program):
            raise TypeError("program should be as Program type")
@@ -49,36 +63,43 @@ def fuse_batch_norm(self, program, place, scope):
         can be integrated with them. Doing so will give us a forward acceleration,
         especially in environments like mobile or embedded.
 
-        For input X:
-        - Conv process: X = input * W + bias
-        - Batch norm process: X' = (X - mean) / std
-        - Scale Process: Y = a * X' + b
+        For input :math:`X`:
+
+        - Conv process: :math:`X = input * W + bias`
+        - Batch norm process: :math:`X' = (X - mean) / std`
+        - Scale Process: :math:`Y = a * X' + b`
 
         After fuse into one operation:
 
-        Y = (input * W + bias - mean) / std * a + b
-          = input * a * W / std + ((bias - mean) / std * a + b)
+        .. math::
+
+            Y &= (input * W + bias - mean) / std * a + b \\\\
+              &= input * a * W / std + ((bias - mean) / std * a + b)
 
         The operator transformation is:
+
         - before:
+
           - conv->batch_norm->any_other_op (bias == 0)
           - conv->elementwise_add->batch_norm->any_other_op (bias != 0)
+
         - after:
+
           - conv->elementwise_add->any_other_op
 
         The transpile stages are:
+
         1. insert elementwise_add op when bias == 0.
         2. fuse the batch_norm's parameters to conv and elementwise_add operators.
         3. remove batch_norm ops which are not used in any other ops.
         4. adjust the input of any_other_op to be the output of elementwise_add operator.
         5. remove unused variables.
 
-        :param program: program to transpile
-        :type program: Program
-        :param place: inference place
-        :type place: Place
-        :param scope: inference scope
-        :type scope: Scope
+        Args:
+            program (Program): program to transpile
+            place (Place): inference place
+            scope (Scope): inference Scope
+
         '''
         self.scope = scope
         self.place = place
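The fusion identity documented in the hunk above can be checked numerically. The sketch below is illustrative only (plain NumPy with elementwise per-channel parameters standing in for the conv weights; the names `W_fused` and `bias_fused` are not the transpiler's internals), but it folds batch norm into the weight and bias exactly as the `.. math::` block derives:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))    # "input" activations, 3 channels
W = rng.standard_normal(3)         # conv weight (elementwise stand-in)
bias = rng.standard_normal(3)      # conv bias (the elementwise_add term)
mean = rng.standard_normal(3)      # batch-norm running mean
std = rng.uniform(0.5, 2.0, 3)     # batch-norm std, i.e. sqrt(var + eps)
a = rng.standard_normal(3)         # batch-norm scale
b = rng.standard_normal(3)         # batch-norm shift

# Unfused: conv, then normalize, then scale and shift.
y_unfused = a * ((x * W + bias - mean) / std) + b

# Fused: fold batch norm into the conv weight and the elementwise_add bias,
# per  Y = input * (a * W / std) + ((bias - mean) / std * a + b).
W_fused = a * W / std
bias_fused = (bias - mean) / std * a + b
y_fused = x * W_fused + bias_fused

print(np.allclose(y_unfused, y_fused))  # True
```

Because the fused form needs only one multiply and one add per element at inference time, the batch_norm op can be dropped entirely, which is the speedup the docstring claims for mobile and embedded targets.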
