[ONNX (Open Neural Network Exchange)](https://github.com/onnx/onnx) bridges different deep learning frameworks by providing an open source graph format for models. Models trained in other frameworks can be converted into the ONNX format, so that inference can be executed using ONNX's built-in operators. With the inverse conversion, different frameworks can in principle share any model supported by ONNX. Most mainstream frameworks have now joined the ONNX community, e.g. Caffe2, TensorFlow, and MXNet, and there is a tendency for more and more vendors to support ONNX or even choose it as the only machine learning engine on their devices.
Therefore, it is necessary to enable conversion between PaddlePaddle and ONNX. This design doc aims to implement the converter, mainly for the ONNX conversion of models in Fluid, possibly including some important models in the V2 format in the future. A complete converter should be bidirectional, but considering the relative importance, the conversion from Fluid to ONNX will be implemented first.
One thing that makes this doable in Fluid's case is its use of a static IR, the `ProgramDesc`, as opposed to the dynamic graphs created by frameworks like PyTorch.
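Because the program is a static description, an exporter can visit every recorded operator in order, with no tracing required. The sketch below illustrates this idea with a plain dict standing in for the real protobuf `ProgramDesc` message; the field names are simplified assumptions, not Fluid's actual schema.

```python
# Simplified stand-in for Fluid's protobuf ProgramDesc: a program is a list
# of blocks, each block a list of ops recorded in execution order.
program = {
    "blocks": [
        {
            "ops": [
                {"type": "mul", "inputs": ["x", "w"], "outputs": ["xw"]},
                {"type": "elementwise_add", "inputs": ["xw", "b"], "outputs": ["y"]},
            ]
        }
    ]
}

def walk_ops(program):
    """Yield every op in every block, in recorded order."""
    for block in program["blocks"]:
        for op in block["ops"]:
            yield op

op_types = [op["type"] for op in walk_ops(program)]
print(op_types)  # ['mul', 'elementwise_add']
```

An exporter built on such a walk can translate op by op, which is exactly what a dynamic-graph framework cannot offer without first tracing an execution.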
### How it works
As the first step, Fluid must cover [all the listed operators](https://github.com/onnx/onnx/blob/master/docs/Operators.md) in ONNX. This work is being carried out, and only a few minor operators need to be newly added or enhanced, which will not delay the converter or the testing of common models.
About the converter, several things need to be considered:
- OP-level conversion
  - How to map the inputs, attributes, weights, and outputs of each operator.
- Data type mapping
- Network representation adaptation
  - The model in Fluid is represented by nested `Block`s; how to parse it and reconstruct it in the ONNX graph format, and vice versa.
- Model validation
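The OP-level and data-type mapping above boils down to lookup tables plus per-operator translation code. The sketch below shows the shape such tables might take; the Fluid op names are real, but the exact pairings and the dtype spellings are illustrative assumptions, not the converter's actual implementation.

```python
# Illustrative op-level mapping table: Fluid op type -> ONNX op type.
# The pairings shown are plausible but simplified; real translation also
# has to remap inputs, attributes, and weights per operator.
FLUID_TO_ONNX_OP = {
    "elementwise_add": "Add",
    "mul": "MatMul",
    "relu": "Relu",
    "softmax": "Softmax",
}

# Illustrative data-type mapping (string spellings are placeholders for the
# frameworks' actual dtype enums).
FLUID_TO_ONNX_DTYPE = {
    "float32": "FLOAT",
    "float64": "DOUBLE",
    "int32": "INT32",
    "int64": "INT64",
}

def map_op(fluid_op_type):
    """Look up the ONNX counterpart of a Fluid op, failing loudly on gaps."""
    try:
        return FLUID_TO_ONNX_OP[fluid_op_type]
    except KeyError:
        raise NotImplementedError(
            "no ONNX mapping for Fluid op %r" % fluid_op_type)

print(map_op("relu"))  # Relu
```

Failing loudly on unmapped ops keeps gaps in coverage visible during the testing of common models.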
<img src="./images/project_structure.png"/>
</p>
The project contains four important parts:
* **fluid**: The directory that contains wrappers for Fluid-related APIs. Fluid provides some low-level APIs to parse or generate the inference model. However, directly using these low-level APIs makes the code tediously long. This module wraps the low-level APIs to provide simplified interfaces.
* **onnx**: ONNX uses protobuf to save computation flow and model weights. This directory consists of scripts responsible for parsing and generating an ONNX binary model.
* **onnx_fluid**: Concepts in Fluid like ```program```, ```block``` etc. don't have direct corresponding concepts in ONNX. Even though both contain the operator concept, adaptation is also necessary for many operators. This directory consists of the most important modules responsible for the actual conversion. Adaptation should be provided for concepts at different levels, like Fluid ```program/block``` to ONNX graph, Fluid operators to ONNX operators, etc.
* **convert.py**: The interface exposed to users.
### Usage
The converter is designed to be very easy to use. Bidirectional conversion between a Fluid inference model and an ONNX binary model is supported. Model validation is also provided to verify the correctness of the converted model.
The conversion and model validation will be completed consecutively, finally outputting a readable model structure description. For the reverse conversion, users only need to exchange the input and output.
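The numeric check behind model validation can be sketched as follows: feed the same input to the original and the converted model and compare outputs within a tolerance. The two "models" here are plain Python functions standing in for actual inference executors, purely for illustration.

```python
# Two stand-in "models": a faithful conversion should agree with the
# original up to floating-point error on the same input.
def original_model(x):
    return [2.0 * v + 1.0 for v in x]

def converted_model(x):
    return [v * 2.0 + 1.0 for v in x]

def validate(model_a, model_b, sample, tol=1e-5):
    """Return True if the two models' outputs agree within tol elementwise."""
    out_a, out_b = model_a(sample), model_b(sample)
    max_diff = max(abs(a - b) for a, b in zip(out_a, out_b))
    return max_diff <= tol

print(validate(original_model, converted_model, [0.5, -1.5, 3.0]))  # True
```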
### Challenges and mitigation
#### Cycles
Cycles are unsupported in ONNX. In Paddle, the `while` op is the most prominent example of a cycle.
*Resolution*: We won't support models with `while` ops that can't be substituted until ONNX adds support for such ops.
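This resolution implies a simple export-time guard: scan the program's ops and reject the model if an unsupported cyclic op remains. A minimal sketch, assuming the program is reduced to a flat list of op type strings:

```python
# Ops that introduce cycles and therefore cannot be exported to ONNX today.
UNSUPPORTED_OPS = {"while"}

def exportable(op_types):
    """Check a flat list of op type strings for unexportable cyclic ops."""
    blocked = UNSUPPORTED_OPS.intersection(op_types)
    if blocked:
        return False, sorted(blocked)
    return True, []

print(exportable(["mul", "relu"]))           # (True, [])
print(exportable(["mul", "while", "relu"]))  # (False, ['while'])
```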
#### Sequences
Sequence-processing operators like `sequence_expand`, `sequence_reshape`, `sequence_concat`, and `sequence_pool` are not supported by ONNX either, because ONNX does not support non-padded data types like LoDTensor.
*Resolution*: Since the runtimes using our ONNX-exported graphs won't be using LoDTensors in the first place, such sequence operators should be mapped to ONNX ops that perform the necessary transposing with knowledge of the padding and shape of the Tensors.
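The padding step this mapping relies on can be sketched as follows: a ragged batch (the kind of variable-length data a LoDTensor represents) is turned into a dense, padded tensor plus explicit lengths that padded-tensor ops can consume. The function name and representation are illustrative.

```python
def pad_batch(sequences, pad_value=0.0):
    """Pad a ragged batch (list of variable-length lists) to a dense
    rectangular batch, returning the padded data and original lengths."""
    max_len = max(len(s) for s in sequences)
    padded = [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]
    lengths = [len(s) for s in sequences]
    return padded, lengths

batch = [[1.0, 2.0, 3.0], [4.0], [5.0, 6.0]]
padded, lengths = pad_batch(batch)
print(padded)   # [[1.0, 2.0, 3.0], [4.0, 0.0, 0.0], [5.0, 6.0, 0.0]]
print(lengths)  # [3, 1, 2]
```

Keeping the lengths alongside the padded data is what lets downstream ops ignore the padding positions.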
#### Ops that can't easily be mapped
There are ops that just aren't possible to map today:
**Control flow operators**
Paddle supports control flow ops like `If/Else` and `Switch` (if we ignore the CSP operations like `select` for now). ONNX has `If` support in the experimental phase.
*Resolution*: Map Paddle's `If/Else` to ONNX's `If`, but ignore other control flow operators until ONNX brings support for them.
**Non-existent in Fluid**
There are several ONNX operators that are not available in Fluid today, e.g. `InstanceNormalization`, `RandomUniform`, `Unsqueeze`, etc.
*Resolution*: For the initial phase, we can choose not to support ops that our models don't need and that are consequently not available in Fluid. However, for ops that we think might be necessary for Fluid users as well, we must implement them on our side and support the ONNX conversion to them. This list is TBD.
**Concurrency**
ONNX does not have any considerations for concurrency right now.
*Resolution*: There are two ways to approach this:
a. We choose to not support concurrent models.
b. We only support `go_op`s (basically threads) shallowly. This could mean that we enqueue `go_op` ops prior to gradient calculations OR even prior to the entire graph, and that's it, since `go_op`s do not have support for backprop anyway. One of the core target use cases of `go_op`, batch reading, can be handled through this approach.
**Overloaded in Fluid**
There are ops in ONNX whose job can't be accomplished by a single corresponding Paddle operator (e.g. ), but only by a collection of operators.
*Resolution*: Chain multiple Paddle operators.
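The chaining idea can be sketched in miniature: one ONNX op is realized as a pipeline of simpler Paddle-style primitives. The op and primitive names below are illustrative assumptions; the real converter would emit operator descriptions, not Python closures.

```python
# Two toy "primitives" standing in for simple Paddle operators.
def scale(factor):
    return lambda xs: [factor * x for x in xs]

def shift(offset):
    return lambda xs: [x + offset for x in xs]

def chain(*ops):
    """Compose primitives left to right, like chaining operators in a graph."""
    def run(xs):
        for op in ops:
            xs = op(xs)
        return xs
    return run

# A hypothetical affine-style ONNX op realized as scale followed by shift.
affine = chain(scale(2.0), shift(1.0))
print(affine([0.0, 1.0, 2.0]))  # [1.0, 3.0, 5.0]
```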
#### Lack of LoDTensors
As stated above, ONNX only supports simple Tensor data.
(...)
TBD
#### Reconstruction from deprecated ONNX ops
For higher-level Fluid ops, such as a few offered by the `nn` layer that do not have direct corresponding mappings but can be converted to ONNX by chaining a series of ops without cycles, it would be useful to map them back to the higher-level Fluid ops once converted back from the deprecated ONNX graphs.
*Resolution*: Record the subgraphs produced by the Paddle -> ONNX lowering. When converting back from ONNX, if we encounter identical subgraphs by doing a forward search, we can replace them with the matching higher-level Fluid op.
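The forward search can be sketched as a pattern scan over the imported op sequence: wherever a known lowering pattern appears, it is folded back into one higher-level op. The pattern and op names are illustrative; a real implementation would match on the graph structure, not just op order.

```python
def fold_pattern(op_types, pattern, replacement):
    """Scan op_types left to right; fold each occurrence of `pattern`
    (a contiguous run of op types) into the single `replacement` op."""
    result, i = [], 0
    while i < len(op_types):
        if op_types[i:i + len(pattern)] == pattern:
            result.append(replacement)  # matched lowering -> higher-level op
            i += len(pattern)
        else:
            result.append(op_types[i])
            i += 1
    return result

# E.g. a fully-connected layer lowered to MatMul + Add gets folded back.
imported = ["MatMul", "Add", "Relu", "MatMul", "Add"]
print(fold_pattern(imported, ["MatMul", "Add"], "fc"))
# ['fc', 'Relu', 'fc']
```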
### Supported models
Potential risks may come from the conversion of sequence-related models, including LoDTensor and the ```if/else``` and ```while``` operators.
So a good choice is to focus on some important feedforward models first, then implement some simple recurrent models.
- Feedforward models: common models selected in PaddleBook, e.g. VGG, ResNet and some other models proposed by application teams.
- Recurrent models: language models, stacked LSTMs, etc.