
Commit 56b8784 (parent 5550bd1)
Author: Varun Arora
Update design doc based on early implementation

1 file changed: 35 additions, 36 deletions
# Background

[ONNX (Open Neural Network Exchange)](https://github.com/onnx/onnx) bridges different deep learning frameworks by providing an open-source graph format for models. Models trained in other frameworks can be converted into the ONNX format to execute inference by utilizing ONNX's built-in operators - this conversion is called a **frontend**. With the inverse conversion (called a **backend**), different frameworks can in principle share any model supported by ONNX. Most mainstream frameworks have joined the ONNX community, e.g. Caffe2, PyTorch, and MXNet, and there is momentum driving more and more vendors to support ONNX or even choose it as the only machine learning runtime on their devices.

Therefore, it is necessary to enable conversion between PaddlePaddle and ONNX. This design doc is aimed at implementing a convertor, mainly for converting between **Fluid** models and ONNX (it is very likely that we will support older v2 models in the future). A complete convertor should be bidirectional - with a frontend AND a backend - but considering relative importance, we will start with the frontend, i.e. converting Fluid models to ONNX models.

One thing that makes this doable in Fluid's case is the use of a static IR - the `ProgramDesc` - as opposed to the dynamic graphs created by frameworks like PyTorch.

# How it works

ONNX maintains a versioned [working list of operators](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

When prioritizing the implementation of a frontend over a backend, the choice of which Fluid -> ONNX operators to cover comes down to the choice of models to be supported (see the `Supported models` section). Eventually, this will allow us to reach very wide coverage of all operators.

Here are a few major considerations when it comes to converting models:

- **Op-level conversion**: How to map the inputs, attributes, and outputs of each Paddle operator to those of the corresponding ONNX operator. In several cases, these require transformations. For each direction (frontend vs. backend), a different conversion mapping is needed.
- **Parameter (weight) initialization**: Setting initial parameters on different nodes.
- **Tensor data type mapping**: Note that some ONNX data types are not supported in Fluid.
- **Network representation adaptation**: A Fluid `ProgramDesc` may include nested blocks. Since ONNX is free of nesting, the `ProgramDesc` ops need to be traversed so that only ops from the global scope in the root block are included. The variables used as inputs and outputs should also be in this scope.
- **Model validation**: Two kinds of validation are necessary:
  1. We need to ensure that the inference outputs of the ops run inside a model are the same as those produced when running the converted ONNX ops through an alternative ONNX backend.
  2. Checking that the generated nodes on the graph pass ONNX's internal checkers.
- **Versioning**: ONNX versions its op listing across releases. In fact, it has versioning on 3 different levels: ops, graphs, and ONNX models. This requires that we are conscious about versioning the convertor, and that we update tests and op convertor logic for each release. It also implies that we release pre-trained ONNX models upon each version release.
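To make the op-level conversion concrete, here is a minimal sketch of what a frontend mapping table might look like. The op names, attribute layouts, and the `map_op` helper are illustrative assumptions, not the convertor's actual registry:

```python
# Hypothetical frontend registry: Fluid op type -> (ONNX op type,
# attribute translator). Real mappings may also need to rename
# inputs/outputs or insert extra transformation nodes.
FLUID_TO_ONNX = {
    "mul": ("MatMul", lambda attrs: {}),
    "relu": ("Relu", lambda attrs: {}),
    # The softmax attribute layout here is assumed for illustration.
    "softmax": ("Softmax", lambda attrs: {"axis": attrs.get("axis", -1)}),
}

def map_op(fluid_type, fluid_attrs):
    """Translate one Fluid op description into an ONNX op type plus
    ONNX-style attributes; raises KeyError for unmapped ops."""
    onnx_type, translate = FLUID_TO_ONNX[fluid_type]
    return onnx_type, translate(fluid_attrs)

print(map_op("softmax", {"axis": 1}))  # ('Softmax', {'axis': 1})
```

In a real convertor, the tuple on the right-hand side would feed ONNX's `make_node` helper to produce actual graph nodes.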

# Project structure

<p align="center">
<img src="./images/project_structure.png"/>
</p>

The project contains four important parts:

* **fluid**: The directory that contains wrappers for Fluid-related APIs. Fluid provides some low-level APIs to parse or generate the inference model. However, directly using these low-level APIs makes the code tediously long. This module wraps the low-level APIs to provide simplified interfaces.

* **onnx**: This is a Python package provided by ONNX containing helpers for creating nodes, graphs, and eventually binary protobuf models with initializer parameters.

* **onnx_fluid**: Contains the two-way mapping (Fluid -> ONNX ops and ONNX -> Fluid ops). Called from `convert.py`, the program uses this mapping along with modifier functions to construct ONNX nodes with the help of ONNX's `make_node` helper. It also contains the mapping between data types and the tensor deprecation / amplification logic.

* **convert.py**: The interface exposed to users. This traverses the global program blocks/variables and constructs the writable model.
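As a hedged sketch of the traversal idea, the snippet below keeps only ops from the root (global) block, since ONNX graphs cannot nest. The `Block` class is a stand-in invented for illustration, not Fluid's actual API:

```python
class Block:
    """Stand-in for a Fluid block: a list of op type names plus any
    nested sub-blocks (e.g. the body of a control-flow op)."""
    def __init__(self, ops, sub_blocks=()):
        self.ops = list(ops)
        self.sub_blocks = list(sub_blocks)

def root_block_ops(blocks):
    """Block 0 is the global scope of a ProgramDesc; nested blocks
    are intentionally skipped because ONNX has no nesting."""
    return list(blocks[0].ops)

program = [Block(["mul", "relu"], sub_blocks=[Block(["increment"])])]
print(root_block_ops(program))  # ['mul', 'relu']
```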

# Usage

The converter is designed to be very easy to use. Bidirectional conversion between a Fluid inference model and an ONNX binary model will be supported. Model validation will also be provided to verify the correctness of the converted model.

* Fluid inference model to ONNX binary model

```
python convert.py --fluid_model <fluid inference model> --onnx_model <ONNX model> --validate True
```

The conversion and model validation will be completed consecutively, finally outputting a readable model structure description. For the converse conversion, users only need to exchange the input and output.
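One way the numerical-validation step could be sketched is an element-wise comparison of flattened inference outputs from the two runtimes; `outputs_match` and its tolerances are names and values invented here, not part of the tool:

```python
import math

def outputs_match(fluid_out, onnx_out, rel_tol=1e-5, abs_tol=1e-7):
    """Compare flattened inference outputs element-wise, allowing
    for small floating-point drift between runtimes."""
    if len(fluid_out) != len(onnx_out):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(fluid_out, onnx_out))

# Dummy data stands in for real inference results from both runtimes.
print(outputs_match([0.5, 1.25], [0.5000001, 1.25]))  # True
```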

# Challenges and mitigation

## Cycles

Cycles are unsupported in ONNX. In Paddle, the `while` op is the most prominent example of a cycle.

*Resolution*: We won't support models with `while`s which can't be substituted until ONNX adds support for such ops.
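A pre-flight check for cycle-introducing ops might look like the following sketch; the set of op names is an assumption for illustration only:

```python
UNSUPPORTED_CYCLE_OPS = {"while", "while_grad"}  # assumed Paddle op names

def check_exportable(op_types):
    """Reject a program containing cycle ops before attempting the
    Fluid -> ONNX conversion, since ONNX graphs must be acyclic."""
    blockers = sorted(set(op_types) & UNSUPPORTED_CYCLE_OPS)
    if blockers:
        raise ValueError("ops unsupported by ONNX export: " + ", ".join(blockers))

check_exportable(["mul", "relu"])  # acyclic program: passes silently
```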

## Sequences

Sequence-processing operators like `sequence_expand`, `sequence_reshape`, `sequence_concat`, and `sequence_pool` are not supported by ONNX either, because ONNX does not support non-padded data types like LoDTensors.

*Resolution*: Since the runtimes using our exported ONNX graphs won't be using LoDTensors in the first place, such sequence operators should be mapped to ONNX ops that perform the necessary transposing with knowledge of the padding and shape of the Tensors.
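The resolution above can be sketched as a LoD-to-padded conversion. This assumes a level-1 LoD stored as cumulative offsets (Fluid's convention); the helper name is invented for illustration:

```python
def lod_to_padded(flat_values, lod, pad_value=0.0):
    """Turn a flattened level-1 LoD batch into a padded 2-D list,
    the plain-tensor layout ONNX runtimes expect. `lod` holds
    cumulative offsets, e.g. [0, 2, 5] for sequences of length 2 and 3."""
    lengths = [lod[i + 1] - lod[i] for i in range(len(lod) - 1)]
    max_len = max(lengths)
    return [flat_values[lod[i]:lod[i + 1]] +
            [pad_value] * (max_len - lengths[i])
            for i in range(len(lengths))]

print(lod_to_padded([1, 2, 3, 4, 5], [0, 2, 5]))  # [[1, 2, 0.0], [3, 4, 5]]
```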

## Ops that can't easily be mapped

There are ops that just aren't possible to map today:

(...)

There are ops in ONNX whose job can't be accomplished by a single corresponding Paddle operator.

*Resolution*: Chain multiple Paddle operators.


## Lack of LoDTensors

As stated above, ONNX only supports simple Tensor values.

*Resolution*: Deprecate to plain old numpy-able tensors.


## Reconstruction from deprecated ONNX ops

For higher-level Fluid ops, such as a few offered by the `nn` layer that do not have direct corresponding mappings but can be converted to ONNX by chaining a series of ops without cycles, it would be useful to map them back to the higher-level Fluid ops once converted back from the deprecated ONNX graphs.

*Resolution*: Keep track of the graphs resulting from this deprecation on the Paddle -> ONNX side. When converting back from ONNX, if we encounter an identical subgraph by doing a forward search, we can replace it with the matching higher-level Fluid op.
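The forward-search idea might be sketched as pattern-folding over a topologically ordered list of op types; both the expansion pattern and the Fluid op name below are hypothetical:

```python
# Hypothetical table: a chain of ONNX op types produced when exporting
# one higher-level Fluid op, mapped back to that Fluid op's name.
KNOWN_EXPANSIONS = {
    ("MatMul", "Add", "Relu"): "fc_with_relu",  # invented Fluid op name
}

def fold_expansions(op_types):
    """Walk the op list left to right, replacing any recognized
    expansion chain with its higher-level Fluid op."""
    folded, i = [], 0
    while i < len(op_types):
        for pattern, fluid_op in KNOWN_EXPANSIONS.items():
            if tuple(op_types[i:i + len(pattern)]) == pattern:
                folded.append(fluid_op)
                i += len(pattern)
                break
        else:
            folded.append(op_types[i])
            i += 1
    return folded

print(fold_expansions(["MatMul", "Add", "Relu", "Softmax"]))
```

A production version would match on full subgraphs (inputs, outputs, and attributes), not just op-type sequences.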


# Supported models

As mentioned above, potential risks may come from the conversion of sequence-related models, involving LoDTensors and the `if/else` and `while` operators. So a good choice is to focus on some important feedforward models first, then implement some simple recurrent models.

- Feedforward models: common models selected in PaddleBook, e.g. VGG, ResNet, and some other models proposed by application teams.
- Recurrent models: language models, stacked LSTMs, etc.
