
Commit f565e5d
Author: Yibing Liu
Add background and other sections
1 parent be9977d

1 file changed: +36 −14 lines

### Background

[ONNX (Open Neural Network Exchange)](https://github.com/onnx/onnx) bridges different deep learning frameworks by providing an open-source format for models. Models trained in other frameworks can be converted into the ONNX format so that inference can be executed using ONNX's built-in operators. With the reverse conversion, different frameworks can in principle share any model supported by ONNX. Most mainstream frameworks, e.g. Caffe2, TensorFlow, and MXNet, have already joined the ONNX community, and there is a tendency for more and more vendors to support ONNX, or even to choose ONNX as the only machine learning engine on their devices.

Therefore, it is necessary to enable conversion between PaddlePaddle and ONNX. This design doc aims to implement the converter, mainly for the ONNX conversion of models in Fluid, possibly including some important models in the V2 format in the future. A complete converter should be bidirectional, but given the relative importance, the conversion from Fluid to ONNX will be implemented first.

### How it works

As a first step, Fluid must cover [all the operators listed](https://github.com/onnx/onnx/blob/master/docs/Operators.md) in ONNX. This work is being carried out, and only a few minor operators need to be newly added or enhanced, which will not block the converter or the testing of common models.

For the converter itself, several things need to be considered:

- OP-level conversion
  - How to map the inputs, attributes, weights, and outputs of each operator.
  - Data type mapping.
- Network representation adaptation
  - The model in Fluid is represented by nested `Block`s; how to parse it and reconstruct it in the ONNX graph format, and vice versa.
- Model validation
  - To ensure the correctness of the conversion. A simple way may be to generate some dummy data as the input and compare the inference results.
- Long-term support
  - As ONNX keeps evolving, a mechanism to ensure long-term support is needed.
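
As an illustration of the OP-level conversion and data type mapping above, the following is a hypothetical sketch. The dict-based operator representation and both mapping tables are assumptions for illustration only, not the converter's actual data structures or APIs:

```python
# Illustrative sketch of OP-level conversion; plain dicts stand in for Fluid's
# OpDesc and ONNX's NodeProto (all names here are assumptions, not real APIs).

# Assumed operator-type and data-type mapping tables.
FLUID_TO_ONNX_OP = {"relu": "Relu", "elementwise_add": "Add", "mul": "MatMul"}
FLUID_TO_ONNX_DTYPE = {"float32": "FLOAT", "int64": "INT64"}

def convert_op(fluid_op):
    """Map one Fluid operator's type, inputs, attributes, and outputs."""
    onnx_type = FLUID_TO_ONNX_OP.get(fluid_op["type"])
    if onnx_type is None:
        raise NotImplementedError("no ONNX counterpart for " + fluid_op["type"])
    return {
        "op_type": onnx_type,
        "inputs": list(fluid_op["inputs"]),
        "outputs": list(fluid_op["outputs"]),
        "attrs": dict(fluid_op.get("attrs", {})),
    }

node = convert_op({"type": "relu", "inputs": ["X"], "outputs": ["Y"]})
print(node["op_type"])  # Relu
```

A real implementation would additionally consult the ONNX operator schemas, since some Fluid operators map to a combination of ONNX operators rather than a single node.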

### Project structure

<p align="center">
<img src="./images/project_structure.png"/>
</p>

The project contains four important parts:

* **fluid**: The directory that contains wrappers for Fluid-related APIs. Fluid provides some low-level APIs to parse or generate the inference model, but using these low-level APIs directly makes the code tediously long. This module wraps them to provide simplified interfaces.

* **onnx**: ONNX uses protobuf to save the computation flow and model weights. This directory consists of scripts responsible for parsing and generating an ONNX binary model.

* **onnx_fluid**: Concepts in Fluid like `program` and `block` have no direct counterparts in ONNX. Even though both contain the operator concept, adaptation is still necessary for many operators. This directory consists of the most important modules, responsible for the actual conversion. Adaptation should be provided for concepts at different levels, e.g. a Fluid `program`/`block` to an ONNX graph, Fluid operators to ONNX operators, etc.

* **convert.py**: The interface exposed to users.

### Usage
The converter is designed to be easy to use. Bidirectional conversion between a Fluid inference model and an ONNX binary model is supported. Model validation is also provided to verify the correctness of the converted model.

* Fluid inference model to ONNX binary model

```
python convert.py --input <fluid inference model> --output <ONNX model> --to_validate True
```
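
For illustration, the command-line interface above could be parsed with a minimal `argparse` sketch. Only the flags shown in the usage example (`--input`, `--output`, `--to_validate`) are assumed, and the converter internals are omitted entirely:

```python
# Hypothetical sketch of convert.py's argument parsing; the flag names come
# from the usage example above, everything else is an assumption.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Convert between a Fluid inference model and an ONNX model.")
    parser.add_argument("--input", required=True,
                        help="path to the source model")
    parser.add_argument("--output", required=True,
                        help="path to write the converted model")
    parser.add_argument("--to_validate", default="False",
                        choices=["True", "False"],
                        help="whether to validate the converted model")
    return parser.parse_args(argv)

args = parse_args(["--input", "fluid_model_dir", "--output", "model.onnx"])
print(args.output)  # model.onnx
```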

The conversion and model validation will be completed consecutively, and finally a readable description of the model structure is output. For the reverse conversion, users only need to exchange the input and output.
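
The dummy-data validation idea mentioned earlier can be sketched as follows. The two `run_*` functions are placeholders for the real Fluid and ONNX inference calls and are faked here so the sketch is self-contained:

```python
# Hypothetical sketch of model validation: run the original and the converted
# model on the same dummy input and compare outputs within a tolerance.
import random

def run_fluid_model(x):   # placeholder for real Fluid inference
    return [2.0 * v + 1.0 for v in x]

def run_onnx_model(x):    # placeholder for real ONNX inference
    return [2.0 * v + 1.0000001 for v in x]  # tiny numerical difference

def validate(n_inputs=8, rtol=1e-4):
    """Compare the two models' outputs on random dummy data."""
    dummy = [random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
    a, b = run_fluid_model(dummy), run_onnx_model(dummy)
    return all(abs(x - y) <= rtol * max(abs(x), abs(y), 1.0)
               for x, y in zip(a, b))

print(validate())  # True if the two models agree within tolerance
```

A relative tolerance is used rather than exact equality because the two runtimes may differ slightly in floating-point behavior.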

### Supported models
Potential risks may come from the conversion of sequence-related models, including those that involve `LoDTensor` and the `if/else` and `while` operators. So a good choice is to focus on some important feedforward models first, then implement some simple recurrent models.

- Feedforward models: common models selected in PaddleBook, e.g. VGG, ResNet, and some other models proposed by application teams.
- Recurrent models: language model, stacked LSTMs, etc.
