Commit 49fd648

vinitra authored and Wenbing Li committed

Doc Update for Opset Versioning Explanation (#200)

* doc updates for grammar and explanations for opset versioning
* removing contribution names, as per PR suggestion
* adding details, fixing minor formatting errors
* added code for checking onnx model version and finished explanation of why target opset might be larger than the model's actual opset
* minor formatting and wording updates
* text size changes and minor spacing + content clarity edits
1 parent 223529a commit 49fd648

File tree

1 file changed: +43 −30 lines


README.md

Lines changed: 43 additions & 30 deletions
@@ -12,32 +12,32 @@ ONNXMLTools enables you to convert models from different machine learning toolkits
 * Keras
 * LightGBM (through its scikit-learn interface)

-(To convert Tensorflow models to ONNX, see [tensorflow-onnx](https://github.com/onnx/tensorflow-onnx))
-(To convert ONNX model to Core ML, see [onnx-coreml](https://github.com/onnx/onnx-coreml))\
-If you want the converted model is compatible with certain ONNX version,
-please specify the target_opset parameter on invoking convert function,
-and the following Keras converter example code shows how it works.
+To convert Tensorflow models to ONNX, see [tensorflow-onnx](https://github.com/onnx/tensorflow-onnx).

 ## Install
-You can install latest release of ONNXMLTools from pypi:
+You can install the latest release of ONNXMLTools from [PyPI](https://pypi.org/project/onnxmltools/):
 ```
 pip install onnxmltools
 ```
 or install from source:
 ```
 pip install git+https://github.com/onnx/onnxmltools
 ```
-If you choose to install `onnxmltools` from its source code, you must set an environment variable `ONNX_ML=1` before installing `onnx` package.
+If you choose to install `onnxmltools` from its source code, you must set the environment variable `ONNX_ML=1` before installing the `onnx` package.

 ## Dependencies
-This package uses ONNX, NumPy, and ProtoBuf. If you are converting a model from scikit-learn, Apple Core ML, Keras, or LightGBM, you need the following packages installed respectively:
+This package relies on ONNX, NumPy, and ProtoBuf. If you are converting a model from scikit-learn, Core ML, Keras, or LightGBM, you will need an environment with the respective package installed from the list below:
 1. scikit-learn
 2. CoreMLTools
-3. Keras (version 2.0.8 or higher) with corresponding Tensorflow version
+3. Keras (version 2.0.8 or higher) with the corresponding Tensorflow version
 4. LightGBM (scikit-learn interface)

-## Examples
-Here is a simple example to convert a Core ML model:
+# Examples
+If you want the converted ONNX model to be compatible with a certain ONNX version, please specify the target_opset parameter when invoking the convert function. The following Keras conversion example demonstrates this. You can identify the mapping from ONNX Operator Sets (referred to as opsets) to ONNX releases in the [versioning documentation](https://github.com/onnx/onnx/blob/master/docs/Versioning.md#released-versions).
+
+## CoreML to ONNX Conversion
+Here is a simple code snippet to convert a Core ML model into an ONNX model.
+
 ```python
 import onnxmltools
 import coremltools
@@ -54,7 +54,10 @@ onnxmltools.utils.save_text(onnx_model, 'example.json')
 # Save as protobuf
 onnxmltools.utils.save_model(onnx_model, 'example.onnx')
 ```
-Next, we show a simple usage of the Keras converter.
+
+## Keras to ONNX Conversion
+Next, we show an example of converting a Keras model into an ONNX model with `target_opset=7`, which corresponds to ONNX release version 1.2.
+
 ```python
 import onnxmltools
 from keras.layers import Input, Dense, Add
@@ -81,23 +84,37 @@ mapped2_2 = sub_model2(input2)
 sub_sum = Add()([mapped1_2, mapped2_2])
 keras_model = Model(inputs=[input1, input2], outputs=sub_sum)

-# Convert it!
-onnx_model = onnxmltools.convert_keras(keras_model, target_opset=8)  # target_opset is optional
-
+# Convert it! The target_opset parameter is optional.
+onnx_model = onnxmltools.convert_keras(keras_model, target_opset=7)
 ```

-# Tests converted models
+# Testing model converters

-*onnxmltools* converts models in ONNX format which
+*onnxmltools* converts models into the ONNX format which
 can then be used to compute predictions with the
-backend of your choice. However, there exists a way
-to automatically check every converter with
-[onnxruntime](https://pypi.org/project/onnxruntime/) or
-[onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/).
+backend of your choice.
+
+## Checking the operator set version of your converted ONNX model
+
+You can check the operator set of your converted ONNX model using [Netron](https://github.com/lutzroeder/Netron), a viewer for neural network models. Alternatively, you can identify your converted model's opset version with the following line of code.
+
+```python
+opset_version = onnx_model.opset_import[0].version
+```
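A model can import more than one operator set (one `opset_import` entry per domain, e.g. the default ONNX domain and `ai.onnx.ml`), so entry `[0]` is not guaranteed to be the default-domain one. A more defensive lookup can be sketched in plain Python; the `OpsetId` class below is a hypothetical stand-in for the protobuf entry, not onnxmltools' own code:

```python
from dataclasses import dataclass

# Stand-in for an ONNX opset_import entry: one per operator-set domain
# the model imports. The empty string denotes the default ONNX domain.
@dataclass
class OpsetId:
    domain: str
    version: int

def default_domain_opset(opset_import):
    # Prefer the default ONNX domain over extensions such as "ai.onnx.ml".
    for entry in opset_import:
        if entry.domain in ("", "ai.onnx"):
            return entry.version
    raise ValueError("model imports no default-domain operator set")

print(default_domain_opset([OpsetId("ai.onnx.ml", 1), OpsetId("", 7)]))  # prints 7
```

The same loop works unchanged on a real `ModelProto.opset_import`, since each entry there also exposes `domain` and `version` fields.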
+
+If the opset of your converted ONNX model is smaller than the `target_opset` number you specified in the onnxmltools.convert function, do not be alarmed. The ONNXMLTools converter works by converting each operator to the ONNX format individually and finding the corresponding opset version that it was most recently updated in. Once all of the operators are converted, the resultant ONNX model has the maximal opset version of all of its operators.
+
+To illustrate this concretely, let's consider a model with two operators, Abs and Add. As of December 2018, [Abs](https://github.com/onnx/onnx/blob/master/docs/Operators.md#abs) was most recently updated in opset 6, and [Add](https://github.com/onnx/onnx/blob/master/docs/Operators.md#add) was most recently updated in opset 7. Therefore, the converted ONNX model's opset will be 7, even if you request `target_opset=8`. The converter behaves this way to ensure backwards compatibility.
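This max-of-opsets rule can be illustrated with a short plain-Python sketch. The table below is hypothetical, hard-coding only the December 2018 values for Abs and Add quoted above; it is not the actual onnxmltools implementation:

```python
# Hypothetical table: the opset in which each operator was most recently
# updated (Abs and Add values as of December 2018).
LAST_UPDATED_IN_OPSET = {"Abs": 6, "Add": 7}

def converted_model_opset(operators):
    # The converted model carries the maximal opset version required by
    # any of its operators, even when a higher target_opset is requested.
    return max(LAST_UPDATED_IN_OPSET[op] for op in operators)

print(converted_model_opset(["Abs", "Add"]))  # prints 7, even for target_opset=8
```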
108+
109+
Documentation for the [ONNX Model format](https://github.com/onnx/onnx) and more examples for converting models from different frameworks can be found in the [ONNX tutorials](https://github.com/onnx/tutorials) repository.
97110

98111
## Test all existing converters
99112

100-
This process requires to clone the *onnxmltools* repository.
113+
There exists a way
114+
to automatically check every converter with
115+
[onnxruntime](https://pypi.org/project/onnxruntime/) or
116+
[onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/).
117+
This process requires the user to clone the *onnxmltools* repository.
101118
The following command runs all unit tests and generates
102119
dumps of models, inputs, expected outputs and converted models
103120
in folder ``TESTDUMP``.
@@ -106,25 +123,21 @@ in folder ``TESTDUMP``.
 python tests/main.py DUMP
 ```

-It requires *onnxruntime*, *numpy* for most of the models,
-*pandas* for transform related to text features,
+It requires *onnxruntime*, *numpy* for most models,
+*pandas* for transforms related to text features, and
 *scipy* for sparse features. One test also requires
 *keras* to test a custom operator. That means
 *sklearn* or any machine learning library is required.

 ## Add a new converter

 Once the converter is implemented, a unit test is added
-to test it works. At the end of the unit test, function
+to confirm that it works. At the end of the unit test, function
 *dump_data_and_model* or any equivalent function must be called
 to dump the expected output and the converted model.
 Once these files are generated, a corresponding test must
 be added in *tests_backend* to compute the prediction
 with the runtime.

-
 # License
-[MIT License](LICENSE)
-
-## Acknowledgments
-The package was developed by the following engineers and data scientists at Microsoft starting from winter 2017: Zeeshan Ahmed, Wei-Sheng Chin, Aidan Crook, Xavier Dupre, Costin Eseanu, Tom Finley, Lixin Gong, Scott Inglis, Pei Jiang, Ivan Matantsev, Prabhat Roy, M. Zeeshan Siddiqui, Shouheng Yi, Shauheen Zahirazami, Yiwen Zhu, Du Li, Xuan Li, Wenbing Li
+[MIT License](LICENSE)
