Doc Update for Opset Versioning Explanation (#200)
* doc updates for grammar and explanations for opset versioning
* removing contribution names, as per PR suggestion
* adding details, fixing minor formatting errors
* added code for checking onnx model version and finished explanation of why target opset might be larger than the model's actual opset
* minor formatting and wording updates
* text size changes and minor spacing + content clarity edits

If you choose to install `onnxmltools` from its source code, you must set the environment variable `ONNX_ML=1` before installing the `onnx` package.

## Dependencies

This package relies on ONNX, NumPy, and ProtoBuf. If you are converting a model from scikit-learn, Core ML, Keras, or LightGBM, you will need an environment with the respective package installed from the list below:

1. scikit-learn
2. CoreMLTools
3. Keras (version 2.0.8 or higher) with the corresponding TensorFlow version
4. LightGBM (scikit-learn interface)

# Examples

If you want the converted ONNX model to be compatible with a certain ONNX version, please specify the `target_opset` parameter when invoking the convert function. The following Keras conversion example demonstrates this. You can find the mapping from ONNX Operator Sets (referred to as opsets) to ONNX releases in the [versioning documentation](https://github.com/onnx/onnx/blob/master/docs/Versioning.md#released-versions).

## CoreML to ONNX Conversion

Here is a simple code snippet to convert a Core ML model into an ONNX model.
## Checking the operator set version of your converted ONNX model
You can check the operator set of your converted ONNX model using [Netron](https://github.com/lutzroeder/Netron), a viewer for Neural Network models. Alternatively, you could identify your converted model's opset version through the following line of code.
If the result from checking your ONNX model's opset is smaller than the `target_opset` number you specified in the onnxmltools.convert function, do not be alarmed. The ONNXMLTools converter works by converting each operator to the ONNX format individually and finding the corresponding opset version that it was most recently updated in. Once all of the operators are converted, the resultant ONNX model has the maximal opset version of all of its operators.
To illustrate this concretely, let's consider a model with two operators, Abs and Add. As of December 2018, [Abs](https://github.com/onnx/onnx/blob/master/docs/Operators.md#abs) was most recently updated in opset 6, and [Add](https://github.com/onnx/onnx/blob/master/docs/Operators.md#add) was most recently updated in opset 7. Therefore, the converted ONNX model's opset will always be 7, even if you request `target_opset=8`. The converter behavior was defined this way to ensure backwards compatibility.
Documentation for the [ONNX Model format](https://github.com/onnx/onnx) and more examples for converting models from different frameworks can be found in the [ONNX tutorials](https://github.com/onnx/tutorials) repository.
## Test all existing converters

There exists a way to automatically check every converter with
[onnxruntime](https://pypi.org/project/onnxruntime/) or
[onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu/).
This process requires the user to clone the *onnxmltools* repository.
The following command runs all unit tests and generates
dumps of models, inputs, expected outputs and converted models
in folder ``TESTDUMP``.

```
python tests/main.py DUMP
```

It requires *onnxruntime* and *numpy* for most models,
*pandas* for transforms related to text features, and
*scipy* for sparse features. One test also requires
*keras* to test a custom operator. That means *sklearn*,
or whichever machine learning library produced the model,
is also required.

## Add a new converter
Once the converter is implemented, a unit test is added
to confirm that it works. At the end of the unit test, the function
*dump_data_and_model* or an equivalent function must be called
to dump the expected output and the converted model.
Once these files are generated, a corresponding test must
be added in *tests_backend* to compute the prediction
with the runtime.

# License
[MIT License](LICENSE)
## Acknowledgments
The package was developed by the following engineers and data scientists at Microsoft starting from winter 2017: Zeeshan Ahmed, Wei-Sheng Chin, Aidan Crook, Xavier Dupre, Costin Eseanu, Tom Finley, Lixin Gong, Scott Inglis, Pei Jiang, Ivan Matantsev, Prabhat Roy, M. Zeeshan Siddiqui, Shouheng Yi, Shauheen Zahirazami, Yiwen Zhu, Du Li, Xuan Li, Wenbing Li