ONNXMLTools enables you to convert models from different machine learning toolkits into [ONNX](https://onnx.ai). Currently the following toolkits are supported:
* TensorFlow (a wrapper of [tf2onnx converter](https://github.com/onnx/tensorflow-onnx/))
* scikit-learn (a wrapper of [skl2onnx converter](https://github.com/onnx/sklearn-onnx/))
* Apple Core ML
* XGBoost
* H2O
* CatBoost
PyTorch has its own built-in ONNX exporter; check [here](https://pytorch.org/docs/stable/onnx.html) for details.
## Install
You can install the latest release of ONNXMLTools from [PyPI](https://pypi.org/project/onnxmltools/):
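For example:

```shell
pip install onnxmltools
```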
If you choose to install `onnxmltools` from its source code, you must set the environment variable `ONNX_ML=1` before installing the `onnx` package.
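A source install under that constraint might look like the following (the clone location and editable-install step are illustrative, not prescribed by this README):

```shell
# ONNX_ML=1 must be set before the onnx package is built/installed
export ONNX_ML=1
pip install onnx

# then install onnxmltools from its source checkout
git clone https://github.com/onnx/onnxmltools.git
cd onnxmltools
pip install -e .
```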
## Dependencies
This package relies on ONNX, NumPy, and ProtoBuf. If you are converting a model from scikit-learn, Core ML, Keras, LightGBM, SparkML, XGBoost, H2O, CatBoost or LibSVM, you will need an environment with the respective package installed from the list below:
1. scikit-learn
2. CoreMLTools (version 3.1 or lower)
3. Keras (version 2.0.8 or higher) with the corresponding TensorFlow version
ONNXMLTools is tested with Python **3.7+**.
# Examples
If you want the converted ONNX model to be compatible with a certain ONNX version, please specify the `target_opset` parameter when invoking the convert function. The Keras conversion example below demonstrates this. You can find the mapping from ONNX Operator Sets (referred to as opsets) to ONNX releases in the [versioning documentation](https://github.com/onnx/onnx/blob/master/docs/Versioning.md#released-versions).
## Keras to ONNX Conversion
Next, we show an example of converting a Keras model into an ONNX model with `target_opset=7`, which corresponds to ONNX release version 1.2.
Below is a code snippet to convert an H2O MOJO model into an ONNX model. The only prerequisite is to have a MOJO model saved on the local file system.
```python
import onnxmltools

# Path to the saved MOJO file (illustrative)
onnx_model = onnxmltools.convert_h2o("mojo.zip")
```
You can check the operator set of your converted ONNX model using [Netron](https://github.com/lutzroeder/Netron), a viewer for Neural Network models. Alternatively, you could identify your converted model's opset version through the following line of code.
5. (optional) You could save the converted input data for possible debugging or future reuse:

```python
import pickle

with open("input_data", "wb") as f:
    pickle.dump(input, f)
```

6. And finally, run the newly converted ONNX model in the runtime:

```python
import onnxruntime

sess = onnxruntime.InferenceSession(onnx_model)
output = sess.run(None, input_data)
```

This output may need further conversion back to a DataFrame.
## Known Issues
1. Overall invalid data handling is problematic and not implemented in most cases. Make sure your data is clean.
2. When converting `OneHotEncoderModel` to ONNX, if `handleInvalid` is set to `"keep"`, then `dropLast` must be set to `True`. If `handleInvalid` is set to `"error"`, then `dropLast` must be set to `False`.
3. Use `FloatTensorType` for all numbers (instead of `Int64TensorType` or other variations)
4. Some conversions, such as the one for Word2Vec, can only handle batch size of 1 (one input row)