tensorflow-onnx will use the ONNX version installed on your system, or install the latest ONNX version if none is found.
We support opset 6 to 10. By default we use opset 7 for the resulting ONNX graph since most runtimes will support opset 7. Support for future opsets is added as they are released.
If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 10```.
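For example, a minimal sketch of a conversion pinned to opset 10; the file names and tensor names here are placeholders, not files from this repository:

```
python -m tf2onnx.convert --input frozen.pb --inputs X:0 --outputs output:0 --output model.onnx --opset 10
```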
# Status
We support many TensorFlow models. Support for Fully Connected, Convolutional and dynamic LSTM networks is mature.
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml).
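If you want to run these models yourself, something along the following lines may work; the runner script name and its ```--config``` flag are assumptions about the test harness in tests/, not documented here:

```
python tests/run_pretrained_models.py --config tests/run_pretrained_models.yaml
```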
Supported RNN classes and APIs: LSTMCell, BasicLSTMCell, GRUCell, GRUBlockCell, MultiRNNCell, and user-defined RNN cells inheriting from rnn_cell_impl.RNNCell, used along with DropoutWrapper, BahdanauAttention, AttentionWrapper.
Check the [tips](examples/rnn_tips.md) when converting RNN models.
You can find a list of supported TensorFlow ops and their mapping to ONNX [here](support_status.md).
TensorFlow has broad functionality, and occasionally mapping it to ONNX creates issues.
We try to document the common issues we run into in the [Troubleshooting Guide](Troubleshooting.md).
# Prerequisites
## Install TensorFlow
# Usage
You can find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).
To convert a TensorFlow model, tf2onnx supports the ```saved_model```, ```checkpoint``` and ```frozen graph``` formats. We recommend the ```saved_model``` format. If the ```checkpoint``` or ```frozen graph``` format is used, the user needs to specify the inputs and outputs of the graph by passing the input and output names with ```--inputs INPUTS``` and ```--outputs OUTPUTS``` (see the examples below).
```
python -m tf2onnx.convert
```
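The full parameter listing for this command is not preserved in this excerpt; as a concrete sketch, assuming the flags described in this document (```--saved-model``` for the recommended format, ```--inputs```/```--outputs``` for a frozen graph) and placeholder model paths and tensor names:

```
# saved_model: inputs and outputs are taken from the model itself
python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx

# frozen graph: inputs and outputs must be named explicitly
python -m tf2onnx.convert --input frozen.pb --inputs X:0 --outputs output:0 --output model.onnx
```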
### --fold_const
When set, the TensorFlow fold_constants transformation will be applied before conversion. This will benefit features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option depending on the model.
Usage example (run the following commands in the tensorflow-onnx root directory):
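The original commands are not preserved in this excerpt; a minimal sketch with a placeholder frozen graph and tensor names:

```
python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output fc.onnx --fold_const
```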
# Troubleshooting Guide

## tensorflow op is not supported

```ValueError: tensorflow op NonMaxSuppression is not supported```
means that the given tensorflow op is not mapped to ONNX. This could have multiple reasons:
(1) We have not gotten around to implementing it. NonMaxSuppression is such an example: we implemented NonMaxSuppressionV2 and NonMaxSuppressionV3 but not the older NonMaxSuppression op.
To get this fixed, you can open an issue or send us a PR with a fix.
(2) There is no direct mapping to ONNX.
Sometimes there is no direct mapping from tensorflow to ONNX. We took care of the most common cases, but for less frequently used ops a mapping might be missing. To get this fixed there are a few options:
a) In tf2onnx you can compose the op out of different ops. A good example of this is the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/math.py#L317). Before opset-9, tf2onnx composes Erf from other ONNX ops.
b) You request the missing op to be added to [ONNX](https://github.com/onnx/onnx). After it is added to ONNX and some runtime implements it, we'll add it to tf2onnx. You can see that this happened for the Erf op: starting with opset-9, ONNX added it, so tf2onnx no longer composes the op and instead passes it through to ONNX.
c) The op is too complex to compose and too exotic to add to ONNX. In that case you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/master/examples/custom_op_via_python.py). There are 2 flavors of it (see the sketch after this list):
- you could compose the functionality by using multiple ONNX ops.
- you can implement the op in your runtime as a custom op (assuming that most runtimes do have such a mechanism) and then map it in tf2onnx as a custom op.
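As a sketch of the second flavor, assuming the ```--custom-ops``` flag described in the README's custom op section and a placeholder op name ```CoolOp```:

```
# CoolOp is passed through to the ONNX graph unchanged;
# the target runtime must supply an implementation for it
python -m tf2onnx.convert --input model.pb --inputs X:0 --outputs output:0 --output model.onnx --custom-ops CoolOp
```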
## get tensor value: ... must be Const
There is a common group of errors that reports ```get tensor value: ... must be Const```.
The reason for this is that a tensorflow op has a dynamic input, but the equivalent ONNX op uses a static attribute. In other words, in tensorflow that input is only known at runtime, but in ONNX it needs to be known at graph creation time.
An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1): the start and end of the slice are static attributes that need to be known at graph creation time. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real values of begin and end of the slice and can find them in most cases. But if those are truly dynamic values calculated at runtime, it will result in the message ```get tensor value: ... must be Const```.
You can pass the option ```--fold_const``` on the tf2onnx command line, which allows tf2onnx to apply more aggressive constant folding and increases the chances of finding a constant.
If this doesn't work, the model most likely cannot be converted to ONNX. We used to see this issue a lot with the ONNX Slice op, which was updated in opset-10 for exactly this reason.