tf2onnx - Convert TensorFlow models to ONNX.
========

[Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build?definitionId=16&branchName=master)

# Supported ONNX version
tensorflow-onnx will use the ONNX version installed on your system and install the latest ONNX version if none is found.
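
To check which ONNX release tf2onnx will pick up, you can query the installed package (a quick check, assuming ONNX is already installed):

```
python -c "import onnx; print(onnx.__version__)"
```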

We support opset 6 to 10. By default we use opset 7 for the resulting ONNX graph since most runtimes will support opset 7. Support for future opsets is added as they are released.

If you want the graph to be generated with a specific opset, use ```--opset``` on the command line, for example ```--opset 10```.
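
For example, a conversion pinned to opset 10 could look like the following (the model path and tensor names here are placeholders, not files shipped with this repository):

```
python -m tf2onnx.convert --input frozen_graph.pb --inputs input:0 --outputs output:0 --output model.onnx --opset 10
```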

# Status
We support many TensorFlow models. Support for Fully Connected, Convolutional and dynamic LSTM networks is mature.
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml).

Supported RNN classes and APIs: LSTMCell, BasicLSTMCell, GRUCell, GRUBlockCell, MultiRNNCell, and user-defined RNN cells inheriting rnn_cell_impl.RNNCell, used along with DropoutWrapper, BahdanauAttention and AttentionWrapper.
Check [tips](examples/rnn_tips.md) when converting RNN models.

TensorFlow has broad functionality, and occasionally mapping it to ONNX creates issues.
The common issues we run into are documented in the [Troubleshooting Guide](Troubleshooting.md).

# Prerequisites

## Install TensorFlow

# Usage

You can find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).

To convert a TensorFlow model, tf2onnx supports the ```saved_model```, ```checkpoint``` and ```frozen graph``` formats. We recommend the ```saved_model``` format. If the ```checkpoint``` or ```frozen graph``` format is used, the user needs to specify inputs and outputs for the graph by passing the input and output names with ```--inputs INPUTS``` and ```--outputs OUTPUTS```.

```
python -m tf2onnx.convert
```
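
A ```saved_model``` can usually be converted without naming inputs and outputs, while a frozen graph needs them spelled out. As a sketch, with placeholder paths and tensor names:

```
# saved_model: inputs and outputs are taken from the model signature
python -m tf2onnx.convert --saved-model path/to/saved_model --output model.onnx

# frozen graph: inputs and outputs must be given explicitly
python -m tf2onnx.convert --input frozen_graph.pb --inputs input:0 --outputs output:0 --output model.onnx
```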

### --fold_const
when set, the TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose operations introduced during the tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option, depending on the model.
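
As a sketch, the flag is simply appended to an ordinary conversion command (again with placeholder paths and tensor names):

```
python -m tf2onnx.convert --input frozen_graph.pb --inputs input:0 --outputs output:0 --output model.onnx --fold_const
```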

Usage example (run the following commands in the tensorflow-onnx root directory):
```
python -m tf2onnx.convert\