* tf-2.6 to ci, reflect tf-2.6 in readme
Signed-off-by: Guenther Schmuelling <[email protected]>
* remove tflite/tfjs 2.6 for now
Signed-off-by: Guenther Schmuelling <[email protected]>
* clear session since we run models in process back to back
Signed-off-by: Guenther Schmuelling <[email protected]>
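The third commit's fix, clearing the Keras session between models converted back to back in one process, can be sketched as below. The `run_models` helper and the model callables are hypothetical illustrations; only `tf.keras.backend.clear_session()` is the real TensorFlow call, and the import is guarded so the sketch runs even where TensorFlow is not installed.

```python
# Sketch of the "clear session since we run models in process back to back" fix.
# run_models and the model callables are hypothetical; only
# tf.keras.backend.clear_session() comes from TensorFlow itself.
try:
    import tensorflow as tf
    HAVE_TF = True
except ImportError:
    HAVE_TF = False

def run_models(model_fns):
    """Run several model jobs in one process, resetting global
    Keras/TensorFlow graph state after each one."""
    results = []
    for fn in model_fns:
        results.append(fn())
        if HAVE_TF:
            # Drop the default graph and Keras layer-name counters so
            # state from one model cannot leak into the next.
            tf.keras.backend.clear_session()
    return results
```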
README.md (15 additions, 12 deletions)
```diff
@@ -1,12 +1,11 @@
 <!--- SPDX-License-Identifier: Apache-2.0 -->

-# tf2onnx - Convert TensorFlow, Keras and Tflite models to ONNX.
+# tf2onnx - Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX.

-tf2onnx converts TensorFlow (tf-1.x or tf-2.x), tf.keras and tflite models to ONNX via command
+tf2onnx converts TensorFlow (tf-1.x or tf-2.x), keras, tensorflow.js and tflite models to ONNX via command
 line or python api.

-__Note: after tf2onnx-1.8.3 we made a change that impacts the output names for the ONNX model.
-Instead of taking the output names from the tensorflow graph (ie. for keras models this is frequently Identity:0) we decided that it is better to use the structured output names of the model so the output names are now identical to the names in the keras or saved model.__
+__Note: tensorflow.js support was just added. While we tested it with many tfjs models from tfhub, it should be considered experimental.__

 TensorFlow has many more ops than ONNX and occasionally mapping a model to ONNX creates issues.
```
```diff
@@ -18,8 +17,8 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr

 | Build Type | OS | Python | Tensorflow | ONNX opset | Status |
 | --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6-3.9 | 1.12-1.15, 2.1-2.5 | 8-14 | [](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.6-3.9 | 1.12-1.15, 2.1-2.5 | 8-14 | [](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6-3.9 | 1.12-1.15, 2.1-2.6 | 8-14 | [](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.6-3.9 | 1.12-1.15, 2.1-2.6 | 8-14 | [](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |
 <br/>

 ## Supported Versions
```
```diff
@@ -127,7 +126,8 @@ We recently added support for tflite. You convert ```tflite``` models via comman
 python -m tf2onnx.convert
     --saved-model SOURCE_SAVED_MODEL_PATH |
     --checkpoint SOURCE_CHECKPOINT_METAFILE_PATH |
-    --tflite SOURCE_TFLITE_PATH |
+    --tflite TFLITE_MODEL_PATH |
+    --tfjs TFJS_MODEL_PATH |
     --input | --graphdef SOURCE_GRAPHDEF_PB
     --output TARGET_ONNX_MODEL
     [--inputs GRAPH_INPUTS]
```
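The command-line synopsis above can be exercised without TensorFlow installed by first assembling the argument list. `build_convert_cmd` below is a hypothetical helper, not part of tf2onnx; the flags it emits (`--saved-model`, `--checkpoint`, `--tflite`, `--tfjs`, `--graphdef`, `--output`, `--opset`) mirror the documented synopsis.

```python
# Hypothetical helper that assembles a tf2onnx CLI invocation matching
# the synopsis in the diff above; tf2onnx itself is not required here.
def build_convert_cmd(model_path, output_path, fmt="saved-model", opset=None):
    flags = {
        "saved-model": "--saved-model",
        "checkpoint": "--checkpoint",
        "tflite": "--tflite",
        "tfjs": "--tfjs",
        "graphdef": "--graphdef",
    }
    cmd = ["python", "-m", "tf2onnx.convert", flags[fmt], model_path,
           "--output", output_path]
    if opset is not None:
        cmd += ["--opset", str(opset)]
    return cmd
```

For example, converting a tensorflow.js model would assemble `python -m tf2onnx.convert --tfjs model.tfjs --output model.onnx --opset 14`, which can then be passed to `subprocess.run`.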
```diff
@@ -157,14 +157,18 @@ TensorFlow model as saved_model. We expect the path to the saved_model directory.

 TensorFlow model as checkpoint. We expect the path to the .meta file.

-#### --tflite
-
-Convert a tflite model by providing a path to the .tflite file. Inputs/outputs do not need to be specified.
-
 #### --input or --graphdef

 TensorFlow model as graphdef file.

+#### --tfjs
+
+Convert a tensorflow.js model by providing a path to the .tfjs file. Inputs/outputs do not need to be specified.
+
+#### --tflite
+
+Convert a tflite model by providing a path to the .tflite file. Inputs/outputs do not need to be specified.
+
 #### --output

 The target onnx file path.
```
````diff
@@ -260,7 +264,6 @@ optional arguments:
     --opset OPSET target opset to use
     --perf csv-file capture performance numbers for tensorflow and onnx runtime
     --debug dump generated graph with shape info
-    --fold_const when set, TensorFlow fold_constants transformation will be applied before conversion. This will benefit features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed), and RNN unit conversion (for example LSTM).
 ```

 ```run_pretrained_models.py``` will run the TensorFlow model, captures the TensorFlow output and runs the same test against the specified ONNX backend after converting the model.
````
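A test like the one `run_pretrained_models.py` performs ultimately compares TensorFlow outputs against ONNX backend outputs within numeric tolerances. A minimal sketch of such a check, with hypothetical helper name and tolerance values following the `numpy.allclose` convention, is:

```python
# Illustrative element-wise tolerance check in the style of numpy.allclose:
# match when |actual - expected| <= atol + rtol * |expected| for every element.
# The name and default tolerances are assumptions, not tf2onnx's actual code.
def outputs_match(expected, actual, rtol=1e-3, atol=1e-5):
    if len(expected) != len(actual):
        return False
    return all(abs(a - e) <= atol + rtol * abs(e)
               for e, a in zip(expected, actual))
```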