onnxruntime (only available on Linux):
```pip install onnxruntime```
For pytorch/caffe2, follow the instructions here:
```https://pytorch.org/```
We tested with pytorch/caffe2 and onnxruntime and unit tests are passing for those.
## Supported Tensorflow and Python Versions
We tested with tensorflow 1.5-1.12 and anaconda **3.5,3.6**.
# Installation
## From Pypi
# Usage
To convert a TensorFlow model, tf2onnx prefers a ```frozen TensorFlow graph``` and the user needs to specify inputs and outputs for the graph by passing the input and output
names with ```--inputs INPUTS``` and ```--outputs OUTPUTS```.
tf2onnx works best with a frozen TensorFlow graph, which can be created with the [freeze graph tool](#freeze_graph).
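As a sketch, a typical conversion command looks like the following; the file paths and tensor names are placeholders, substitute those of your own model:

```shell
# Convert a frozen TensorFlow graphdef to ONNX.
# frozen_graph.pb, input0:0 and output0:0 are placeholder names.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input0:0 \
    --outputs output0:0 \
    --output model.onnx
```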
### --input or --graphdef
TensorFlow model as a graphdef file. If not already frozen, we'll try to freeze the model.
More information about freezing can be found here: [freeze graph tool](#freeze_graph).
### --checkpoint
TensorFlow model as a checkpoint. We expect the path to the .meta file. tf2onnx will try to freeze the graph.
### --saved-model
TensorFlow model as a saved_model. We expect the path to the saved_model directory. tf2onnx will try to freeze the graph.
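For example, a saved_model can be converted without naming inputs and outputs; the paths below are placeholders:

```shell
# Convert a saved_model; --inputs/--outputs are not needed here.
# ./saved_model and model.onnx are placeholder paths.
python -m tf2onnx.convert \
    --saved-model ./saved_model \
    --output model.onnx
```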
### --output
The target onnx file path.
### --inputs, --outputs
TensorFlow model's input/output names, which can be found with the [summarize graph tool](#summarize_graph). Those names typically end in ```:0```, for example ```--inputs input0:0,input1:0```. Inputs and outputs are ***not*** needed for models in saved-model format.
### --inputs-as-nchw
By default we preserve the image format of inputs (nchw or nhwc) as given in the TensorFlow model. If your host's native format is nchw (for example on Windows) and the model is written for nhwc, with ```--inputs-as-nchw``` tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
### --target
Some runtimes need workarounds because, for example, they don't support all types given in the onnx spec. In some cases we work around this by generating a different graph. Those workarounds are activated with ```--target TARGET```.
### --opset
By default we use the newest opset 7 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 5``` would create an onnx graph that uses only ops available in opset 5. Because older opsets in most cases have fewer ops, some models might not convert on an older opset.
### --custom-ops
The runtime may support custom ops that are not defined in onnx. A user can ask the converter to map to custom ops by listing them with the ```--custom-ops``` option. TensorFlow ops listed here will be mapped to a custom op with the same name as the TensorFlow op, but in the onnx domain ai.onnx.converters.tensorflow. For example, ```--custom-ops Print``` will insert an op ```Print``` in the onnx domain ```ai.onnx.converters.tensorflow``` into the graph. We also support a python api for custom ops, documented later in this readme.
### --fold_const
When set, the TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM). Older TensorFlow versions might run into issues with this option, depending on the model.
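Putting several of the options above together, a conversion command might look like the following sketch; all paths and tensor names are placeholders for your own model:

```shell
# Convert a frozen graph, folding constants first and pinning the opset.
# frozen_graph.pb, input0:0, output0:0 and model.onnx are placeholders.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input0:0 \
    --outputs output0:0 \
    --opset 7 \
    --fold_const \
    --output model.onnx
```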