Changes from all commits (40 commits)
- `a64af2e` Minor doc tf 2.8 change (#1934) (hwangdeyu, May 11, 2022)
- `772dbe6` Use make_tensor_sequence_value_info instead of deprecated make_sequ… (jcwchen, May 17, 2022)
- `d4d9f06` skip tfjs 3.17 tests (#1942) (hwangdeyu, May 18, 2022)
- `aa83304` Bugfix for concatenating node instead of str. (#1933) (FrankD412, May 18, 2022)
- `cc16eb9` Remove python 3.6 support, upgrade CI (#1940) (hwangdeyu, May 19, 2022)
- `d1993a7` Add opset 16 support and check ci (#1937) (hwangdeyu, May 19, 2022)
- `880754e` Update the default opset version for tf2onnx. (#1946) (fatcat-z, May 25, 2022)
- `e099356` Update the way to check input_signature in from_function(). (#1947) (fatcat-z, May 25, 2022)
- `aaab800` Add mapping for tf lite op TFL_BATCH_MATMUL and TFL_MATMUL. (#1950) (fatcat-z, May 26, 2022)
- `6f5a673` Update protobuf version in ci (#1951) (hwangdeyu, May 27, 2022)
- `29b76df` Add TensorScatterAdd op for opset 16 (#1949) (hwangdeyu, May 27, 2022)
- `16eb4b4` Add a new api from_tflite to improve user experience. (#1954) (fatcat-z, May 30, 2022)
- `6d774ea` Upgrade opset 16 version related doc (#1953) (hwangdeyu, May 31, 2022)
- `e9b6cb4` Remove unuseful sync workflow. (#1955) (fatcat-z, May 31, 2022)
- `a8f78ac` increment main to 1.11 version (#1958) (hwangdeyu, Jun 6, 2022)
- `9cea907` Transpose optimization for Softmax and LogSoftmax (fixes #1716) (#1964) (janbernloehr, Jun 11, 2022)
- `89c4c5c` The from_tflite() function should accept None as default value of inp… (fatcat-z, Jun 15, 2022)
- `b027bb2` Change Loop op with maximum iterations input M equals to empty string… (hwangdeyu, Jun 17, 2022)
- `1d76297` Update README.md file (#1976) (andife, Jun 22, 2022)
- `7a57a6b` Update CONTRIBUTING.md CLA to DCO (#1978) (andife, Jun 24, 2022)
- `3cf62e2` Update Keras related tests to support latest TF version. (#1980) (fatcat-z, Jun 26, 2022)
- `6905d05` Fix a test issue in keras2onnx_application_tests.yml. (#1982) (fatcat-z, Jun 29, 2022)
- `f278249` L2_NORMALIZATION support for tflite (#1989) (shesung, Jul 5, 2022)
- `fa0b6cf` Replace deprecated `np.object` with `object` (#1990) (vvolhejn, Jul 6, 2022)
- `9ce72be` Add --outputs_as_nchw option to transpose output to from nhwc to nchw… (hwangdeyu, Jul 8, 2022)
- `e896723` Fix transpose split optimize attr when opset >=13 (#1996) (hwangdeyu, Jul 15, 2022)
- `e7f39ed` Skip existing const initializer node as input in _parse_graph_input (… (q-ycong-p, Jul 21, 2022)
- `71105c1` Add handling of HardSigmoid recurrent activation for Keras LSTM (#2001) (q-ycong-p, Jul 22, 2022)
- `404e2b7` Add more tests for tf 2.9.x into CI pipelines. (#2009) (fatcat-z, Jul 26, 2022)
- `1c7d4ce` Fix problem with adding more than one tf.newaxis at the same time (#2… (southfreebird, Jul 27, 2022)
- `d72b4d1` Improve ZerosLike implementation and optimize for opset >= 9 (#2003) (hwangdeyu, Jul 27, 2022)
- `f30f41f` Add newly required dependencies for latest ORT version. (#2012) (fatcat-z, Jul 28, 2022)
- `087045d` Add support for Python310 and ORT 1.12 (#1975) (hwangdeyu, Jul 29, 2022)
- `b65ae84` Increment main to 1.12 (hwangdeyu, Jul 29, 2022)
- `a587862` upgrade readme tf to 2.9 (hwangdeyu, Jul 29, 2022)
- `6365d38` ONNX opset 17 with IR version 8 support (#2014) (hwangdeyu, Aug 2, 2022)
- `0bfdf63` Remove opset below to 13 ci tests and enhance doc (hwangdeyu, Aug 2, 2022)
- `3bd3081` Remove usage of numpy bool aliases for builtins (hwangdeyu, Aug 3, 2022)
- `76924df` Use packaging library to avoid DeprecationWarning from distutils (hwangdeyu, Aug 3, 2022)
- `4debef7` Turn on graph tf optimize grappler dependency (#2020) (hwangdeyu, Aug 12, 2022)
24 changes: 0 additions & 24 deletions .github/workflows/ado-sync-issue.yml

This file was deleted.

21 changes: 13 additions & 8 deletions CONTRIBUTING.md
@@ -24,14 +24,19 @@ New code *must* be accompanied by unit tests.
Please see [Coding Conventions and Standards](http://google.github.io/styleguide/pyguide.html)

# Licensing guidelines
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla-assistant.io/onnx/tensorflow-onnx.

When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project welcomes contributions and suggestions. Contributions require you to
agree to the Developer Certificate of Origin (DCO), declaring that you have the right to,
and actually do, grant us the rights to use your contribution.

When you submit a pull request, a DCO-bot will automatically determine whether you need
to provide a DCO and decorate the PR appropriately.

You can sign your commits by using the `-s` flag:

```sh
git commit -s
```


# Code of conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
79 changes: 56 additions & 23 deletions README.md
@@ -9,16 +9,16 @@ __Note: tensorflow.js support was just added. While we tested it with many tfjs

TensorFlow has many more ops than ONNX and occasionally mapping a model to ONNX creates issues.

You find a list of supported Tensorflow ops and their mapping to ONNX [here](support_status.md).
You find a list of supported TensorFlow ops and their mapping to ONNX [here](support_status.md).

We try to document the common issues we run into in the [Troubleshooting Guide](Troubleshooting.md).

<br/>

| Build Type | OS | Python | Tensorflow | ONNX opset | Status |
| Build Type | OS | Python | TensorFlow | ONNX opset | Status |
| --- | --- | --- | --- | --- | --- |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6-3.9 | 1.12-1.15, 2.1-2.7 | 9-15 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
| Unit Test - Full | Linux, MacOS, Windows | 3.6-3.9 | 1.12-1.15, 2.1-2.7 | 9-15 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) | |
<br/>

## Supported Versions
@@ -27,23 +27,22 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr

tf2onnx will use the ONNX version installed on your system, and will install the latest ONNX version if none is found.

We support and test ONNX opset-9 to opset-15. opset-6 to opset-8 should work but we don't test them.
By default we use ```opset-9``` for the resulting ONNX graph since most runtimes will support opset-9.
We support and test ONNX opset-13 to opset-17. opset-6 to opset-12 should work but we don't test them.
By default we use ```opset-13``` for the resulting ONNX graph.

If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 13```.

### TensorFlow

We support ```tf-1.x graphs``` and ```tf-2.x```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 or better```.
We support ```tf-1.x graphs``` and ```tf-2.x```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.13 or better```.

When running under tf-2.x, tf2onnx will use the TensorFlow V2 control flow.

You can install tf2onnx on top of tf-1.x or tf-2.x.

### Python

We support Python ```3.6-3.9```.
Note that on windows for Python > 3.7 the protobuf package doesn't use the cpp implementation and is very slow - we recommend to use Python 3.7 for that reason.
We support Python ```3.7-3.10```.

## Prerequisites

@@ -83,7 +82,7 @@ or

```python setup.py develop```

tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.
tensorflow-onnx requires onnx-1.9 or better and will install/upgrade onnx if needed.

To create a wheel for distribution:

@@ -98,9 +97,9 @@ To get started with `tensorflow-onnx`, run the `t2onnx.convert` command, providi

```python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx```

The above command uses a default of `9` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).
The above command uses a default of `13` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset, then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).

```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 13 --output model.onnx```
```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 17 --output model.onnx```

If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.
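For example, for a frozen graph (the file name and tensor names below are placeholders for your own model):

```python -m tf2onnx.convert --graphdef model.pb --inputs input:0 --outputs output:0 --output model.onnx```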

@@ -118,7 +117,7 @@ You find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSD

We recently added support for tflite. You can convert ```tflite``` models via the command line, for example:

```python -m tf2onnx.convert --opset 13 --tflite tflite-file --output model.onnx```

```python -m tf2onnx.convert --opset 16 --tflite tflite-file --output model.onnx```

## CLI reference

@@ -187,7 +186,7 @@ ONNX requires default values for graph inputs to be constant, while Tensorflow's

#### --opset

By default we use the opset 9 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 13``` would create a onnx graph that uses only ops available in opset 13. Because older opsets have in most cases fewer ops, some models might not convert on a older opset.
By default we use opset 13 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example, ```--opset 17``` would create an ONNX graph that uses only ops available in opset 17. Because older opsets in most cases have fewer ops, some models might not convert with an older opset.

#### --dequantize

@@ -268,7 +267,7 @@ optional arguments:
```
```run_pretrained_models.py``` runs the TensorFlow model, captures the TensorFlow output, and runs the same test against the specified ONNX backend after converting the model.

If the option ```--perf csv-file``` is specified, we'll capture the timeing for inferece of tensorflow and onnx runtime and write the result into the given csv file.
If the option ```--perf csv-file``` is specified, we'll capture the timing for inference of tensorflow and onnx runtime and write the result into the given csv file.

You call it for example with:
@@ -292,8 +291,8 @@
### from_keras
```
import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
input_signature=None, opset=None, custom_ops=None,
custom_op_handlers=None, custom_rewriter=None,
inputs_as_nchw=None, extra_opset=None shape_override=None,
target=None, large_model=False, output_path=None)
inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
shape_override=None, target=None, large_model=False, output_path=None)

Args:
model: the tf.keras model we want to convert
@@ -307,7 +306,7 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
inputs_as_nchw: transpose inputs in list from nchw to nhwc
inputs_as_nchw: transpose inputs in list from nhwc to nchw
outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
```
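A minimal usage sketch (the model, names, and opset below are illustrative, not part of the API docs above):

```
import tensorflow as tf
import tf2onnx

# A tiny illustrative model; any tf.keras model converts the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,), name="x"),
    tf.keras.layers.Dense(2, activation="relu"),
])
# Input signature matching the model's input (placeholder name "x").
spec = (tf.TensorSpec((None, 4), tf.float32, name="x"),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx")
```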
@@ -323,8 +323,8 @@
### from_function
```
import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
input_signature=None, opset=None, custom_ops=None,
custom_op_handlers=None, custom_rewriter=None,
inputs_as_nchw=None, extra_opset=None, shape_override=None,
custom_op_handlers=None, custom_rewriter=None, inputs_as_nchw=None,
outputs_as_nchw=None, extra_opset=None, shape_override=None,
target=None, large_model=False, output_path=None)

Args:
@@ -339,7 +339,8 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
inputs_as_nchw: transpose inputs in list from nchw to nhwc
inputs_as_nchw: transpose inputs in list from nhwc to nchw
outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
```
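A minimal usage sketch (the function and its signature are illustrative):

```
import tensorflow as tf
import tf2onnx

# An illustrative tf.function; any traceable function works the same way.
@tf.function
def f(x):
    return tf.nn.relu(x)

spec = (tf.TensorSpec((None, 3), tf.float32, name="x"),)
model_proto, _ = tf2onnx.convert.from_function(
    f, input_signature=spec, opset=13, output_path="model.onnx")
```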
@@ -354,7 +355,7 @@
### from_graph_def
```
import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
name=None, input_names=None, output_names=None, opset=None,
custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
inputs_as_nchw=None, extra_opset=None,
inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
shape_override=None, target=None, large_model=False,
output_path=None)

@@ -369,7 +370,39 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
inputs_as_nchw: transpose inputs in list from nchw to nhwc
inputs_as_nchw: transpose inputs in list from nhwc to nchw
outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path

Returns:
An ONNX model_proto and an external_tensor_storage dict.
```
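A minimal usage sketch (the file path and tensor names are placeholders for your own frozen graph):

```
import tensorflow as tf
import tf2onnx

# Load a frozen GraphDef from disk ("frozen_graph.pb" is a placeholder).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def, input_names=["input:0"], output_names=["output:0"],
    opset=13, output_path="model.onnx")
```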

### from_tflite
```
import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_tflite(tflite_path,
input_names=None, output_names=None, opset=None, custom_ops=None, custom_op_handlers=None,
custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
shape_override=None, target=None, large_model=False, output_path=None)

Args:
tflite_path: the tflite model file full path
input_names: list of input names
output_names: list of output names
opset: the opset to be used for the ONNX model, default is the latest
custom_ops: if a model contains ops not recognized by onnx runtime,
you can tag these ops with a custom op domain so that the
runtime can still open the model. Type is a dictionary `{op name: domain}`.
custom_op_handlers: dictionary of custom ops handlers
custom_rewriter: list of custom graph rewriters
inputs_as_nchw: transpose inputs in list from nhwc to nchw
outputs_as_nchw: transpose outputs in list from nhwc to nchw
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
target: list of workarounds applied to help certain platforms
large_model: use the ONNX external tensor storage format
output_path: save model to output_path

```
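A minimal usage sketch (the file path is a placeholder; input and output names default to None per the Args above):

```
import tf2onnx

model_proto, _ = tf2onnx.convert.from_tflite(
    "model.tflite", opset=16, output_path="model.onnx")
```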
6 changes: 3 additions & 3 deletions Troubleshooting.md
@@ -18,11 +18,11 @@ To get this fixed you can open an issue or send us a PR with a fix.

Sometimes there is no direct mapping from TensorFlow to ONNX. We took care of the most common cases. But for less frequently used ops there might be a mapping missing. To get this fixed there are three options:

a) in tf2onnx you can compose the op out of different ops. A good example for this the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/math.py#L317). Before opset-9 this tf2onnx composes Erf with other ONNX ops.
a) in tf2onnx you can compose the op out of different ops. A good example of this is the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/main/tf2onnx/onnx_opset/math.py#L317). Before opset-9, tf2onnx composes Erf with other ONNX ops.

b) You request the missing op to be added to [ONNX](https://github.com/onnx/onnx). After it is added to ONNX and some runtime implements it, we'll add it to tf2onnx. You can see that this happened for the Erf op. Starting with opset-9, ONNX added it - tf2onnx no longer composes the op and instead passes it to ONNX.

c) The op is too complex to compose and it's to exotic to add to ONNX. In that cases you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/master/examples/custom_op_via_python.py). There are 2 flavors of it:
c) The op is too complex to compose and it's too exotic to add to ONNX. In that case you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/main/examples/custom_op_via_python.py). There are 2 flavors of it:
- you could compose the functionality by using multiple ONNX ops.
- you can implement the op in your runtime as a custom op (assuming that most runtimes do have such a mechanism) and then map it in tf2onnx as a custom op, as in the sketch below.
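A hedged sketch of the second flavor, using the `custom_ops` argument documented in the README (the op name, domain, file path, and tensor names are all placeholders):

```
import tensorflow as tf
import tf2onnx

# Load a frozen graph containing the unsupported op (placeholder path).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Tag the op with a custom domain so a runtime that implements it
# as a custom op can still load the converted model.
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def, input_names=["input:0"], output_names=["output:0"],
    custom_ops={"MyCustomOp": "ai.example.custom"},
    output_path="model.onnx")
```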

@@ -31,7 +31,7 @@ c) The op is too complex to compose and it's to exotic to add to ONNX. In that c
There is a common group of errors that reports ```get tensor value: ... must be Const```.
The reason for this is that there is a dynamic input of a tensorflow op but the equivalent ONNX op uses a static attribute. In other words, in tensorflow that input is only known at runtime but in ONNX it needs to be known at graph creation time.

An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculate at runtime it will result in the message ```get tensor value: ... must be Const```.
An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculated at runtime it will result in the message ```get tensor value: ... must be Const```.
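A minimal, hypothetical sketch of the pattern that triggers this message:

```
import tensorflow as tf

@tf.function
def dynamic_slice(x, n):
    # `n` is only known at runtime, so the slice bounds cannot be
    # folded into the static attributes required by older opsets.
    return x[:n]
```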

You can pass the option ```--fold_const``` (removed after tf2onnx-1.9.3) on the tf2onnx command line, which allows tf2onnx to apply more aggressive constant folding and increases the chances of finding a constant.

2 changes: 1 addition & 1 deletion VERSION_NUMBER
@@ -1 +1 @@
1.10.0
1.12.0