diff --git a/.github/workflows/ado-sync-issue.yml b/.github/workflows/ado-sync-issue.yml
deleted file mode 100644
index 7cf0c9049..000000000
--- a/.github/workflows/ado-sync-issue.yml
+++ /dev/null
@@ -1,24 +0,0 @@
-name: Sync issue to Azure DevOps work item
-
-on:
- issues:
- types:
- [opened, edited, deleted, closed, reopened, labeled, unlabeled, assigned]
-
-jobs:
- alert:
- runs-on: ubuntu-latest
- steps:
- - uses: onnx/tensorflow-onnx@main
- env:
- ado_token: "${{ secrets.ADO_PERSONAL_ACCESS_TOKEN }}"
- ado_organization: "msdata"
- ado_project: "Vienna"
- ado_area_path: "Vienna\\ONNX Runtime\\Shared Core\\Converters\\TensorFlow"
- ado_iteration_path: "Vienna\\Backlog"
- ado_wit: "Product Backlog Item"
- ado_new_state: "New"
- ado_active_state: "Committed"
- ado_close_state: "Done"
- github_token: "${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}"
- log_level: 100
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c7d863e1b..d30338f50 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -24,14 +24,19 @@ New code *must* be accompanied by unit tests.
Please see [Coding Conventions and Standards](http://google.github.io/styleguide/pyguide.html)
# Licensing guidelines
-This project welcomes contributions and suggestions. Most contributions require you to
-agree to a Contributor License Agreement (CLA) declaring that you have the right to,
-and actually do, grant us the rights to use your contribution. For details, visit
-https://cla-assistant.io/onnx/tensorflow-onnx.
-
-When you submit a pull request, a CLA-bot will automatically determine whether you need
-to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
-instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
+This project welcomes contributions and suggestions. Contributions require you to
+agree to the Developer Certificate of Origin (DCO), declaring that you have the right to,
+and actually do, grant us the rights to use your contribution.
+
+When you submit a pull request, a DCO bot will automatically check whether your
+commits are signed off and decorate the PR appropriately.
+
+You can sign off your commits by passing the `-s` flag to `git commit`:
+
+```sh
+git commit -s
+```
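+
+If you already created commits without signing off, you can amend the most recent one with `git commit --amend -s`.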
+
# Code of conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
diff --git a/README.md b/README.md
index fb5d52ed3..912f3101f 100644
--- a/README.md
+++ b/README.md
@@ -9,16 +9,16 @@ __Note: tensorflow.js support was just added. While we tested it with many tfjs
TensorFlow has many more ops than ONNX and occasionally mapping a model to ONNX creates issues.
-You find a list of supported Tensorflow ops and their mapping to ONNX [here](support_status.md).
+You can find a list of supported TensorFlow ops and their mapping to ONNX [here](support_status.md).
We try to document the common issues we run into in the [Troubleshooting Guide](Troubleshooting.md).
-| Build Type | OS | Python | Tensorflow | ONNX opset | Status |
+| Build Type | OS | Python | TensorFlow | ONNX opset | Status |
| --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS\*, Windows\* | 3.6-3.9 | 1.12-1.15, 2.1-2.7 | 9-15 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.6-3.9 | 1.12-1.15, 2.1-2.7 | 9-15 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |
+| Unit Test - Basic | Linux, MacOS\*, Windows\* | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) |
## Supported Versions
@@ -27,14 +27,14 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr
tf2onnx will use the ONNX version installed on your system and will install the latest ONNX version if none is found.
-We support and test ONNX opset-9 to opset-15. opset-6 to opset-8 should work but we don't test them.
-By default we use ```opset-9``` for the resulting ONNX graph since most runtimes will support opset-9.
+We support and test ONNX opset-13 to opset-17. opset-6 to opset-12 should work but we don't test them.
+By default we use ```opset-13``` for the resulting ONNX graph.
If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 13```.
### TensorFlow
-We support ```tf-1.x graphs``` and ```tf-2.x```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 or better```.
+We support ```tf-1.x graphs``` and ```tf-2.x```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.13 or better```.
When running under tf-2.x, tf2onnx will use the tensorflow V2 control flow.
@@ -42,8 +42,7 @@ You can install tf2onnx on top of tf-1.x or tf-2.x.
### Python
-We support Python ```3.6-3.9```.
-Note that on windows for Python > 3.7 the protobuf package doesn't use the cpp implementation and is very slow - we recommend to use Python 3.7 for that reason.
+We support Python ```3.7-3.10```.
## Prerequisites
@@ -83,7 +82,7 @@ or
```python setup.py develop```
-tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.
+tensorflow-onnx requires onnx-1.9 or better and will install/upgrade onnx if needed.
To create a wheel for distribution:
@@ -98,9 +97,9 @@ To get started with `tensorflow-onnx`, run the `t2onnx.convert` command, providi
```python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx```
-The above command uses a default of `9` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).
+The above command uses a default of `13` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset, then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).
-```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 13 --output model.onnx```
+```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 17 --output model.onnx```
If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.
@@ -118,7 +117,7 @@ You find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSD
We recently added support for tflite. You convert ```tflite``` models via command line, for example:
-```python -m tf2onnx.convert --opset 13 --tflite tflite--file --output model.onnx```
+```python -m tf2onnx.convert --opset 16 --tflite tflite-file --output model.onnx```
## CLI reference
@@ -187,7 +186,7 @@ ONNX requires default values for graph inputs to be constant, while Tensorflow's
#### --opset
-By default we use the opset 9 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 13``` would create a onnx graph that uses only ops available in opset 13. Because older opsets have in most cases fewer ops, some models might not convert on a older opset.
+By default we use opset 13 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 17``` would create an ONNX graph that uses only ops available in opset 17. Because older opsets in most cases have fewer ops, some models might not convert with an older opset.
#### --dequantize
@@ -268,7 +267,7 @@ optional arguments:
```
```run_pretrained_models.py``` will run the TensorFlow model, capture the TensorFlow output and run the same test against the specified ONNX backend after converting the model.
-If the option ```--perf csv-file``` is specified, we'll capture the timeing for inferece of tensorflow and onnx runtime and write the result into the given csv file.
+If the option ```--perf csv-file``` is specified, we'll capture the timing for inference of tensorflow and onnx runtime and write the result into the given csv file.
You call it for example with:
```
@@ -292,8 +291,8 @@ import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
input_signature=None, opset=None, custom_ops=None,
custom_op_handlers=None, custom_rewriter=None,
- inputs_as_nchw=None, extra_opset=None shape_override=None,
- target=None, large_model=False, output_path=None)
+ inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
+ shape_override=None, target=None, large_model=False, output_path=None)
Args:
model: the tf.keras model we want to convert
@@ -307,7 +306,8 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
@@ -323,8 +323,8 @@ import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
input_signature=None, opset=None, custom_ops=None,
- custom_op_handlers=None, custom_rewriter=None,
- inputs_as_nchw=None, extra_opset=None, shape_override=None,
+ custom_op_handlers=None, custom_rewriter=None, inputs_as_nchw=None,
+ outputs_as_nchw=None, extra_opset=None, shape_override=None,
target=None, large_model=False, output_path=None)
Args:
@@ -339,7 +339,8 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
@@ -354,7 +355,7 @@ import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
name=None, input_names=None, output_names=None, opset=None,
custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
- inputs_as_nchw=None, extra_opset=None,
+ inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
shape_override=None, target=None, large_model=False,
output_path=None)
@@ -369,7 +370,39 @@ model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
+ large_model: use the ONNX external tensor storage format
+ output_path: save model to output_path
+
+ Returns:
+ An ONNX model_proto and an external_tensor_storage dict.
+```
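+
+A minimal usage sketch (the file name and tensor names below are illustrative, not part of the API):
+
+```
+import tensorflow as tf
+import tf2onnx
+
+# load a frozen GraphDef from disk
+graph_def = tf.compat.v1.GraphDef()
+with open("model.pb", "rb") as f:
+    graph_def.ParseFromString(f.read())
+
+# convert it; input_names/output_names must name tensors in the graph
+model_proto, _ = tf2onnx.convert.from_graph_def(
+    graph_def, input_names=["input:0"], output_names=["output:0"],
+    opset=13, output_path="model.onnx")
+```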
+
+### from_tflite
+```
+import tf2onnx
+
+model_proto, external_tensor_storage = tf2onnx.convert.from_tflite(tflite_path,
+ input_names=None, output_names=None, opset=None, custom_ops=None, custom_op_handlers=None,
+ custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
+    shape_override=None, target=None, large_model=False, output_path=None)
+
+ Args:
+ tflite_path: the tflite model file full path
+ input_names: list of input names
+ output_names: list of output names
+ opset: the opset to be used for the ONNX model, default is the latest
+ custom_ops: if a model contains ops not recognized by onnx runtime,
+ you can tag these ops with a custom op domain so that the
+ runtime can still open the model. Type is a dictionary `{op name: domain}`.
+ custom_op_handlers: dictionary of custom ops handlers
+ custom_rewriter: list of custom graph rewriters
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
+ extra_opset: list of extra opset's, for example the opset's used by custom ops
+ shape_override: dict with inputs that override the shapes given by tensorflow
+ target: list of workarounds applied to help certain platforms
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
diff --git a/Troubleshooting.md b/Troubleshooting.md
index 7eea0f50c..fd9ef4a6a 100644
--- a/Troubleshooting.md
+++ b/Troubleshooting.md
@@ -18,11 +18,11 @@ To get this fixed you can open an issue or send us a PR with a fix.
Sometimes there is no direct mapping from tensorflow to ONNX. We took care of the most common cases. But for less frequently used ops there might be a mapping missing. To get this fixed there are 3 options:
-a) in tf2onnx you can compose the op out of different ops. A good example for this the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/math.py#L317). Before opset-9 this tf2onnx composes Erf with other ONNX ops.
+a) In tf2onnx you can compose the op out of different ops. A good example for this is the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/main/tf2onnx/onnx_opset/math.py#L317). Before opset-9, tf2onnx composes Erf from other ONNX ops.
b) You request the missing op to be added to [ONNX](https://github.com/onnx/onnx). After it is added to ONNX and some runtime implements it we'll add it to tf2onnx. You can see that this happened for the Erf Op. Starting with opset-9, ONNX added it - tf2onnx no longer composes the op and instead passes it to ONNX.
-c) The op is too complex to compose and it's to exotic to add to ONNX. In that cases you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/master/examples/custom_op_via_python.py). There are 2 flavors of it:
+c) The op is too complex to compose and too exotic to add to ONNX. In that case you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/main/examples/custom_op_via_python.py). There are 2 flavors of it:
- you could compose the functionality by using multiple ONNX ops.
- you can implement the op in your runtime as custom op (assuming that most runtimes do have such a mechanism) and then map it in tf2onnx as custom op.
@@ -31,7 +31,7 @@ c) The op is too complex to compose and it's to exotic to add to ONNX. In that c
There is a common group of errors that reports ```get tensor value: ... must be Const```.
The reason for this is that there is a dynamic input of a tensorflow op but the equivalent ONNX op uses a static attribute. In other words, in tensorflow that input is only known at runtime but in ONNX it needs to be known at graph creation time.
-An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculate at runtime it will result in the message ```get tensor value: ... must be Const```.
+An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculated at runtime it will result in the message ```get tensor value: ... must be Const```.
You can pass the option ```--fold_const``` (removed after tf2onnx-1.9.3) on the tf2onnx command line, which allows tf2onnx to apply more aggressive constant folding and increases the chances of finding a constant.
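+
+A minimal sketch of the pattern that triggers this message (hypothetical code; the slice bound only becomes known at runtime):
+
+```
+import tensorflow as tf
+
+@tf.function
+def head(x, n):
+    # `n` is a runtime tensor here. For opsets where ONNX Slice takes its
+    # bounds as static attributes, tf2onnx cannot fold `n` into a constant
+    # and reports "get tensor value: ... must be Const".
+    return x[:n]
+```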
diff --git a/VERSION_NUMBER b/VERSION_NUMBER
index 81c871de4..0eed1a29e 100644
--- a/VERSION_NUMBER
+++ b/VERSION_NUMBER
@@ -1 +1 @@
-1.10.0
+1.12.0
diff --git a/ci_build/azure_pipelines/keras2onnx_application_tests.yml b/ci_build/azure_pipelines/keras2onnx_application_tests.yml
index df15f0e8e..5a8bdf6c0 100644
--- a/ci_build/azure_pipelines/keras2onnx_application_tests.yml
+++ b/ci_build/azure_pipelines/keras2onnx_application_tests.yml
@@ -8,43 +8,7 @@ jobs:
vmImage: 'ubuntu-latest'
strategy:
matrix:
- Python36-onnx1.5:
- python.version: '3.6'
- ONNX_PATH: onnx==1.5.0
- INSTALL_KERAS: pip install keras==2.2.4
- UNINSTALL_KERAS:
- INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.8.0
- INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS:
- INSTALL_NUMPY:
- NIGHTLY_BUILD_TEST: python run_all.py --exclude "test_keras_applications_v2.py"
-
- Python37-onnx1.6:
- python.version: '3.7'
- ONNX_PATH: onnx==1.6.0
- INSTALL_KERAS: pip install keras==2.3.1
- UNINSTALL_KERAS:
- INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.8.0
- INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS:
- INSTALL_NUMPY:
- NIGHTLY_BUILD_TEST: python run_all.py --exclude "test_keras_applications_v2.py"
-
- Python37-onnx1.9:
- python.version: '3.7'
- ONNX_PATH: onnx==1.9.0
- INSTALL_KERAS: pip install keras==2.3.1
- UNINSTALL_KERAS:
- INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
- INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS:
- INSTALL_NUMPY:
- NIGHTLY_BUILD_TEST: python run_all.py --exclude "test_keras_applications_v2.py"
-
- Python37-onnx1.11:
+ Python37-onnx1.11-tf1.15:
python.version: '3.7'
ONNX_PATH: onnx==1.11.0
INSTALL_KERAS: pip install keras==2.3.1
@@ -56,18 +20,42 @@ jobs:
INSTALL_NUMPY: pip install numpy==1.19.0
NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
- Python38-tf2.x:
- python.version: '3.8'
+ Python37-onnx1.11-tf2.5:
+ python.version: '3.7'
ONNX_PATH: onnx==1.11.0
- INSTALL_KERAS:
+ INSTALL_KERAS:
UNINSTALL_KERAS: pip uninstall keras -y
INSTALL_TENSORFLOW: pip install tensorflow==2.5.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_KERAS_RESNET: pip install keras-resnet
INSTALL_TRANSFORMERS: pip install transformers==3.4.0
INSTALL_NUMPY: pip install numpy==1.19.0
NIGHTLY_BUILD_TEST: python run_all_v2.py
+ Python37-onnx1.11-tf2.8:
+ python.version: '3.7'
+ ONNX_PATH: onnx==1.11.0
+ INSTALL_KERAS:
+ UNINSTALL_KERAS:
+ INSTALL_TENSORFLOW: pip install tensorflow==2.8.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
+ INSTALL_KERAS_RESNET: pip install keras-resnet
+ INSTALL_TRANSFORMERS: pip install transformers==3.4.0
+ INSTALL_NUMPY:
+ NIGHTLY_BUILD_TEST: python run_all_v2.py
+
+ Python310-onnx1.12-tf2.9:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ INSTALL_KERAS:
+ UNINSTALL_KERAS:
+ INSTALL_TENSORFLOW: pip install tensorflow==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_KERAS_RESNET: pip install keras-resnet
+ INSTALL_TRANSFORMERS: pip install transformers==4.12.0
+ INSTALL_NUMPY:
+ NIGHTLY_BUILD_TEST: python run_all_v2.py
+
steps:
- template: 'templates/keras2onnx_application_tests.yml'
parameters:
@@ -79,29 +67,6 @@ jobs:
vmImage: 'windows-2019'
strategy:
matrix:
- Python36-onnx1.5:
- python.version: '3.6'
- ONNX_PATH: onnx==1.5.0
- INSTALL_KERAS: pip install keras==2.2.4
- UNINSTALL_KERAS:
- INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.8.0
- INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS:
- NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
-
- Python37-onnx1.6:
- python.version: '3.7'
- ONNX_PATH: onnx==1.6.0
- INSTALL_KERAS: pip install keras==2.3.1
- UNINSTALL_KERAS:
- INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
- INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS:
- INSTALL_NUMPY: pip install numpy==1.19.0
- NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
-
Python37-onnx1.9:
python.version: '3.7'
ONNX_PATH: onnx==1.9.0
@@ -114,7 +79,7 @@ jobs:
INSTALL_NUMPY: pip install numpy==1.19.0
NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
- Python37-onnx1.11:
+ Python37-onnx1.11-tf1.15:
python.version: '3.7'
ONNX_PATH: onnx==1.11.0
INSTALL_KERAS: pip install keras==2.3.1
@@ -126,17 +91,29 @@ jobs:
INSTALL_NUMPY: pip install numpy==1.19.0
NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
- Python38-tf2.x:
+ Python38-onnx1.11-tf2.8:
python.version: '3.8'
ONNX_PATH: onnx==1.11.0
- INSTALL_KERAS:
- UNINSTALL_KERAS: pip uninstall keras -y
- INSTALL_TENSORFLOW: pip install tensorflow==2.5.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
+ INSTALL_KERAS:
+ UNINSTALL_KERAS:
+ INSTALL_TENSORFLOW: pip install tensorflow==2.8.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_KERAS_RESNET: pip install keras-resnet
INSTALL_TRANSFORMERS: pip install transformers==3.4.0
- INSTALL_NUMPY: pip install numpy==1.19.0
- NIGHTLY_BUILD_TEST: python run_all_v2.py --exclude "test_keras_applications_v2.py"
+ INSTALL_NUMPY:
+ NIGHTLY_BUILD_TEST: python run_all_v2.py
+
+ Python310-onnx1.12-tf2.9:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ INSTALL_KERAS:
+ UNINSTALL_KERAS:
+ INSTALL_TENSORFLOW: pip install tensorflow==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_KERAS_RESNET: pip install keras-resnet
+ INSTALL_TRANSFORMERS: pip install transformers==4.12.0
+ INSTALL_NUMPY:
+ NIGHTLY_BUILD_TEST: python run_all_v2.py
steps:
- template: 'templates/keras2onnx_application_tests.yml'
diff --git a/ci_build/azure_pipelines/keras2onnx_unit_test.yml b/ci_build/azure_pipelines/keras2onnx_unit_test.yml
index 98ed7abc7..f9ad0879e 100644
--- a/ci_build/azure_pipelines/keras2onnx_unit_test.yml
+++ b/ci_build/azure_pipelines/keras2onnx_unit_test.yml
@@ -9,49 +9,35 @@ jobs:
matrix:
############ TF Keras Unit Tests ############
- Python36-tf1.15:
- python.version: '3.6'
+ Python37-tf1.15:
+ python.version: '3.7'
ONNX_PATH: onnx==1.10.2
TENSORFLOW_PATH: tensorflow==1.15.0
INSTALL_ORT: pip install onnxruntime==1.9.0
INSTALL_NUMPY: pip install numpy==1.19.0
- Python37-tf2.1:
- python.version: '3.7'
- ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.1.0
- INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
-
- Python38-tf2.2:
+ Python38-tf2.5:
python.version: '3.8'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.2.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.5.0
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
- Python38-tf2.3:
- python.version: '3.8'
+ Python39-tf2.8:
+ python.version: '3.9'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.3.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.8.0
INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ INSTALL_NUMPY:
- Python38-tf2.5:
- python.version: '3.8'
- ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.5.0
- INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ Python310-tf2.9:
+      python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_NUMPY:
############ Pure Keras Unit Tests ############
- Keras-Py36-tf1.15.0:
- python.version: '3.6'
- ONNX_PATH: onnx==1.10.2
- KERAS: keras==2.2.5
- TENSORFLOW_PATH: tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
-
Keras-Py37-tf1.15.0:
python.version: '3.7'
ONNX_PATH: onnx==1.11.0
@@ -77,6 +63,14 @@ jobs:
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
+ Keras-Py310-tf2.9.0:
+ python.version: '3.10'
+      ONNX_PATH: onnx==1.12.0
+ KERAS: keras==2.9.0
+ TENSORFLOW_PATH: tensorflow==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
+ INSTALL_NUMPY: pip install numpy==1.23.0
+
steps:
- template: 'templates/keras2onnx_unit_test.yml'
parameters:
@@ -88,43 +82,37 @@ jobs:
strategy:
matrix:
############ TF Keras Unit Tests ############
- Python36-tf-1.15:
- python.version: '3.6'
+ Python37-tf-1.15:
+ python.version: '3.7'
ONNX_PATH: onnx==1.10.2
TENSORFLOW_PATH: tensorflow==1.15.0
INSTALL_ORT: pip install onnxruntime==1.9.0
-
- Python37-tf2.1:
- python.version: '3.7'
- ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.1.0
- INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
- Python37-tf2.2:
+ Python37-tf2.5:
python.version: '3.7'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.2.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.5.0
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
- Python37-tf2.3:
- python.version: '3.7'
+ Python38-tf2.8:
+ python.version: '3.8'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.3.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.8.0
INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ INSTALL_NUMPY:
- Python37-tf2.5:
- python.version: '3.7'
- ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.5.0
- INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ Python310-tf2.9:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_NUMPY:
############ Pure Keras Unit Tests ############
- Keras-Py36-tf1.15.0:
- python.version: '3.6'
+ Keras-Py37-tf1.15.0:
+ python.version: '3.7'
ONNX_PATH: onnx==1.10.2
KERAS: keras==2.2.5
TENSORFLOW_PATH: tensorflow==1.15.0
@@ -146,6 +134,14 @@ jobs:
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
+ Keras-Py310-tf2.9.0:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ KERAS: keras==2.9.0
+ TENSORFLOW_PATH: tensorflow==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_NUMPY:
+
steps:
- template: 'templates/keras2onnx_unit_test.yml'
parameters:
diff --git a/ci_build/azure_pipelines/onnxruntime_nightly_test.yml b/ci_build/azure_pipelines/onnxruntime_nightly_test.yml
index e556677a2..cc0bab8b7 100644
--- a/ci_build/azure_pipelines/onnxruntime_nightly_test.yml
+++ b/ci_build/azure_pipelines/onnxruntime_nightly_test.yml
@@ -19,19 +19,7 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.7', '3.6']
- tf_versions: ['1.13.1']
- onnx_opsets: ['']
- onnx_backends: {onnxruntime: ['nightly']}
- job:
- steps:
- - template: 'unit_test.yml'
- report_coverage: 'True'
-
- - template: 'templates/job_generator.yml'
- parameters:
- platforms: ['linux', 'windows']
- python_versions: [3.7', '3.6']
+ python_versions: ['3.7']
tf_versions: ['1.14.0']
onnx_opsets: ['']
onnx_backends: {onnxruntime: ['nightly']}
@@ -54,19 +42,19 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- platforms: ['linux']
- python_versions: ['3.7']
- tf_versions: ['2.4.1']
+ platforms: ['linux', 'windows']
+ python_versions: ['3.8']
+ tf_versions: ['2.7.3']
onnx_opsets: ['']
onnx_backends: {onnxruntime: ['nightly']}
job:
steps:
- template: 'unit_test.yml'
report_coverage: 'True'
-
+
- template: 'templates/job_generator.yml'
parameters:
- platforms: ['linux']
+ platforms: ['linux', 'windows']
python_versions: ['3.9']
tf_versions: ['2.8.0']
onnx_opsets: ['']
@@ -78,9 +66,9 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- platforms: ['linux']
- python_versions: ['3.9']
- tf_versions: ['2.6.2']
+ platforms: ['linux', 'windows']
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
onnx_opsets: ['']
onnx_backends: {onnxruntime: ['nightly']}
job:
@@ -88,18 +76,6 @@ stages:
- template: 'unit_test.yml'
report_coverage: 'True'
- - template: 'templates/job_generator.yml'
- parameters:
- platforms: ['windows']
- python_versions: ['3.7']
- tf_versions: ['2.5.0']
- onnx_opsets: ['']
- onnx_backends: {onnxruntime: ['nightly']}
- job:
- steps:
- - template: 'unit_test.yml'
- report_coverage: 'True'
-
- template: 'templates/combine_test_coverage.yml'
schedules:
diff --git a/ci_build/azure_pipelines/pretrained_model_test-matrix.yml b/ci_build/azure_pipelines/pretrained_model_test-matrix.yml
index 1d712ddf5..aaccbad31 100755
--- a/ci_build/azure_pipelines/pretrained_model_test-matrix.yml
+++ b/ci_build/azure_pipelines/pretrained_model_test-matrix.yml
@@ -4,8 +4,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.6']
- tf_versions: ['1.13.1', '1.12.3']
+ python_versions: ['3.7']
+ tf_versions: ['1.14.0']
job:
steps:
- template: 'pretrained_model_test.yml'
@@ -13,8 +13,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.7', '3.6']
- tf_versions: ['1.14.0']
+ python_versions: ['3.7']
+ tf_versions: ['1.15.2','2.1.0']
job:
steps:
- template: 'pretrained_model_test.yml'
@@ -22,8 +22,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.7']
- tf_versions: ['1.15.2','2.1.0']
+ python_versions: ['3.8']
+ tf_versions: ['2.7.0']
job:
steps:
- template: 'pretrained_model_test.yml'
@@ -31,8 +31,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.9']
- tf_versions: ['2.6.2']
+ python_versions: ['3.8']
+ tf_versions: ['2.8.0']
job:
steps:
- template: 'pretrained_model_test.yml'
@@ -40,8 +40,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.8']
- tf_versions: ['2.7.0']
+ python_versions: ['3.9']
+ tf_versions: ['2.9.1']
job:
steps:
- template: 'pretrained_model_test.yml'
@@ -49,8 +49,8 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.8']
- tf_versions: ['2.8.0']
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
job:
steps:
- template: 'pretrained_model_test.yml'
diff --git a/ci_build/azure_pipelines/pretrained_model_test.yml b/ci_build/azure_pipelines/pretrained_model_test.yml
index 0fe9900f5..9183f6ad0 100644
--- a/ci_build/azure_pipelines/pretrained_model_test.yml
+++ b/ci_build/azure_pipelines/pretrained_model_test.yml
@@ -15,18 +15,18 @@ jobs:
- template: 'templates/job_generator.yml'
parameters:
- # 2.7, tf
+ # 2.8, tf
python_versions: ['3.7']
- tf_versions: ['1.15.5','2.7.0']
+ tf_versions: ['1.15.5','2.8.0']
job:
steps:
- template: 'pretrained_model_test.yml'
- template: 'templates/job_generator.yml'
parameters:
- # 2.8, tf
- python_versions: ['3.9']
- tf_versions: ['2.8.0']
+      # 2.9, tf
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
job:
steps:
- template: 'pretrained_model_test.yml'
diff --git a/ci_build/azure_pipelines/templates/combine_test_coverage.yml b/ci_build/azure_pipelines/templates/combine_test_coverage.yml
index 3639ad472..f31b25c3b 100644
--- a/ci_build/azure_pipelines/templates/combine_test_coverage.yml
+++ b/ci_build/azure_pipelines/templates/combine_test_coverage.yml
@@ -24,7 +24,7 @@ stages:
inputs:
createCustomEnvironment: 'true'
environmentName: 'tf2onnx'
- packageSpecs: 'python=3.6'
+ packageSpecs: 'python=3.7'
updateConda: 'false'
- bash: |
diff --git a/ci_build/azure_pipelines/templates/job_generator.yml b/ci_build/azure_pipelines/templates/job_generator.yml
index 79851632e..dcbc08e79 100644
--- a/ci_build/azure_pipelines/templates/job_generator.yml
+++ b/ci_build/azure_pipelines/templates/job_generator.yml
@@ -5,8 +5,8 @@ parameters:
python_versions: ['3.7']
tf_versions: ['']
onnx_versions: ['']
- onnx_opsets: ['15', '14', '13', '12', '11', '10', '9']
- onnx_backends: {onnxruntime: ['1.10.0']}
+ onnx_opsets: ['17', '16', '15', '14', '13']
+ onnx_backends: {onnxruntime: ['1.12.0']}
job: {}
run_setup: 'True'
report_coverage: 'False'
diff --git a/ci_build/azure_pipelines/templates/keras2onnx_application_tests.yml b/ci_build/azure_pipelines/templates/keras2onnx_application_tests.yml
index 8d5530963..23b2e6585 100644
--- a/ci_build/azure_pipelines/templates/keras2onnx_application_tests.yml
+++ b/ci_build/azure_pipelines/templates/keras2onnx_application_tests.yml
@@ -19,6 +19,8 @@ steps:
python -m pip install --upgrade pip
conda config --set always_yes yes --set changeps1 no
pip install $(ONNX_PATH)
+ pip uninstall -y protobuf
+ pip install "protobuf<4.21.0"
pip install h5py==2.9.0
pip install parameterized
$(INSTALL_TENSORFLOW)
@@ -81,7 +83,7 @@ steps:
echo Test numpy installation... && python -c "import numpy"
pip install %ONNX_PATH%
pip uninstall -y protobuf
- pip install protobuf
+ pip install "protobuf<4.21.0"
pip install h5py==2.9.0
pip install parameterized
%INSTALL_TENSORFLOW%
diff --git a/ci_build/azure_pipelines/templates/keras2onnx_unit_test.yml b/ci_build/azure_pipelines/templates/keras2onnx_unit_test.yml
index 43f1d7688..00ac1d739 100644
--- a/ci_build/azure_pipelines/templates/keras2onnx_unit_test.yml
+++ b/ci_build/azure_pipelines/templates/keras2onnx_unit_test.yml
@@ -19,6 +19,8 @@ steps:
python -m pip install --upgrade pip
conda config --set always_yes yes --set changeps1 no
pip install $(ONNX_PATH)
+ pip uninstall -y protobuf
+ pip install "protobuf<4.21.0"
pip install h5py==2.9.0
pip install parameterized
pip install $(TENSORFLOW_PATH)
@@ -67,11 +69,11 @@ steps:
- script: |
call activate py$(python.version)
- python -m pip install --upgrade pip numpy==1.19
+ python -m pip install --upgrade pip numpy
echo Test numpy installation... && python -c "import numpy"
pip install %ONNX_PATH%
pip uninstall -y protobuf
- pip install protobuf
+ pip install "protobuf<4.21.0"
pip install h5py==2.9.0
pip install parameterized
pip install %TENSORFLOW_PATH%
diff --git a/ci_build/azure_pipelines/templates/setup.yml b/ci_build/azure_pipelines/templates/setup.yml
index 0f9e277ec..d5494e7b5 100644
--- a/ci_build/azure_pipelines/templates/setup.yml
+++ b/ci_build/azure_pipelines/templates/setup.yml
@@ -3,15 +3,29 @@
steps:
- bash: |
set -ex
- pip install pytest pytest-cov pytest-runner coverage graphviz requests pyyaml pillow pandas parameterized
- pip install $(CI_PIP_TF_NAME) $(CI_PIP_ONNX_NAME) $(CI_PIP_ONNX_BACKEND_NAME)
+ pip install pytest pytest-cov pytest-runner coverage graphviz requests pyyaml pillow pandas parameterized sympy coloredlogs flatbuffers
+ pip install $(CI_PIP_TF_NAME) $(CI_PIP_ONNX_NAME)
+ # Protobuf 3.20 results in linker errors on Windows in TF.
+ # Protobuf 4.0 is binary incompatible with what C++ TF uses.
+ # https://github.com/tensorflow/tensorflow/blob/c3337c73306b2b859d82fe130912f18e6a1c5c23/tensorflow/tools/pip_package/setup.py#L88
+ pip uninstall -y protobuf
+ pip install "protobuf<3.20.0"
+
+      # TF < 2.7 requires numpy <= 1.19, but onnxruntime >= 1.11 requires numpy >= 1.21
+ if [[ $CI_TF_VERSION < 2.7 ]] && [[ $CI_ONNX_BACKEND == "onnxruntime" ]] ;
+ then
+ pip install $(CI_PIP_ONNX_BACKEND_NAME) numpy --no-deps -U
+ else
+ pip install $(CI_PIP_ONNX_BACKEND_NAME)
+ fi
# TF 1.10 requires numpy <=1.14.5 and >=1.13.3, but onnxruntime 0.2.1 does not work with numpy <= 1.14.5
# Upgrade numpy only within constraints from other packages if any.
if [[ $CI_TF_VERSION == 1.10* ]] && [[ $CI_ONNX_BACKEND == "onnxruntime" ]] ;
then
pip install $(CI_PIP_ONNX_NAME) $(CI_PIP_ONNX_BACKEND_NAME) numpy --no-deps -U
fi
+
if [[ $CI_ONNXRUNTIME_NIGHTLY == "true" ]] ;
then
pip uninstall -y onnxruntime
@@ -22,13 +36,18 @@ steps:
if [[ $CI_SKIP_TFJS_TESTS == "False" ]] ;
then
- pip install tensorflowjs
+ pip install tensorflowjs==3.18.0
npm install @tensorflow/tfjs
fi
if [[ $CI_TF_VERSION == 2.* ]] ;
then
- pip install onnxruntime-extensions==0.3.1
+      # onnxruntime-extensions does not support Python 3.10 so far.
+ # https://github.com/microsoft/onnxruntime-extensions/issues/273
+ if [[ $CI_PYTHON_VERSION != 3.10 ]] ;
+ then
+ pip install onnxruntime-extensions==0.3.1
+ fi
if [[ $CI_TF_VERSION == 2.3* ]] ;
then
pip install tensorflow-text==${CI_TF_VERSION}
@@ -53,6 +72,12 @@ steps:
then
pip install "tensorflow-text>=2.8,<2.9"
fi
+ if [[ $CI_TF_VERSION == 2.9* ]] ;
+ then
+ pip install "tensorflow-text>=2.9,<2.10"
+ else
+ pip install tensorflow-text
+ fi
fi
python setup.py install
diff --git a/ci_build/azure_pipelines/templates/unit_test.yml b/ci_build/azure_pipelines/templates/unit_test.yml
index 8dd67e3e6..9eb444665 100644
--- a/ci_build/azure_pipelines/templates/unit_test.yml
+++ b/ci_build/azure_pipelines/templates/unit_test.yml
@@ -1,7 +1,7 @@
# Run unit test
parameters:
- onnx_opsets: ['15', '14', '13', '12', '11', '10', '9', '8']
+ onnx_opsets: ['17', '16', '15', '14', '13']
skip_tflite_tests: 'True'
skip_tfjs_tests: 'True'
skip_tf_tests: 'False'
diff --git a/ci_build/azure_pipelines/trimmed_keras2onnx_application_tests.yml b/ci_build/azure_pipelines/trimmed_keras2onnx_application_tests.yml
index 809dfc710..ac14bd48a 100644
--- a/ci_build/azure_pipelines/trimmed_keras2onnx_application_tests.yml
+++ b/ci_build/azure_pipelines/trimmed_keras2onnx_application_tests.yml
@@ -8,28 +8,28 @@ jobs:
vmImage: 'ubuntu-latest'
strategy:
matrix:
- Python36-onnx1.10:
- python.version: '3.6'
- ONNX_PATH: onnx==1.10.2
+    Python37-onnx1.11:
+ python.version: '3.7'
+ ONNX_PATH: onnx==1.11.0
INSTALL_KERAS: pip install keras==2.2.4
UNINSTALL_KERAS:
INSTALL_TENSORFLOW: pip install tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_KERAS_RESNET: pip install keras-resnet
INSTALL_TRANSFORMERS:
INSTALL_NUMPY: pip install numpy==1.19.0
NIGHTLY_BUILD_TEST: python run_all.py --exclude "test_keras_applications_v2.py"
- Python38-tf2:
- python.version: '3.8'
- ONNX_PATH: onnx==1.11.0
+ Python310-tf2.x:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
INSTALL_KERAS:
- UNINSTALL_KERAS: pip uninstall keras -y
- INSTALL_TENSORFLOW: pip install tensorflow==2.5.0
- INSTALL_ORT: pip install onnxruntime==1.11.0
+ UNINSTALL_KERAS:
+ INSTALL_TENSORFLOW: pip install tensorflow==2.9.1
+ INSTALL_ORT: pip install onnxruntime==1.12.0
INSTALL_KERAS_RESNET: pip install keras-resnet
- INSTALL_TRANSFORMERS: pip install transformers==3.4.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ INSTALL_TRANSFORMERS: pip install transformers==4.2.0
+ INSTALL_NUMPY:
NIGHTLY_BUILD_TEST: python run_all_v2.py
steps:
diff --git a/ci_build/azure_pipelines/trimmed_keras2onnx_unit_test.yml b/ci_build/azure_pipelines/trimmed_keras2onnx_unit_test.yml
index 45228b80c..aa42ece9a 100644
--- a/ci_build/azure_pipelines/trimmed_keras2onnx_unit_test.yml
+++ b/ci_build/azure_pipelines/trimmed_keras2onnx_unit_test.yml
@@ -9,18 +9,19 @@ jobs:
matrix:
############ TF Keras Unit Tests ############
- Python36-tf1.15:
- python.version: '3.6'
- ONNX_PATH: onnx==1.10.2
+ Python37-tf1.15:
+ python.version: '3.7'
+ ONNX_PATH: onnx==1.11.0
TENSORFLOW_PATH: tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
+ INSTALL_NUMPY: pip install numpy==1.19.0
- Python38-tf2.5:
+ Python38-tf2.9:
python.version: '3.8'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.5.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.9.0
INSTALL_ORT: pip install onnxruntime==1.11.0
- INSTALL_NUMPY: pip install numpy==1.19.0
+ INSTALL_NUMPY:
############ Pure Keras Unit Tests ############
Keras-Py37-tf1.15.0:
@@ -40,6 +41,13 @@ jobs:
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
+ Keras-Py310-tf2.9.0:
+ python.version: '3.10'
+      ONNX_PATH: onnx==1.12.0
+ TENSORFLOW_PATH: tensorflow==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_NUMPY:
+
steps:
- template: 'templates/keras2onnx_unit_test.yml'
parameters:
@@ -51,19 +59,27 @@ jobs:
strategy:
matrix:
############ TF Keras Unit Tests ############
- Python36-tf-1.15:
- python.version: '3.6'
- ONNX_PATH: onnx==1.10.2
- TENSORFLOW_PATH: tensorflow==1.15.0
- INSTALL_ORT: pip install onnxruntime==1.9.0
-
- Python37-tf2.3:
+ Python37-tf-1.15:
python.version: '3.7'
ONNX_PATH: onnx==1.11.0
- TENSORFLOW_PATH: tensorflow-cpu==2.3.0
+ TENSORFLOW_PATH: tensorflow==1.15.0
INSTALL_ORT: pip install onnxruntime==1.11.0
INSTALL_NUMPY: pip install numpy==1.19.0
+ Python38-tf2.9:
+ python.version: '3.8'
+ ONNX_PATH: onnx==1.11.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.11.0
+ INSTALL_NUMPY:
+
+ Python310-tf2.9:
+ python.version: '3.10'
+ ONNX_PATH: onnx==1.12.0
+ TENSORFLOW_PATH: tensorflow-cpu==2.9.0
+ INSTALL_ORT: pip install onnxruntime==1.12.0
+ INSTALL_NUMPY:
+
############ Pure Keras Unit Tests ############
Keras-Py37-tf2.2.0:
python.version: '3.7'
diff --git a/ci_build/azure_pipelines/unit_test-matrix.yml b/ci_build/azure_pipelines/unit_test-matrix.yml
index 19f5c29c3..664d33169 100644
--- a/ci_build/azure_pipelines/unit_test-matrix.yml
+++ b/ci_build/azure_pipelines/unit_test-matrix.yml
@@ -6,19 +6,8 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.6']
- tf_versions: ['1.12.3']
- onnx_opsets: ['']
- job:
- steps:
- - template: 'unit_test.yml'
- report_coverage: 'True'
-
- - template: 'templates/job_generator.yml'
- parameters:
- platforms: ['linux', 'windows']
- python_versions: ['3.7', '3.6']
- tf_versions: ['1.14.0']
+ python_versions: ['3.7']
+ tf_versions: ['1.14.0', '1.15.2']
onnx_opsets: ['']
job:
steps:
@@ -28,30 +17,30 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.7']
- tf_versions: ['1.15.2','2.1.0']
+ python_versions: ['3.8']
+ tf_versions: ['2.5.0']
onnx_opsets: ['']
job:
steps:
- template: 'unit_test.yml'
report_coverage: 'True'
-
+
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
python_versions: ['3.8']
- tf_versions: ['2.5.0']
+ tf_versions: ['2.6.2']
onnx_opsets: ['']
job:
steps:
- template: 'unit_test.yml'
report_coverage: 'True'
-
+
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.8']
- tf_versions: ['2.6.2']
+ python_versions: ['3.9', '3.10']
+ tf_versions: ['2.8.0']
onnx_opsets: ['']
job:
steps:
@@ -61,8 +50,8 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
- python_versions: ['3.8']
- tf_versions: ['2.8.0']
+ python_versions: ['3.9']
+ tf_versions: ['2.9.0']
onnx_opsets: ['']
job:
steps:
diff --git a/ci_build/azure_pipelines/unit_test.yml b/ci_build/azure_pipelines/unit_test.yml
index 0cc9d8a0a..94b3e3e72 100644
--- a/ci_build/azure_pipelines/unit_test.yml
+++ b/ci_build/azure_pipelines/unit_test.yml
@@ -79,9 +79,9 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- # TFJS tf 2.6
- python_versions: ['3.9']
- tf_versions: ['2.6.2']
+ # TFJS tf 2.9
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
onnx_opsets: ['']
skip_tfjs_tests: 'False'
skip_tf_tests: 'True'
@@ -92,9 +92,9 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- # TFLite tf 2.6
- python_versions: ['3.8']
- tf_versions: ['2.6.2']
+ # TFLite tf 2.9
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
onnx_opsets: ['']
skip_tflite_tests: 'False'
skip_tf_tests: 'True'
@@ -105,9 +105,9 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- # tf 2.6
- python_versions: ['3.8']
- tf_versions: ['2.6.2']
+ # tf 2.9
+ python_versions: ['3.10']
+ tf_versions: ['2.9.1']
onnx_opsets: ['']
job:
steps:
@@ -127,10 +127,9 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- # tf 1.12
- python_versions: [3.6']
- tf_versions: ['1.12.3']
- onnx_opsets: ['9']
+ platforms: ['windows']
+ tf_versions: ['1.14.0']
+ onnx_opsets: ['14']
job:
steps:
- template: 'unit_test.yml'
@@ -138,9 +137,10 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
+ python_versions: ['3.7']
platforms: ['windows']
- tf_versions: ['1.14.0']
- onnx_opsets: ['14']
+ tf_versions: ['2.4.1']
+ onnx_opsets: ['13']
job:
steps:
- template: 'unit_test.yml'
@@ -148,10 +148,32 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
- python_versions: ['3.7']
+ python_versions: ['3.8']
platforms: ['windows']
- tf_versions: ['2.4.1']
- onnx_opsets: ['13']
+ tf_versions: ['2.8.1']
+ onnx_opsets: ['15']
+ job:
+ steps:
+ - template: 'unit_test.yml'
+ report_coverage: 'True'
+
+ - template: 'templates/job_generator.yml'
+ parameters:
+ python_versions: ['3.9']
+ platforms: ['windows']
+ tf_versions: ['2.9.1']
+ onnx_opsets: ['16']
+ job:
+ steps:
+ - template: 'unit_test.yml'
+ report_coverage: 'True'
+
+ - template: 'templates/job_generator.yml'
+ parameters:
+ python_versions: ['3.10']
+ platforms: ['windows']
+ tf_versions: ['2.9.1']
+ onnx_opsets: ['17']
job:
steps:
- template: 'unit_test.yml'
diff --git a/setup.py b/setup.py
index ae654b152..2719e2207 100644
--- a/setup.py
+++ b/setup.py
@@ -95,8 +95,8 @@ def run(self):
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: 3.9']
+ 'Programming Language :: Python :: 3.9',
+ 'Programming Language :: Python :: 3.10']
)
diff --git a/support_status.md b/support_status.md
index e66e6fd00..f4f736539 100644
--- a/support_status.md
+++ b/support_status.md
@@ -4,267 +4,269 @@
### Domain: "" (default domain)
| Tensorflow Op | Convertible to ONNX Op Versions |
| ------------- | ------------------------------- |
-| Abs | 1 ~ 15 |
-| Acos | 7 ~ 15 |
-| Acosh | 9 ~ 15 |
-| Add | 1 ~ 15 |
-| AddN | 6 ~ 15 |
-| AddV2 | 1 ~ 15 |
-| AdjustContrastv2 | 1 ~ 15 |
-| AdjustHue | 11 ~ 15 |
-| AdjustSaturation | 11 ~ 15 |
-| All | 6 ~ 15 |
-| Any | 6 ~ 15 |
-| ArgMax | 1 ~ 15 |
-| ArgMin | 1 ~ 15 |
-| AsString | 9 ~ 15 |
-| Asin | 7 ~ 15 |
-| Asinh | 9 ~ 15 |
-| Atan | 7 ~ 15 |
-| Atan2 | 9 ~ 15 |
-| Atanh | 9 ~ 15 |
-| AvgPool | 1 ~ 15 |
-| AvgPool3D | 1 ~ 15 |
-| BatchMatMul | 1 ~ 15 |
-| BatchMatMulV2 | 1 ~ 15 |
-| BatchToSpaceND | 1 ~ 15 |
-| BiasAdd | 1 ~ 15 |
-| BiasAddV1 | 1 ~ 15 |
-| Bincount | 11 ~ 15 |
-| BroadcastTo | 8 ~ 15 |
-| CTCGreedyDecoder | 11 ~ 15 |
-| Cast | 1 ~ 15 |
-| Ceil | 1 ~ 15 |
-| CheckNumerics | 1 ~ 15 |
-| ClipByValue | 8 ~ 15 |
-| CombinedNonMaxSuppression | 12 ~ 15 |
-| ComplexAbs | 1 ~ 15 |
-| Concat | 1 ~ 15 |
-| ConcatV2 | 1 ~ 15 |
-| Const | 1 ~ 15 |
-| ConstV2 | 1 ~ 15 |
-| Conv1D | 1 ~ 15 |
-| Conv2D | 1 ~ 15 |
-| Conv2DBackpropInput | 1 ~ 15 |
-| Conv3D | 1 ~ 15 |
-| Conv3DBackpropInputV2 | 1 ~ 15 |
-| Cos | 7 ~ 15 |
-| Cosh | 9 ~ 15 |
-| CropAndResize | 10 ~ 15 |
-| CudnnRNN | 10 ~ 15 |
-| Cumsum | 11 ~ 15 |
-| DenseBincount | 11 ~ 15 |
-| DenseToDenseSetOperation | 11 ~ 15 |
-| DepthToSpace | 1 ~ 15 |
-| DepthwiseConv2d | 1 ~ 15 |
-| DepthwiseConv2dNative | 1 ~ 15 |
-| Div | 1 ~ 15 |
-| DivNoNan | 9 ~ 15 |
-| Dropout | 1 ~ 15 |
-| DynamicPartition | 9 ~ 15 |
-| DynamicStitch | 10 ~ 15 |
-| Einsum | 12 ~ 15 |
-| Elu | 1 ~ 15 |
-| EnsureShape | 1 ~ 15 |
-| Equal | 1 ~ 15 |
-| Erf | 1 ~ 15 |
-| Exp | 1 ~ 15 |
-| ExpandDims | 1 ~ 15 |
-| FFT | 1 ~ 15 |
-| FIFOQueueV2 | 8 ~ 15 |
-| FakeQuantWithMinMaxArgs | 10 ~ 15 |
-| FakeQuantWithMinMaxVars | 10 ~ 15 |
-| Fill | 7 ~ 15 |
-| Flatten | 1 ~ 15 |
-| Floor | 1 ~ 15 |
-| FloorDiv | 6 ~ 15 |
-| FloorMod | 7 ~ 15 |
-| FusedBatchNorm | 6 ~ 15 |
-| FusedBatchNormV2 | 6 ~ 15 |
-| FusedBatchNormV3 | 6 ~ 15 |
-| Gather | 1 ~ 15 |
-| GatherNd | 1 ~ 15 |
-| GatherV2 | 1 ~ 15 |
-| Greater | 1 ~ 15 |
-| GreaterEqual | 7 ~ 15 |
-| HardSwish | 14 ~ 15 |
-| HashTableV2 | 8 ~ 15 |
-| Identity | 1 ~ 15 |
-| IdentityN | 1 ~ 15 |
-| If | 1 ~ 15 |
-| InvertPermutation | 11 ~ 15 |
-| IsFinite | 10 ~ 15 |
-| IsInf | 10 ~ 15 |
-| IsNan | 9 ~ 15 |
-| IteratorGetNext | 8 ~ 15 |
-| IteratorV2 | 8 ~ 15 |
-| LRN | 1 ~ 15 |
-| LSTMBlockCell | 1 ~ 15 |
-| LeakyRelu | 1 ~ 15 |
-| LeftShift | 11 ~ 15 |
-| Less | 1 ~ 15 |
-| LessEqual | 7 ~ 15 |
-| Log | 1 ~ 15 |
-| LogSoftmax | 1 ~ 15 |
-| LogicalAnd | 1 ~ 15 |
-| LogicalNot | 1 ~ 15 |
-| LogicalOr | 1 ~ 15 |
-| LookupTableFindV2 | 8 ~ 15 |
-| LookupTableSizeV2 | 1 ~ 15 |
-| Loop | 7 ~ 15 |
-| MatMul | 1 ~ 15 |
-| MatrixBandPart | 7 ~ 15 |
-| MatrixDeterminant | 11 ~ 15 |
-| MatrixDiag | 12 ~ 15 |
-| MatrixDiagPart | 11 ~ 15 |
-| MatrixDiagPartV2 | 11 ~ 15 |
-| MatrixDiagPartV3 | 11 ~ 15 |
-| MatrixDiagV2 | 12 ~ 15 |
-| MatrixDiagV3 | 12 ~ 15 |
-| MatrixSetDiagV3 | 12 ~ 15 |
-| Max | 1 ~ 15 |
-| MaxPool | 1 ~ 15 |
-| MaxPool3D | 1 ~ 15 |
-| MaxPoolV2 | 1 ~ 15 |
-| MaxPoolWithArgmax | 8 ~ 15 |
-| Maximum | 1 ~ 15 |
-| Mean | 1 ~ 15 |
-| Min | 1 ~ 15 |
-| Minimum | 1 ~ 15 |
-| MirrorPad | 1 ~ 15 |
-| Mul | 1 ~ 15 |
-| Multinomial | 7 ~ 15 |
-| Neg | 1 ~ 15 |
-| NoOp | 1 ~ 15 |
-| NonMaxSuppressionV2 | 10 ~ 15 |
-| NonMaxSuppressionV3 | 10 ~ 15 |
-| NonMaxSuppressionV4 | 10 ~ 15 |
-| NonMaxSuppressionV5 | 10 ~ 15 |
-| NotEqual | 1 ~ 15 |
-| OneHot | 1 ~ 15 |
-| Pack | 1 ~ 15 |
-| Pad | 1 ~ 15 |
-| PadV2 | 1 ~ 15 |
-| ParallelDynamicStitch | 10 ~ 15 |
-| Placeholder | 1 ~ 15 |
-| PlaceholderV2 | 1 ~ 15 |
-| PlaceholderWithDefault | 1 ~ 15 |
-| Pow | 1 ~ 15 |
-| Prelu | 1 ~ 15 |
-| Prod | 1 ~ 15 |
-| QueueDequeueManyV2 | 8 ~ 15 |
-| QueueDequeueUpToV2 | 8 ~ 15 |
-| QueueDequeueV2 | 8 ~ 15 |
-| RFFT | 1 ~ 15 |
-| RFFT2D | 1 ~ 15 |
-| RaggedGather | 11 ~ 15 |
-| RaggedRange | 11 ~ 15 |
-| RaggedTensorFromVariant | 13 ~ 15 |
-| RaggedTensorToSparse | 11 ~ 15 |
-| RaggedTensorToTensor | 11 ~ 15 |
-| RaggedTensorToVariant | 13 ~ 15 |
-| RandomNormal | 1 ~ 15 |
-| RandomNormalLike | 1 ~ 15 |
-| RandomShuffle | 10 ~ 15 |
-| RandomStandardNormal | 1 ~ 15 |
-| RandomUniform | 1 ~ 15 |
-| RandomUniformInt | 1 ~ 15 |
-| RandomUniformLike | 1 ~ 15 |
-| Range | 7 ~ 15 |
-| RealDiv | 1 ~ 15 |
-| Reciprocal | 1 ~ 15 |
-| Relu | 1 ~ 15 |
-| Relu6 | 1 ~ 15 |
-| Reshape | 1 ~ 15 |
-| ResizeBicubic | 7 ~ 15 |
-| ResizeBilinear | 7 ~ 15 |
-| ResizeNearestNeighbor | 7 ~ 15 |
-| ReverseSequence | 8 ~ 15 (Except 9) |
-| ReverseV2 | 10 ~ 15 |
-| RightShift | 11 ~ 15 |
-| Roll | 10 ~ 15 |
-| Round | 1 ~ 15 |
-| Rsqrt | 1 ~ 15 |
-| SampleDistortedBoundingBox | 9 ~ 15 |
-| SampleDistortedBoundingBoxV2 | 9 ~ 15 |
-| Scan | 7 ~ 15 |
-| ScatterNd | 11 ~ 15 |
-| SegmentMax | 11 ~ 15 |
-| SegmentMean | 11 ~ 15 |
-| SegmentMin | 11 ~ 15 |
-| SegmentProd | 11 ~ 15 |
-| SegmentSum | 11 ~ 15 |
-| Select | 7 ~ 15 |
-| SelectV2 | 7 ~ 15 |
-| Selu | 1 ~ 15 |
-| Shape | 1 ~ 15 |
-| Sigmoid | 1 ~ 15 |
-| Sign | 1 ~ 15 |
-| Sin | 7 ~ 15 |
-| Sinh | 9 ~ 15 |
-| Size | 1 ~ 15 |
-| Slice | 1 ~ 15 |
-| Softmax | 1 ~ 15 |
-| SoftmaxCrossEntropyWithLogits | 7 ~ 15 |
-| Softplus | 1 ~ 15 |
-| Softsign | 1 ~ 15 |
-| SpaceToBatchND | 1 ~ 15 |
-| SpaceToDepth | 1 ~ 15 |
-| SparseFillEmptyRows | 11 ~ 15 |
-| SparseReshape | 11 ~ 15 |
-| SparseSegmentMean | 11 ~ 15 |
-| SparseSegmentMeanWithNumSegments | 11 ~ 15 |
-| SparseSegmentSqrtN | 11 ~ 15 |
-| SparseSegmentSqrtNWithNumSegments | 11 ~ 15 |
-| SparseSegmentSum | 11 ~ 15 |
-| SparseSegmentSumWithNumSegments | 11 ~ 15 |
-| SparseSoftmaxCrossEntropyWithLogits | 7 ~ 15 |
-| SparseToDense | 11 ~ 15 |
-| Split | 1 ~ 15 |
-| SplitV | 1 ~ 15 |
-| Sqrt | 1 ~ 15 |
-| Square | 1 ~ 15 |
-| SquaredDifference | 1 ~ 15 |
-| SquaredDistance | 12 ~ 15 |
-| Squeeze | 1 ~ 15 |
-| StatelessIf | 1 ~ 15 |
-| StatelessWhile | 7 ~ 15 |
-| StopGradient | 1 ~ 15 |
-| StridedSlice | 1 ~ 15 |
-| StringLower | 10 ~ 15 |
-| StringToNumber | 9 ~ 15 |
-| StringUpper | 10 ~ 15 |
-| Sub | 1 ~ 15 |
-| Sum | 1 ~ 15 |
-| TFL_CONCATENATION | 1 ~ 15 |
-| TFL_DEQUANTIZE | 1 ~ 15 |
-| TFL_PRELU | 7 ~ 15 |
-| TFL_QUANTIZE | 1 ~ 15 |
-| TFL_TFLite_Detection_PostProcess | 11 ~ 15 |
-| TFL_WHILE | 7 ~ 15 |
-| Tan | 7 ~ 15 |
-| Tanh | 1 ~ 15 |
-| TensorListFromTensor | 7 ~ 15 |
-| TensorListGetItem | 7 ~ 15 |
-| TensorListLength | 7 ~ 15 |
-| TensorListReserve | 7 ~ 15 |
-| TensorListResize | 7 ~ 15 |
-| TensorListSetItem | 7 ~ 15 |
-| TensorListStack | 7 ~ 15 |
-| TensorScatterUpdate | 11 ~ 15 |
-| Tile | 1 ~ 15 |
-| TopKV2 | 1 ~ 15 |
-| Transpose | 1 ~ 15 |
-| TruncateDiv | 1 ~ 15 |
-| Unique | 11 ~ 15 |
-| Unpack | 1 ~ 15 |
-| UnsortedSegmentMax | 11 ~ 15 |
-| UnsortedSegmentMin | 11 ~ 15 |
-| UnsortedSegmentProd | 11 ~ 15 |
-| UnsortedSegmentSum | 11 ~ 15 |
-| Where | 9 ~ 15 |
-| While | 7 ~ 15 |
-| ZerosLike | 1 ~ 15 |
+| Abs | 1 ~ 17 |
+| Acos | 7 ~ 17 |
+| Acosh | 9 ~ 17 |
+| Add | 1 ~ 17 |
+| AddN | 6 ~ 17 |
+| AddV2 | 1 ~ 17 |
+| AdjustContrastv2 | 1 ~ 17 |
+| AdjustHue | 11 ~ 17 |
+| AdjustSaturation | 11 ~ 17 |
+| All | 6 ~ 17 |
+| Any | 6 ~ 17 |
+| ArgMax | 1 ~ 17 |
+| ArgMin | 1 ~ 17 |
+| AsString | 9 ~ 17 |
+| Asin | 7 ~ 17 |
+| Asinh | 9 ~ 17 |
+| Atan | 7 ~ 17 |
+| Atan2 | 9 ~ 17 |
+| Atanh | 9 ~ 17 |
+| AvgPool | 1 ~ 17 |
+| AvgPool3D | 1 ~ 17 |
+| BatchMatMul | 1 ~ 17 |
+| BatchMatMulV2 | 1 ~ 17 |
+| BatchToSpaceND | 1 ~ 17 |
+| BiasAdd | 1 ~ 17 |
+| BiasAddV1 | 1 ~ 17 |
+| Bincount | 11 ~ 17 |
+| BroadcastTo | 8 ~ 17 |
+| CTCGreedyDecoder | 11 ~ 17 |
+| Cast | 1 ~ 17 |
+| Ceil | 1 ~ 17 |
+| CheckNumerics | 1 ~ 17 |
+| ClipByValue | 8 ~ 17 |
+| CombinedNonMaxSuppression | 12 ~ 17 |
+| ComplexAbs | 1 ~ 17 |
+| Concat | 1 ~ 17 |
+| ConcatV2 | 1 ~ 17 |
+| Const | 1 ~ 17 |
+| ConstV2 | 1 ~ 17 |
+| Conv1D | 1 ~ 17 |
+| Conv2D | 1 ~ 17 |
+| Conv2DBackpropInput | 1 ~ 17 |
+| Conv3D | 1 ~ 17 |
+| Conv3DBackpropInputV2 | 1 ~ 17 |
+| Cos | 7 ~ 17 |
+| Cosh | 9 ~ 17 |
+| CropAndResize | 10 ~ 17 |
+| CudnnRNN | 10 ~ 17 |
+| Cumsum | 11 ~ 17 |
+| DenseBincount | 11 ~ 17 |
+| DenseToDenseSetOperation | 11 ~ 17 |
+| DepthToSpace | 1 ~ 17 |
+| DepthwiseConv2d | 1 ~ 17 |
+| DepthwiseConv2dNative | 1 ~ 17 |
+| Div | 1 ~ 17 |
+| DivNoNan | 9 ~ 17 |
+| Dropout | 1 ~ 17 |
+| DynamicPartition | 9 ~ 17 |
+| DynamicStitch | 10 ~ 17 |
+| Einsum | 12 ~ 17 |
+| Elu | 1 ~ 17 |
+| EnsureShape | 1 ~ 17 |
+| Equal | 1 ~ 17 |
+| Erf | 1 ~ 17 |
+| Exp | 1 ~ 17 |
+| ExpandDims | 1 ~ 17 |
+| FFT | 1 ~ 17 |
+| FIFOQueueV2 | 8 ~ 17 |
+| FakeQuantWithMinMaxArgs | 10 ~ 17 |
+| FakeQuantWithMinMaxVars | 10 ~ 17 |
+| Fill | 7 ~ 17 |
+| Flatten | 1 ~ 17 |
+| Floor | 1 ~ 17 |
+| FloorDiv | 6 ~ 17 |
+| FloorMod | 7 ~ 17 |
+| FusedBatchNorm | 6 ~ 17 |
+| FusedBatchNormV2 | 6 ~ 17 |
+| FusedBatchNormV3 | 6 ~ 17 |
+| Gather | 1 ~ 17 |
+| GatherNd | 1 ~ 17 |
+| GatherV2 | 1 ~ 17 |
+| Greater | 1 ~ 17 |
+| GreaterEqual | 7 ~ 17 |
+| HardSwish | 14 ~ 17 |
+| HashTableV2 | 8 ~ 17 |
+| Identity | 1 ~ 17 |
+| IdentityN | 1 ~ 17 |
+| If | 1 ~ 17 |
+| InvertPermutation | 11 ~ 17 |
+| IsFinite | 10 ~ 17 |
+| IsInf | 10 ~ 17 |
+| IsNan | 9 ~ 17 |
+| IteratorGetNext | 8 ~ 17 |
+| IteratorV2 | 8 ~ 17 |
+| LRN | 1 ~ 17 |
+| LSTMBlockCell | 1 ~ 17 |
+| LeakyRelu | 1 ~ 17 |
+| LeftShift | 11 ~ 17 |
+| Less | 1 ~ 17 |
+| LessEqual | 7 ~ 17 |
+| Log | 1 ~ 17 |
+| LogSoftmax | 1 ~ 17 |
+| LogicalAnd | 1 ~ 17 |
+| LogicalNot | 1 ~ 17 |
+| LogicalOr | 1 ~ 17 |
+| LookupTableFindV2 | 8 ~ 17 |
+| LookupTableSizeV2 | 1 ~ 17 |
+| Loop | 7 ~ 17 |
+| MatMul | 1 ~ 17 |
+| MatrixBandPart | 7 ~ 17 |
+| MatrixDeterminant | 11 ~ 17 |
+| MatrixDiag | 12 ~ 17 |
+| MatrixDiagPart | 11 ~ 17 |
+| MatrixDiagPartV2 | 11 ~ 17 |
+| MatrixDiagPartV3 | 11 ~ 17 |
+| MatrixDiagV2 | 12 ~ 17 |
+| MatrixDiagV3 | 12 ~ 17 |
+| MatrixSetDiagV3 | 12 ~ 17 |
+| Max | 1 ~ 17 |
+| MaxPool | 1 ~ 17 |
+| MaxPool3D | 1 ~ 17 |
+| MaxPoolV2 | 1 ~ 17 |
+| MaxPoolWithArgmax | 8 ~ 17 |
+| Maximum | 1 ~ 17 |
+| Mean | 1 ~ 17 |
+| Min | 1 ~ 17 |
+| Minimum | 1 ~ 17 |
+| MirrorPad | 1 ~ 17 |
+| Mul | 1 ~ 17 |
+| Multinomial | 7 ~ 17 |
+| Neg | 1 ~ 17 |
+| NoOp | 1 ~ 17 |
+| NonMaxSuppressionV2 | 10 ~ 17 |
+| NonMaxSuppressionV3 | 10 ~ 17 |
+| NonMaxSuppressionV4 | 10 ~ 17 |
+| NonMaxSuppressionV5 | 10 ~ 17 |
+| NotEqual | 1 ~ 17 |
+| OneHot | 1 ~ 17 |
+| Pack | 1 ~ 17 |
+| Pad | 1 ~ 17 |
+| PadV2 | 1 ~ 17 |
+| ParallelDynamicStitch | 10 ~ 17 |
+| Placeholder | 1 ~ 17 |
+| PlaceholderV2 | 1 ~ 17 |
+| PlaceholderWithDefault | 1 ~ 17 |
+| Pow | 1 ~ 17 |
+| Prelu | 1 ~ 17 |
+| Prod | 1 ~ 17 |
+| QueueDequeueManyV2 | 8 ~ 17 |
+| QueueDequeueUpToV2 | 8 ~ 17 |
+| QueueDequeueV2 | 8 ~ 17 |
+| RFFT | 1 ~ 17 |
+| RFFT2D | 1 ~ 17 |
+| RaggedGather | 11 ~ 17 |
+| RaggedRange | 11 ~ 17 |
+| RaggedTensorFromVariant | 13 ~ 17 |
+| RaggedTensorToSparse | 11 ~ 17 |
+| RaggedTensorToTensor | 11 ~ 17 |
+| RaggedTensorToVariant | 13 ~ 17 |
+| RandomNormal | 1 ~ 17 |
+| RandomNormalLike | 1 ~ 17 |
+| RandomShuffle | 10 ~ 17 |
+| RandomStandardNormal | 1 ~ 17 |
+| RandomUniform | 1 ~ 17 |
+| RandomUniformInt | 1 ~ 17 |
+| RandomUniformLike | 1 ~ 17 |
+| Range | 7 ~ 17 |
+| RealDiv | 1 ~ 17 |
+| Reciprocal | 1 ~ 17 |
+| Relu | 1 ~ 17 |
+| Relu6 | 1 ~ 17 |
+| Reshape | 1 ~ 17 |
+| ResizeBicubic | 7 ~ 17 |
+| ResizeBilinear | 7 ~ 17 |
+| ResizeNearestNeighbor | 7 ~ 17 |
+| ReverseSequence | 8 ~ 17 (Except 9) |
+| ReverseV2 | 10 ~ 17 |
+| RightShift | 11 ~ 17 |
+| Rint | 11 ~ 17 |
+| Roll | 10 ~ 17 |
+| Round | 1 ~ 17 |
+| Rsqrt | 1 ~ 17 |
+| SampleDistortedBoundingBox | 9 ~ 17 |
+| SampleDistortedBoundingBoxV2 | 9 ~ 17 |
+| Scan | 7 ~ 17 |
+| ScatterNd | 11 ~ 17 |
+| SegmentMax | 11 ~ 17 |
+| SegmentMean | 11 ~ 17 |
+| SegmentMin | 11 ~ 17 |
+| SegmentProd | 11 ~ 17 |
+| SegmentSum | 11 ~ 17 |
+| Select | 7 ~ 17 |
+| SelectV2 | 7 ~ 17 |
+| Selu | 1 ~ 17 |
+| Shape | 1 ~ 17 |
+| Sigmoid | 1 ~ 17 |
+| Sign | 1 ~ 17 |
+| Sin | 7 ~ 17 |
+| Sinh | 9 ~ 17 |
+| Size | 1 ~ 17 |
+| Slice | 1 ~ 17 |
+| Softmax | 1 ~ 17 |
+| SoftmaxCrossEntropyWithLogits | 7 ~ 17 |
+| Softplus | 1 ~ 17 |
+| Softsign | 1 ~ 17 |
+| SpaceToBatchND | 1 ~ 17 |
+| SpaceToDepth | 1 ~ 17 |
+| SparseFillEmptyRows | 11 ~ 17 |
+| SparseReshape | 11 ~ 17 |
+| SparseSegmentMean | 11 ~ 17 |
+| SparseSegmentMeanWithNumSegments | 11 ~ 17 |
+| SparseSegmentSqrtN | 11 ~ 17 |
+| SparseSegmentSqrtNWithNumSegments | 11 ~ 17 |
+| SparseSegmentSum | 11 ~ 17 |
+| SparseSegmentSumWithNumSegments | 11 ~ 17 |
+| SparseSoftmaxCrossEntropyWithLogits | 7 ~ 17 |
+| SparseToDense | 11 ~ 17 |
+| Split | 1 ~ 17 |
+| SplitV | 1 ~ 17 |
+| Sqrt | 1 ~ 17 |
+| Square | 1 ~ 17 |
+| SquaredDifference | 1 ~ 17 |
+| SquaredDistance | 12 ~ 17 |
+| Squeeze | 1 ~ 17 |
+| StatelessIf | 1 ~ 17 |
+| StatelessWhile | 7 ~ 17 |
+| StopGradient | 1 ~ 17 |
+| StridedSlice | 1 ~ 17 |
+| StringLower | 10 ~ 17 |
+| StringToNumber | 9 ~ 17 |
+| StringUpper | 10 ~ 17 |
+| Sub | 1 ~ 17 |
+| Sum | 1 ~ 17 |
+| TFL_CONCATENATION | 1 ~ 17 |
+| TFL_DEQUANTIZE | 1 ~ 17 |
+| TFL_PRELU | 7 ~ 17 |
+| TFL_QUANTIZE | 1 ~ 17 |
+| TFL_TFLite_Detection_PostProcess | 11 ~ 17 |
+| TFL_WHILE | 7 ~ 17 |
+| Tan | 7 ~ 17 |
+| Tanh | 1 ~ 17 |
+| TensorListFromTensor | 7 ~ 17 |
+| TensorListGetItem | 7 ~ 17 |
+| TensorListLength | 7 ~ 17 |
+| TensorListReserve | 7 ~ 17 |
+| TensorListResize | 7 ~ 17 |
+| TensorListSetItem | 7 ~ 17 |
+| TensorListStack | 7 ~ 17 |
+| TensorScatterAdd | 16 ~ 17 |
+| TensorScatterUpdate | 11 ~ 17 |
+| Tile | 1 ~ 17 |
+| TopKV2 | 1 ~ 17 |
+| Transpose | 1 ~ 17 |
+| TruncateDiv | 1 ~ 17 |
+| Unique | 11 ~ 17 |
+| Unpack | 1 ~ 17 |
+| UnsortedSegmentMax | 11 ~ 17 |
+| UnsortedSegmentMin | 11 ~ 17 |
+| UnsortedSegmentProd | 11 ~ 17 |
+| UnsortedSegmentSum | 11 ~ 17 |
+| Where | 9 ~ 17 |
+| While | 7 ~ 17 |
+| ZerosLike | 1 ~ 17 |
### Domain: "com.google.tensorflow"
| Tensorflow Op | Convertible to ONNX Op Versions |
| ------------- | ------------------------------- |
diff --git a/tests/backend_test_base.py b/tests/backend_test_base.py
index 9d08c306a..38cc52dcf 100644
--- a/tests/backend_test_base.py
+++ b/tests/backend_test_base.py
@@ -20,6 +20,7 @@
import onnx
from common import get_test_config
from tfjs_runner import run_tfjs
+from tf2onnx import constants
from tf2onnx import utils
from tf2onnx.tfonnx import process_tf_graph
from tf2onnx import optimizer
@@ -112,7 +113,7 @@ def assert_results_equal(self, expected, actual, rtol, atol, mtol=None,
check_value=True, check_shape=True, check_dtype=True):
for expected_val, actual_val in zip(expected, actual):
if check_value:
- if expected_val.dtype == np.object:
+ if expected_val.dtype == object:
# TFLite pads strings with nul bytes
decode = np.vectorize(lambda x: x.replace(b'\x00', b'').decode('UTF-8'))
expected_val_str = decode(expected_val)
@@ -366,6 +367,7 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
graph_def_path = os.path.join(self.test_data_directory, self._testMethodName + "_after_tf_optimize.pb")
utils.save_protobuf(graph_def_path, graph_def)
self.logger.debug("created file %s", graph_def_path)
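+ # Copy process_args up front so the outputs_as_nchw handling below can reuse it on every backend path.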
+ tfl_process_args = process_args.copy()
if test_tfjs:
tfjs_path = self.convert_to_tfjs(graph_def_path, output_names_with_port)
@@ -395,6 +397,10 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
g = optimizer.optimize_graph(g, catch_errors=False)
actual = self.run_backend(g, output_names_with_port, onnx_feed_dict, large_model,
use_custom_ops=use_custom_ops)
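+ # With outputs_as_nchw the ONNX graph emits NCHW outputs; transpose them back to NHWC before comparing with TF.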
+ if 'outputs_as_nchw' in tfl_process_args:
+ for output_name in tfl_process_args['outputs_as_nchw']:
+ i = output_names_with_port.index(output_name)
+ actual[i] = np.transpose(actual[i], constants.NCHW_TO_NHWC)
self.assert_results_equal(expected, actual, rtol, atol, mtol, check_value, check_shape, check_dtype)
self.assert_shapes_correct(g, self.config.allow_missing_shapes, not self.config.skip_onnx_checker)
@@ -410,12 +416,14 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
if run_tfl_consistency_test:
self.assert_results_equal(expected, tfl_res, rtol, atol, mtol, check_value, check_shape, check_dtype)
- tfl_process_args = process_args.copy()
if 'inputs_as_nchw' in tfl_process_args:
nchw_inps_with_port = tfl_process_args['inputs_as_nchw']
tfl_process_args['inputs_as_nchw'] = [i.split(':')[0] for i in nchw_inps_with_port]
input_names_without_port = [inp.split(':')[0] for inp in feed_dict.keys()]
-
+ if 'outputs_as_nchw' in tfl_process_args:
+ nchw_outps_with_port = tfl_process_args['outputs_as_nchw']
+ tfl_process_args['outputs_as_nchw'] = [i.split(':')[0] for i in nchw_outps_with_port]
+ output_names_with_port = [i.split(':')[0] for i in nchw_outps_with_port]
g = process_tf_graph(None, opset=self.config.opset,
input_names=input_names_without_port,
output_names=tfl_outputs,
@@ -427,6 +435,10 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
onnx_feed_dict_without_port = {k.split(':')[0]: v for k, v in onnx_feed_dict.items()}
onnx_tfl_res = self.run_backend(g, tfl_outputs, onnx_feed_dict_without_port,
postfix="_from_tflite", use_custom_ops=use_custom_ops)
+ if 'outputs_as_nchw' in tfl_process_args:
+ for output_name in tfl_process_args['outputs_as_nchw']:
+ i = output_names_with_port.index(output_name)
+ onnx_tfl_res[i] = np.transpose(onnx_tfl_res[i], constants.NCHW_TO_NHWC)
self.assert_results_equal(tfl_res, onnx_tfl_res, rtol, atol, mtol, check_value, check_shape, check_dtype)
self.assert_shapes_correct(g, self.config.allow_missing_shapes, not self.config.skip_onnx_checker)
@@ -456,6 +468,10 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
g = optimizer.optimize_graph(g)
onnx_tfjs_res = self.run_backend(g, None, onnx_feed_dict, large_model,
postfix="_from_tfjs", use_custom_ops=use_custom_ops)
+ if 'outputs_as_nchw' in tfl_process_args:
+ for output_name in tfl_process_args['outputs_as_nchw']:
+ i = output_names_with_port.index(output_name)
+ onnx_tfjs_res[i] = np.transpose(onnx_tfjs_res[i], constants.NCHW_TO_NHWC)
self.assert_results_equal(tfjs_res, onnx_tfjs_res, rtol, atol, mtol, check_value, check_shape,
check_dtype=False)
diff --git a/tests/common.py b/tests/common.py
index 80144bb91..82ca09c49 100644
--- a/tests/common.py
+++ b/tests/common.py
@@ -9,7 +9,7 @@
import unittest
from collections import defaultdict
-from distutils.version import LooseVersion
+from packaging.version import Version
from parameterized import parameterized
import numpy as np
import tensorflow as tf
@@ -24,6 +24,8 @@
"check_onnxruntime_backend",
"check_tf_min_version",
"check_tf_max_version",
+ "check_tfjs_min_version",
+ "check_tfjs_max_version",
"skip_tf_versions",
"skip_tf_cpu",
"check_onnxruntime_min_version",
@@ -96,7 +98,7 @@ def _get_backend_version(self):
pass
if version:
- version = LooseVersion(version)
+ version = Version(version)
return version
def __str__(self):
@@ -176,7 +178,7 @@ def check_opset_after_tf_version(tf_version, required_opset, message=""):
""" Skip if tf_version > max_required_version """
config = get_test_config()
reason = _append_message("conversion requires opset {} after tf {}".format(required_opset, tf_version), message)
- skip = config.tf_version >= LooseVersion(tf_version) and config.opset < required_opset
+ skip = config.tf_version >= Version(tf_version) and config.opset < required_opset
return unittest.skipIf(skip, reason)
@@ -272,19 +274,42 @@ def requires_custom_ops(message=""):
can_import = False
return unittest.skipIf(not can_import, reason)
+def check_tfjs_max_version(max_accepted_version, message=""):
+ """ Skip if tfjs_version > max_required_version """
+ config = get_test_config()
+ reason = _append_message("conversion requires tensorflowjs <= {}".format(max_accepted_version), message)
+ try:
+ import tensorflowjs
+ can_import = True
+ except ModuleNotFoundError:
+ can_import = False
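+ # "and" short-circuits below, so tensorflowjs is only referenced when the import succeeded.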
+ return unittest.skipIf(can_import and not config.skip_tfjs_tests and \
+ Version(tensorflowjs.__version__) > Version(max_accepted_version), reason)
+
+def check_tfjs_min_version(min_required_version, message=""):
+ """ Skip if tjs_version < min_required_version """
+ config = get_test_config()
+ reason = _append_message("conversion requires tensorflowjs >= {}".format(min_required_version), message)
+ try:
+ import tensorflowjs
+ can_import = True
+ except ModuleNotFoundError:
+ can_import = False
+ return unittest.skipIf(can_import and not config.skip_tfjs_tests and \
+ Version(tensorflowjs.__version__) < Version(min_required_version), reason)
def check_tf_max_version(max_accepted_version, message=""):
""" Skip if tf_version > max_required_version """
config = get_test_config()
reason = _append_message("conversion requires tf <= {}".format(max_accepted_version), message)
- return unittest.skipIf(config.tf_version > LooseVersion(max_accepted_version), reason)
+ return unittest.skipIf(config.tf_version > Version(max_accepted_version), reason)
def check_tf_min_version(min_required_version, message=""):
""" Skip if tf_version < min_required_version """
config = get_test_config()
reason = _append_message("conversion requires tf >= {}".format(min_required_version), message)
- return unittest.skipIf(config.tf_version < LooseVersion(min_required_version), reason)
+ return unittest.skipIf(config.tf_version < Version(min_required_version), reason)
def skip_tf_versions(excluded_versions, message=""):
@@ -360,7 +385,7 @@ def check_onnxruntime_min_version(min_required_version, message=""):
config = get_test_config()
reason = _append_message("conversion requires onnxruntime >= {}".format(min_required_version), message)
return unittest.skipIf(config.is_onnxruntime_backend and
- config.backend_version < LooseVersion(min_required_version), reason)
+ config.backend_version < Version(min_required_version), reason)
def skip_caffe2_backend(message=""):
diff --git a/tests/keras2onnx_unit_tests/mock_keras2onnx/proto/__init__.py b/tests/keras2onnx_unit_tests/mock_keras2onnx/proto/__init__.py
index b70b53512..d8720fe62 100644
--- a/tests/keras2onnx_unit_tests/mock_keras2onnx/proto/__init__.py
+++ b/tests/keras2onnx_unit_tests/mock_keras2onnx/proto/__init__.py
@@ -2,7 +2,7 @@
import os
import tensorflow
-from distutils.version import StrictVersion
+from packaging.version import Version
# Rather than using ONNX protobuf definition throughout our codebase, we import ONNX protobuf definition here so that
# we can conduct quick fixes by overwriting ONNX functions without changing any lines elsewhere.
@@ -22,11 +22,15 @@ def _check_onnx_version():
def is_tensorflow_older_than(version_str):
- return StrictVersion(tensorflow.__version__.split('-')[0]) < StrictVersion(version_str)
+ return Version(tensorflow.__version__.split('-')[0]) < Version(version_str)
def is_tensorflow_later_than(version_str):
- return StrictVersion(tensorflow.__version__.split('-')[0]) > StrictVersion(version_str)
+ return Version(tensorflow.__version__.split('-')[0]) > Version(version_str)
+
+
+def python_keras_is_deprecated():
+ return is_tensorflow_later_than("2.5.0")
is_tf_keras = False
@@ -38,7 +42,10 @@ def is_tensorflow_later_than(version_str):
is_tf_keras = str_tk_keras != '0'
if is_tf_keras:
- from tensorflow.python import keras
+ if python_keras_is_deprecated():
+ from tensorflow import keras
+ else:
+ from tensorflow.python import keras
else:
try:
import keras
@@ -47,12 +54,15 @@ def is_tensorflow_later_than(version_str):
is_tf_keras = True
except ImportError:
is_tf_keras = True
- from tensorflow.python import keras
+ if python_keras_is_deprecated():
+ from tensorflow import keras
+ else:
+ from tensorflow.python import keras
def is_keras_older_than(version_str):
- return StrictVersion(keras.__version__.split('-')[0]) < StrictVersion(version_str)
+ return Version(keras.__version__.split('-')[0]) < Version(version_str)
def is_keras_later_than(version_str):
- return StrictVersion(keras.__version__.split('-')[0]) > StrictVersion(version_str)
+ return Version(keras.__version__.split('-')[0]) > Version(version_str)
diff --git a/tests/keras2onnx_unit_tests/test_layers.py b/tests/keras2onnx_unit_tests/test_layers.py
index 1e32ea5dd..7d4a32979 100644
--- a/tests/keras2onnx_unit_tests/test_layers.py
+++ b/tests/keras2onnx_unit_tests/test_layers.py
@@ -6,13 +6,18 @@
from mock_keras2onnx.proto.tfcompat import is_tf2, tensorflow as tf
from mock_keras2onnx.proto import (keras, is_tf_keras,
is_tensorflow_older_than, is_tensorflow_later_than,
- is_keras_older_than, is_keras_later_than)
+ is_keras_older_than, is_keras_later_than, python_keras_is_deprecated)
from test_utils import no_loops_in_tf2, all_recurrents_should_bidirectional
K = keras.backend
Activation = keras.layers.Activation
Add = keras.layers.Add
-advanced_activations = keras.layers.advanced_activations
+if python_keras_is_deprecated():
+ advanced_activations = keras.layers
+ layers_core = keras.layers
+else:
+ advanced_activations = keras.layers.advanced_activations
+ layers_core = keras.layers.core
AlphaDropout = keras.layers.AlphaDropout
Average = keras.layers.Average
AveragePooling1D = keras.layers.AveragePooling1D
@@ -72,7 +77,7 @@
LSTM_CLASSES = [(LSTM, LSTMCell, "v1")]
RNN_CLASSES = [SimpleRNN, GRU, LSTM]
-if is_tf_keras and is_tensorflow_later_than("1.14.0"):
+if is_tf_keras and is_tensorflow_later_than("1.14.0") and not python_keras_is_deprecated():
# Add the TF v2 compatibility layers (available after TF 1.14)
from tensorflow.python.keras.layers import recurrent_v2
GRU_CLASSES.append((recurrent_v2.GRU, "v2"))
@@ -1259,7 +1264,7 @@ def test_conv3d_transpose(conv3trans_runner):
def test_flatten(runner):
model = keras.Sequential()
- model.add(keras.layers.core.Flatten(input_shape=(3, 2)))
+ model.add(layers_core.Flatten(input_shape=(3, 2)))
model.add(Dense(3))
onnx_model = convert_keras(model, model.name)
@@ -1303,7 +1308,7 @@ def test_flatten2(runner):
def test_reshape(runner):
model = keras.Sequential()
- model.add(keras.layers.core.Reshape((2, 3), input_shape=(3, 2)))
+ model.add(layers_core.Reshape((2, 3), input_shape=(3, 2)))
onnx_model = convert_keras(model, model.name)
data = np.array([[[1, 2], [3, 4], [5, 6]]]).astype(np.float32)
@@ -1314,7 +1319,7 @@ def test_reshape(runner):
def test_permute(runner):
model = keras.Sequential()
- model.add(keras.layers.core.Permute((2, 1), input_shape=(3, 2)))
+ model.add(layers_core.Permute((2, 1), input_shape=(3, 2)))
onnx_model = convert_keras(model, model.name)
data = np.array([[[1, 2], [3, 4], [5, 6]]]).astype(np.float32)
@@ -1325,7 +1330,7 @@ def test_permute(runner):
def test_repeat_vector(runner):
model = keras.Sequential()
- model.add(keras.layers.core.RepeatVector(3, input_shape=(4,)))
+ model.add(layers_core.RepeatVector(3, input_shape=(4,)))
onnx_model = convert_keras(model, model.name)
data = _asarray(1, 2, 3, 4)
@@ -1596,11 +1601,11 @@ def test_crop(misc_conv_runner):
misc_conv_runner(layer, ishape, opset_)
for data_format_ in ['channels_last', 'channels_first']:
- ishape = (20, 20, 1)
+ ishape = (20, 20, 10)
for crop_v in [2, (2, 2), ((1, 2), (2, 3))]:
layer = Cropping2D(cropping=crop_v, data_format=data_format_)
misc_conv_runner(layer, ishape, opset_)
- ishape = (20, 20, 20, 1)
+ ishape = (20, 20, 20, 10)
for crop_v in [2, (2, 3, 4), ((1, 2), (2, 3), (3, 5))]:
layer = Cropping3D(cropping=crop_v, data_format=data_format_)
misc_conv_runner(layer, ishape, opset_)
diff --git a/tests/models/regression/tflite/test_api_model.tflite b/tests/models/regression/tflite/test_api_model.tflite
new file mode 100644
index 000000000..017255a82
Binary files /dev/null and b/tests/models/regression/tflite/test_api_model.tflite differ
diff --git a/tests/run_pretrained_models.py b/tests/run_pretrained_models.py
index 29f88af24..626cba0d5 100644
--- a/tests/run_pretrained_models.py
+++ b/tests/run_pretrained_models.py
@@ -17,7 +17,7 @@
import zipfile
import random
from collections import namedtuple
-from distutils.version import LooseVersion
+from packaging.version import Version
import yaml
@@ -525,7 +525,7 @@ def run_tflite():
inputs[k] = np_value.astype(expected_dtype)
else:
if expected_dtype == "string":
- inputs[k] = self.make_input(v).astype(np.str).astype(np.object)
+ inputs[k] = self.make_input(v).astype(str).astype(object)
else:
inputs[k] = self.make_input(v).astype(expected_dtype)
@@ -789,7 +789,7 @@ def main():
continue
if t.tf_min_version:
- if tf_utils.get_tf_version() < LooseVersion(str(t.tf_min_version)):
+ if tf_utils.get_tf_version() < Version(str(t.tf_min_version)):
logger.info("Skip %s: %s %s", test, "Min TF version needed:", t.tf_min_version)
continue
diff --git a/tests/run_pretrained_models.yaml b/tests/run_pretrained_models.yaml
index 14d0acfb7..38fa639fb 100644
--- a/tests/run_pretrained_models.yaml
+++ b/tests/run_pretrained_models.yaml
@@ -337,7 +337,7 @@ ssd_mobilenet_v3_large_coco:
opset_constraints:
"onnx":
"min": 10
- "max": 15
+ "max": 17
input_get: get_beach
inputs:
"normalized_input_image_tensor:0": [1, 320, 320, 3]
@@ -432,7 +432,7 @@ faster_rcnn_inception_v2_coco:
opset_constraints:
"onnx":
"min": 11
- "max": 15
+ "max": 17
input_get: get_beach
inputs:
"image_tensor:0": [1, 224, 224, 3]
diff --git a/tests/test_api.py b/tests/test_api.py
index 3bf170f03..2b25b1b64 100644
--- a/tests/test_api.py
+++ b/tests/test_api.py
@@ -173,6 +173,23 @@ def func(foo, a, x, b, w):
res_onnx = self.run_onnxruntime(output_path, {"x": x, "w": w}, output_names)
self.assertAllClose(res_tf, res_onnx[0], rtol=1e-5, atol=1e-5)
+ @check_tf_min_version("2.0")
+ def test_function_nparray(self):
+ @tf.function
+ def func(x):
+ return tf.math.sqrt(x)
+
+ output_path = os.path.join(self.test_data_directory, "model.onnx")
+ x = np.asarray([1.0, 2.0])
+
+ res_tf = func(x)
+ spec = np.asarray([[1.0, 2.0]])
+ model_proto, _ = tf2onnx.convert.from_function(func, input_signature=spec,
+ opset=self.config.opset, output_path=output_path)
+ output_names = [n.name for n in model_proto.graph.output]
+ res_onnx = self.run_onnxruntime(output_path, {'x': x}, output_names)
+ self.assertAllClose(res_tf, res_onnx[0], rtol=1e-5, atol=1e-5)
+
@check_tf_min_version("1.15")
def _test_graphdef(self):
def func(x, y):
@@ -214,6 +231,35 @@ def test_graphdef(self):
self.assertTrue(output_names[0] == "pred")
self.assertAllClose([2.1193342], oy[0], rtol=0.1, atol=0.1)
+ @check_tf_min_version("2.0")
+ def test_tflite(self):
+ output_path = os.path.join(self.test_data_directory, "model.onnx")
+
+ x_val = np.array([1.0, 2.0, -3.0, -4.0], dtype=np.float32).reshape((2, 2))
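+ # The checked-in regression model adds its input to itself, so tf.add(x_val, x_val) below is the expected result.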
+ model_proto, _ = tf2onnx.convert.from_tflite("tests/models/regression/tflite/test_api_model.tflite",
+ input_names=["input"], output_names=["output"],
+ output_path=output_path)
+ actual_output_names = [n.name for n in model_proto.graph.output]
+ oy = self.run_onnxruntime(output_path, {"input": x_val}, actual_output_names)
+
+ self.assertTrue(actual_output_names[0] == "output")
+ exp_result = tf.add(x_val, x_val)
+ self.assertAllClose(exp_result, oy[0], rtol=0.1, atol=0.1)
+
+ @check_tf_min_version("2.0")
+ def test_tflite_without_input_output_names(self):
+ output_path = os.path.join(self.test_data_directory, "model.onnx")
+
+ x_val = np.array([1.0, 2.0, -3.0, -4.0], dtype=np.float32).reshape((2, 2))
+ model_proto, _ = tf2onnx.convert.from_tflite("tests/models/regression/tflite/test_api_model.tflite",
+ output_path=output_path)
+ actual_input_names = [n.name for n in model_proto.graph.input]
+ actual_output_names = [n.name for n in model_proto.graph.output]
+ oy = self.run_onnxruntime(output_path, {actual_input_names[0]: x_val}, output_names=None)
+
+ self.assertTrue(actual_output_names[0] == "output")
+ exp_result = tf.add(x_val, x_val)
+ self.assertAllClose(exp_result, oy[0], rtol=0.1, atol=0.1)
if __name__ == '__main__':
unittest_main()
diff --git a/tests/test_backend.py b/tests/test_backend.py
index 144f7ca76..8876da855 100755
--- a/tests/test_backend.py
+++ b/tests/test_backend.py
@@ -5,11 +5,11 @@
import os
import unittest
-from distutils.version import LooseVersion
from itertools import product
import numpy as np
from numpy.testing import assert_almost_equal
+from packaging.version import Version
import tensorflow as tf
from tensorflow.python.ops import lookup_ops
@@ -72,7 +72,7 @@
matrix_diag_part = tf.compat.v1.matrix_diag_part
fake_quant_with_min_max_args = tf.quantization.fake_quant_with_min_max_args
fake_quant_with_min_max_vars = tf.quantization.fake_quant_with_min_max_vars
-elif LooseVersion(tf.__version__) >= "1.13":
+elif Version(tf.__version__) >= Version("1.13"):
conv2d_backprop_input = tf.compat.v1.nn.conv2d_backprop_input
conv3d_transpose = tf.compat.v1.nn.conv3d_transpose
multinomial = tf.compat.v1.random.multinomial
@@ -86,7 +86,7 @@
quantize_and_dequantize = tf.compat.v1.quantization.quantize_and_dequantize
resize_nearest_neighbor = tf.compat.v1.image.resize_nearest_neighbor
resize_bilinear = tf.compat.v1.image.resize_bilinear
- if LooseVersion(tf.__version__) >= "1.14":
+ if Version(tf.__version__) >= Version("1.14"):
resize_bilinear_v2 = tf.compat.v2.image.resize
is_nan = tf.math.is_nan
is_inf = tf.math.is_inf
@@ -712,7 +712,7 @@ def func(x):
graph_validator=lambda g: (check_op_count(g, "RandomUniform", 0) and
check_op_count(g, "RandomUniformLike", 0)))
- def test_conv2d_with_input_transpose(self):
+ def test_inputs_as_nchw_arg(self):
x_shape = [2, 32, 32, 3]
kernel_shape = [3, 3, 3, 3]
x_val = make_xval(x_shape)
@@ -725,6 +725,17 @@ def func(x):
process_args={"inputs_as_nchw": [_INPUT]},
onnx_feed_dict={_INPUT: x_val_for_onnx})
+ def test_outputs_as_nchw_arg(self):
+ x_shape = [2, 32, 32, 3]
+ kernel_shape = [3, 3, 3, 3]
+ x_val = make_xval(x_shape)
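+ # outputs_as_nchw requests NCHW layout for the named outputs; the test harness transposes them back to NHWC for comparison.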
+ def func(x):
+ kernel = tf.constant(make_xval(kernel_shape), dtype=tf.float32, name='kernel')
+ conv = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding="SAME")
+ return tf.identity(conv, name=_TFOUTPUT)
+ self._run_test_case(func, [_OUTPUT], {_INPUT: x_val}, rtol=1e-05,
+ process_args={"outputs_as_nchw": [_OUTPUT]})
+
@skip_tflite("TFlite adds ops that obscure pattern")
@check_tf_min_version("1.15")
def test_conv1d_dilations_rewriter(self):
@@ -1309,8 +1320,8 @@ def func(x1):
@check_onnxruntime_incompatibility("Add")
def test_logicaland(self):
- x_val1 = np.array([1, 0, 1, 1], dtype=np.bool).reshape((2, 2))
- x_val2 = np.array([0, 1, 1, 1], dtype=np.bool).reshape((2, 2))
+ x_val1 = np.array([1, 0, 1, 1], dtype=bool).reshape((2, 2))
+ x_val2 = np.array([0, 1, 1, 1], dtype=bool).reshape((2, 2))
def func(x1, x2):
mi = tf.logical_and(x1, x2)
return tf.identity(mi, name=_TFOUTPUT)
@@ -3222,6 +3233,8 @@ def func(x, y):
y_val = np.array(i / 10, np.float32)
self._run_test_case(func, [_OUTPUT], {_INPUT: x_val, _INPUT1: y_val}, rtol=1e-6, atol=2e-5)
+ # https://github.com/microsoft/onnxruntime/issues/12302
+ @skip_onnxruntime_backend("Resize op gives inaccurate results in Cubic mode with ORT 1.12")
@check_tf_min_version("2.0", "Results are slightly different in tf1")
@check_opset_min_version(11, "resize bicubic")
def test_resize_bicubic(self):
@@ -3492,9 +3505,9 @@ def func(x):
def test_where_bool(self):
x_val = np.array([1, 2, -3, 4, -5], dtype=np.float32)
true_result = np.array([True, False, True, False, True],
- dtype=np.bool)
+ dtype=bool)
false_result = np.array([False, True, False, True, True],
- dtype=np.bool)
+ dtype=bool)
def func(x):
picks = tf.where(x > -1, true_result, false_result)
return tf.identity(picks, name=_TFOUTPUT)
@@ -3757,21 +3770,21 @@ def func(input_1, input_2):
self._run_test_case(func, [_OUTPUT], {_INPUT: input_val_1, _INPUT1: input_val_2}, rtol=1e-4)
def test_logical_not(self):
- input_val = np.random.randint(0, 2, (10, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (10, 20)).astype(bool)
def func(x):
res = tf.logical_not(x)
return tf.identity(res, name=_TFOUTPUT)
self._run_test_case(func, [_OUTPUT], {_INPUT: input_val})
def test_reduce_all(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=[0], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(input_x):
res = tf.reduce_all(input_tensor=input_x, keepdims=True)
res1 = tf.reduce_all(input_tensor=input_x, axis=[0], keepdims=True)
@@ -3779,14 +3792,14 @@ def func(input_x):
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
def test_reduce_any(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[0], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=True)
res1 = tf.reduce_any(input_tensor=x, axis=[0], keepdims=True)
@@ -3795,14 +3808,14 @@ def func(x):
@check_opset_min_version(11, "ReduceMin")
def test_reduce_all_negative_axis(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=[-1], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(input_x):
res = tf.reduce_all(input_tensor=input_x, keepdims=True)
res1 = tf.reduce_all(input_tensor=input_x, axis=[-1], keepdims=True)
@@ -3811,14 +3824,14 @@ def func(input_x):
@check_opset_min_version(11, "ReduceSum")
def test_reduce_any_negative_axis(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[-1], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=True)
res1 = tf.reduce_any(input_tensor=x, axis=[-1], keepdims=True)
@@ -3828,7 +3841,7 @@ def func(x):
@check_opset_min_version(11, "ReduceSum")
@check_tf_min_version("1.15")
def test_reduce_any_empty_axis(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[], keepdims=False)
@@ -3836,7 +3849,7 @@ def func(x):
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})
def test_reduce_all_scalar_axis(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=0, keepdims=False)
@@ -3846,7 +3859,7 @@ def func(x):
@check_opset_min_version(13, "ReduceSum")
@check_tf_min_version("1.15")
def test_reduce_any_nonconst_axis(self):
- input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
+ input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
y_val = np.array([1], np.int32)
def func(x, y):
res = tf.reduce_any(input_tensor=x, axis=y, keepdims=False)
@@ -3876,6 +3889,19 @@ def func(x, y):
self._run_test_case(func, [_OUTPUT], {_INPUT: input_x > 0.5, _INPUT1: input_y})
+ @check_opset_min_version(9, "ConstantOfShape")
+ def test_zeros_like_opset9(self):
+ input_x = np.random.random_sample([3, 16, 16]).astype(np.float32)
+ input_y = np.array([16, 16, 3]).astype(np.int64)
+
+ def func(x, y):
+ z = tf.reshape(x, y)
+ return tf.zeros_like(z, name=_TFOUTPUT)
+
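+ # The as_session run below checks that the dynamically shaped zeros_like lowers to a single ConstantOfShape node.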
+ self._run_test_case(func, [_OUTPUT], {_INPUT: input_x, _INPUT1: input_y})
+ self._run_test_case(func, [_OUTPUT], {_INPUT: input_x.astype(np.int32), _INPUT1: input_y}, as_session=True,
+ graph_validator=lambda g: check_op_count(g, "ConstantOfShape", 1, disabled=False))
+
@check_opset_min_version(9, "is_nan")
def test_isnan(self):
# only compatible with dtype `float32`
@@ -4698,6 +4724,17 @@ def func(x, y, z):
return tf.identity(x_, name=_TFOUTPUT)
self._run_test_case(func, [_OUTPUT], {_INPUT: x_val, _INPUT1: y_val, _INPUT2: z_val})
+ @check_opset_min_version(16, "ScatterND")
+ def test_scatternd_add(self):
+ x_val = np.array([10, 20, 30, 40], dtype=np.int32).reshape((4))
+ y_val = np.array([0, 2], dtype=np.int64).reshape((2, 1))
+ z_val = np.array([20, 30], dtype=np.int32).reshape((2))
+
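+ # tf.tensor_scatter_nd_add maps to ONNX ScatterND with reduction="add", which was introduced in opset 16.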
+ def func(x, y, z):
+ x_ = tf.tensor_scatter_nd_add(x, y, z)
+ return tf.identity(x_, name=_TFOUTPUT)
+ self._run_test_case(func, [_OUTPUT], {_INPUT: x_val, _INPUT1: y_val, _INPUT2: z_val})
+
@check_opset_min_version(11, "ScatterND")
def test_scatternd_1d(self):
x_val = np.array([4, 3, 1, 7], dtype=np.int32).reshape((4, 1))
@@ -5219,7 +5256,7 @@ def func(value, filters, output_shape):
def test_hashtable_lookup(self):
filnm = "vocab.tmp"
words = ["apple", "pear", "banana", "cherry", "grape"]
- query = np.array(['cherry'], dtype=np.object)
+ query = np.array(['cherry'], dtype=object)
with open(filnm, "w") as f:
for word in words:
f.write(word + "\n")
@@ -5236,7 +5273,7 @@ def func(query_holder):
def test_hashtable_lookup_const(self):
filnm = "vocab.tmp"
words = ["apple", "pear", "banana", "cherry ♥", "grape"]
- query_val = np.array(['cherry ♥', 'banana'], dtype=np.object).reshape((1, 2, 1))
+ query_val = np.array(['cherry ♥', 'banana'], dtype=object).reshape((1, 2, 1))
with open(filnm, "w", encoding='UTF-8') as f:
for word in words:
f.write(word + "\n")
@@ -5253,7 +5290,7 @@ def func():
def test_hashtable_size(self):
filnm = "vocab.tmp"
words = ["apple", "pear", "banana", "cherry", "grape"]
- query = np.array(['cherry'], dtype=np.object)
+ query = np.array(['cherry'], dtype=object)
with open(filnm, "w") as f:
for word in words:
f.write(word + "\n")
@@ -5842,10 +5879,10 @@ def func(x):
return tf.identity(op_, name=_TFOUTPUT)
# tf gets this wrong and returns fp32 instead of int
- x_val = np.array("123", dtype=np.object)
+ x_val = np.array("123", dtype=object)
self._run_test_case(func, [_OUTPUT], {_INPUT: x_val})
- x_val = np.array("123.1", dtype=np.object)
+ x_val = np.array("123.1", dtype=object)
# can't check the values because in onnx they are padded with 0, in tf they are not
self._run_test_case(func, [_OUTPUT], {_INPUT: x_val}, check_value=False)
@@ -5862,6 +5899,32 @@ def func(x):
x_val = np.array([0.5, 1.0, -0.5, -1.0], dtype=np.float32).reshape((2, 2))
self._run_test_case(func, [_OUTPUT], {_INPUT: x_val})
+ @skip_tfjs("not supported in tfjs")
+ def test_l2normalization(self):
+ def func(x):
+ op_ = tf.math.l2_normalize(x)
+ return tf.identity(op_, name=_TFOUTPUT)
+
+ x_val = make_xval([3, 4])
+ self._run_test_case(func, [_OUTPUT], {_INPUT: x_val})
+
+ @check_opset_min_version(10, "Slice")
+ def test_addition_two_newaxis_simultaneously(self):
+ def func(x):
+ op = x[..., tf.newaxis, tf.newaxis]
+ return tf.identity(op, name=_TFOUTPUT)
+
+ x_val = make_xval([2, 3])
+ self._run_test_case(func, [_OUTPUT], {_INPUT: x_val})
+
+ @check_opset_min_version(10, "Slice")
+ def test_addition_three_newaxis_simultaneously(self):
+ def func(x):
+ op = x[..., tf.newaxis, tf.newaxis, tf.newaxis]
+ return tf.identity(op, name=_TFOUTPUT)
+
+ x_val = make_xval([2, 3])
+ self._run_test_case(func, [_OUTPUT], {_INPUT: x_val})
if __name__ == '__main__':
unittest_main()
diff --git a/tests/test_cond.py b/tests/test_cond.py
index 8d74c8dcc..7fa8c5dbc 100644
--- a/tests/test_cond.py
+++ b/tests/test_cond.py
@@ -118,6 +118,7 @@ def false_fn():
output_names_with_port = ["output:0"]
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port)
+ @check_tfjs_max_version("3.15", "fails when tfjs version > 3.15")
def test_cond_in_while_loop(self):
def func(i, inputs):
inputs_2 = tf.identity(inputs)
diff --git a/tests/test_internals.py b/tests/test_internals.py
index acd91d0cd..732616a9f 100644
--- a/tests/test_internals.py
+++ b/tests/test_internals.py
@@ -107,10 +107,10 @@ def test_insert_node2(self):
def test_make_const_string(self):
graph_proto = self.sample_net()
g = GraphUtil.create_graph_from_onnx_graph(graph_proto)
- arr1 = np.array("test", np.object)
- arr2 = np.array([["A", "B"], ["C", "D"]], np.object)
- arr3 = np.array(b"test", np.object)
- arr4 = np.array([[b"A", b"B"], [b"C", b"D"]], np.object)
+ arr1 = np.array("test", object)
+ arr2 = np.array([["A", "B"], ["C", "D"]], object)
+ arr3 = np.array(b"test", object)
+ arr4 = np.array([[b"A", b"B"], [b"C", b"D"]], object)
const1 = g.make_const("const1", arr1)
const2 = g.make_const("const2", arr2)
const3 = g.make_const("const3", arr3)
diff --git a/tests/test_loops.py b/tests/test_loops.py
index b0b8f9213..410bee378 100644
--- a/tests/test_loops.py
+++ b/tests/test_loops.py
@@ -7,7 +7,8 @@
import tensorflow as tf
from backend_test_base import Tf2OnnxBackendTestBase
-from common import unittest_main, check_tf_min_version, check_tf_max_version, check_onnxruntime_min_version
+from common import unittest_main, check_tf_min_version, check_tf_max_version, \
+ check_onnxruntime_min_version, check_tfjs_max_version
from tf2onnx.tf_loader import is_tf2
@@ -66,6 +67,7 @@ def func(i):
x_val = np.array(3, dtype=np.int32)
self.run_test_case(func, {_INPUT: x_val}, [], [_OUTPUT], rtol=1e-06)
+ @check_tfjs_max_version("3.15", "fails when tfjs version > 3.15")
def test_while_loop_with_ta_write(self):
def func(i):
output_ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
@@ -159,6 +161,7 @@ def b(i, res, res2):
output_names_with_port = ["i:0", "x:0", "y:0"]
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, rtol=1e-06)
+ @check_tfjs_max_version("3.15", "fails when tfjs version > 3.15")
def test_while_loop_with_ta_read_and_write(self):
def func(i, inputs):
inputs_2 = tf.identity(inputs)
@@ -183,6 +186,7 @@ def b(i, out_ta):
output_names_with_port = ["i:0", "output_ta:0"]
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, rtol=1e-06)
+ @check_tfjs_max_version("3.15", "fails when tfjs version > 3.15")
def test_while_loop_with_multi_scan_outputs(self):
def func(i, inputs1, inputs2):
inputs1_ = tf.identity(inputs1)
@@ -217,6 +221,7 @@ def b(i, out_ta, out_ta2):
output_names_with_port = ["i:0", "output_ta:0", "output_ta2:0"]
self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, rtol=1e-06)
+ @check_tfjs_max_version("3.15", "fails when tfjs version > 3.15")
@check_onnxruntime_min_version(
"0.5.0",
"disable this case due to onnxruntime loop issue: https://github.com/microsoft/onnxruntime/issues/1272"
diff --git a/tests/test_lstm.py b/tests/test_lstm.py
index 736935285..a79829e87 100644
--- a/tests/test_lstm.py
+++ b/tests/test_lstm.py
@@ -751,6 +751,28 @@ def func(x):
return tf.identity(y[0], name="output"), tf.identity(y[1], name="output1")
self.run_test_case(func, {"input:0": x_val}, [], ["output:0", "output1:0"], rtol=1e-05, atol=1e-06)
+ @check_tf_min_version("2.0")
+ @skip_tf_versions("2.1", "Bug in TF 2.1")
+ def test_keras_lstm_recurrent_activation_is_hard_sigmoid(self):
+ in_shape = [10, 3]
+ x_val = np.random.uniform(size=[2, 10, 3]).astype(np.float32)
+
+ model_in = tf.keras.layers.Input(tuple(in_shape), batch_size=2)
+ x = tf.keras.layers.LSTM(
+ units=5,
+ return_sequences=True,
+ return_state=True,
+ kernel_initializer=tf.random_uniform_initializer(0.0, 1.0, seed=42),
+ recurrent_initializer=tf.random_uniform_initializer(0.0, 1.0, seed=44),
+ bias_initializer=tf.random_uniform_initializer(0.0, 1.0, seed=43),
+ recurrent_activation="hard_sigmoid"
+ )(model_in)
+ model = tf.keras.models.Model(inputs=model_in, outputs=x)
+
+ def func(x):
+ y = model(x)
+ return tf.identity(y[0], name="output"), tf.identity(y[1], name="output1")
+ self.run_test_case(func, {"input:0": x_val}, [], ["output:0", "output1:0"], rtol=1e-05, atol=1e-06)
if __name__ == '__main__':
unittest_main()
diff --git a/tests/test_onnx_shape_inference.py b/tests/test_onnx_shape_inference.py
index 6272fa2ff..244e1b6bb 100644
--- a/tests/test_onnx_shape_inference.py
+++ b/tests/test_onnx_shape_inference.py
@@ -353,7 +353,7 @@ def test_if(self):
sub = else_subgraph.make_node("Sub", [INPUT1, INPUT3])
else_subgraph.add_graph_output(sub.output[0])
- cond = graph.make_const("cond", np.array(True, dtype=np.bool))
+ cond = graph.make_const("cond", np.array(True, dtype=bool))
branches = {"then_branch": then_subgraph, "else_branch": else_subgraph}
if_node = graph.make_node("If", [cond.output[0]], branches=branches)
@@ -381,7 +381,7 @@ def test_loop(self):
subgraph.add_graph_output(out.output[0])
max_iter = graph.make_const("max_iter", np.array([10], dtype=np.int64))
- cond_const = graph.make_const("cond_const", np.array([True], dtype=np.bool))
+ cond_const = graph.make_const("cond_const", np.array([True], dtype=bool))
branches = {"body": subgraph}
loop = graph.make_node("Loop", [max_iter.output[0], cond_const.output[0], INPUT1],
output_count=2, branches=branches)
diff --git a/tests/test_optimizers.py b/tests/test_optimizers.py
index 7ab159d0e..2495bab58 100644
--- a/tests/test_optimizers.py
+++ b/tests/test_optimizers.py
@@ -145,7 +145,7 @@ def test_transpose_with_split(self, input_shape, perm, inner_perm):
((1, -1), (1, 1710), (1710,), [1, 0]),
((3, 1, 1, 5, -1), (3, 1, 1, 5, 6), (3, 5, 6), [0, 2, 3, 4, 1]),
])
- @check_opset_max_version(12, "split attribute changed to input in opset 13")
+ @check_opset_max_version(12, "split attribute changed to input since opset 13")
def test_transpose_with_split_dynamic_shape(self, input_shape, specific_input, output_shape, perm):
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
node2 = helper.make_node("Split", ["Y"], ["Z"], axis=1, split=[1], name="split")
@@ -162,6 +162,31 @@ def test_transpose_with_split_dynamic_shape(self, input_shape, specific_input, o
self.run_transpose_compare(["B"], {"X": np.random.randn(*specific_input).astype(np.float32)},
model_proto, remaining_transpose_num=0)
+ @parameterized.expand([
+ ((3, 1, 1), (1, 1, 3), 1, [0, 2, 3, 1]),
+ ((256, 1, 1), (1, 1, 256), 1, [0, 2, 3, 1])
+ ])
+ @check_opset_min_version(13, "split attribute changed to input since opset 13")
+ def test_transpose_with_split_opset13(self, input_shape, output_shape, split_val, perm):
+ unsqueeze_axes = self._make_onnx_const(np.array([0], dtype=np.int64), "axes1")
+ unsqueeze = helper.make_node("Unsqueeze", ["X", "axes1"], ["Y"], name="unsqueeze")
+ trans = helper.make_node("Transpose", ["Y"], ["Z"], perm=perm, name="trans")
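+ # Since opset 13, Split takes its split sizes as a second input rather than an attribute.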
+ split_attr = self._make_onnx_const(np.array([split_val], dtype=np.int64), "split_attr")
+ split = helper.make_node("Split", ["Z", "split_attr"], ["A"], axis=0, name="split")
+ squeeze_axes = self._make_onnx_const(np.array([1], dtype=np.int64), "axes2")
+ squeeze = helper.make_node("Squeeze", ["A", "axes2"], ["B"], name="squeeze")
+
+ graph = helper.make_graph(
+ [unsqueeze_axes, unsqueeze, trans, split_attr, split, squeeze_axes, squeeze],
+ "test_transpose_with_split_opset13",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("B", TensorProto.FLOAT, output_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(["B"], {"X": np.random.randn(*input_shape).astype(np.float32)},
+ model_proto, remaining_transpose_num=0)
+
@parameterized.expand([
((2, 3, 4), [2, 0, 1], [1, 2, 0]),
((2, 3, 4, 5), [0, 2, 3, 1], [0, 3, 1, 2]),
@@ -717,7 +742,7 @@ def test_transpose_sqrt(self, shape, perm_input, perm_output):
((1, 3, 4, 5), (4, 5, 3), [0, 2, 3, 1], [1, 2, 0]),
((1, 3, 4, 5, 6), (4, 5, 6, 3), [0, 2, 3, 4, 1], [1, 2, 3, 0]),
])
- @check_opset_max_version(12, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_max_version(12, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze1(self, input_shape, output_shape, perm, expected_perm):
# squeeze the first dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -768,7 +793,7 @@ def test_transpose_with_unsqueeze(self, input_shape, output_shape, perm, axes_va
((1, 3, 4, 5), (4, 5, 3), [0, 2, 3, 1], [1, 2, 0]),
((1, 3, 4, 5, 6), (4, 5, 6, 3), [0, 2, 3, 4, 1], [1, 2, 3, 0]),
])
- @check_opset_min_version(13, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_min_version(13, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze1_13(self, input_shape, output_shape, perm, expected_perm):
# squeeze the first dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -791,7 +816,7 @@ def test_transpose_with_squeeze1_13(self, input_shape, output_shape, perm, expec
((3, 4, 1, 5), (3, 5, 4), [0, 2, 3, 1], [0, 2, 1]),
((3, 4, 1, 5, 6), (3, 5, 6, 4), [0, 2, 3, 4, 1], [0, 2, 3, 1]),
])
- @check_opset_max_version(12, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_max_version(12, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze2(self, input_shape, output_shape, perm, expected_perm):
# squeeze the second dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -813,7 +838,7 @@ def test_transpose_with_squeeze2(self, input_shape, output_shape, perm, expected
((3, 4, 1, 5), (3, 5, 4), [0, 2, 3, 1], [0, 2, 1]),
((3, 4, 1, 5, 6), (3, 5, 6, 4), [0, 2, 3, 4, 1], [0, 2, 3, 1]),
])
- @check_opset_min_version(13, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_min_version(13, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze2_13(self, input_shape, output_shape, perm, expected_perm):
# squeeze the second dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -836,7 +861,7 @@ def test_transpose_with_squeeze2_13(self, input_shape, output_shape, perm, expec
((3, 1, 4, 5), (3, 4, 5), [0, 2, 3, 1]),
((3, 1, 4, 5, 6), (3, 4, 5, 6), [0, 2, 3, 4, 1]),
])
- @check_opset_max_version(12, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_max_version(12, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze3(self, input_shape, output_shape, perm):
# squeeze the last dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -857,7 +882,7 @@ def test_transpose_with_squeeze3(self, input_shape, output_shape, perm):
((3, 1, 4, 5), (3, 4, 5), [0, 2, 3, 1]),
((3, 1, 4, 5, 6), (3, 4, 5, 6), [0, 2, 3, 4, 1]),
])
- @check_opset_min_version(13, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_min_version(13, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze3_13(self, input_shape, output_shape, perm):
# squeeze the last dim
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -879,7 +904,7 @@ def test_transpose_with_squeeze3_13(self, input_shape, output_shape, perm):
((3, 1, 1, 5), (3, 5), [0, 2, 3, 1]),
((3, 1, 1, 5, 4), (3, 5, 4), [0, 2, 3, 4, 1]),
])
- @check_opset_max_version(12, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_max_version(12, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze4(self, input_shape, output_shape, perm):
# squeeze the two dims
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -900,7 +925,7 @@ def test_transpose_with_squeeze4(self, input_shape, output_shape, perm):
((3, 1, 1, 5), (3, 5), [0, 2, 3, 1]),
((3, 1, 1, 5, 4), (3, 5, 4), [0, 2, 3, 4, 1]),
])
- @check_opset_min_version(13, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_min_version(13, "Squeeze/Unsqueeze changed since opset 13")
def test_transpose_with_squeeze4_13(self, input_shape, output_shape, perm):
# squeeze the two dims
node1 = helper.make_node("Transpose", ["X"], ["Y"], perm=perm, name="trans")
@@ -963,7 +988,7 @@ def _define_loop_graph(external_inputs):
def _make_loop(external_inputs, outputs):
trip_cnt = self._make_onnx_const(np.array(10, dtype=np.int64), "trip_cnt")
- cond = self._make_onnx_const(np.array(True, dtype=np.bool), "cond")
+ cond = self._make_onnx_const(np.array(True, dtype=bool), "cond")
sub_graph = _define_loop_graph(external_inputs)
loop_node = helper.make_node("Loop", ["trip_cnt", "cond", "cond"], outputs,
name="loop", body=sub_graph)
@@ -1369,6 +1394,130 @@ def test_transpose_argmax(self):
self.run_transpose_compare(["res"], {"X": np.random.randn(*input_shape).astype(np.float32)},
model_proto, remaining_transpose_num=0)
+ @check_opset_max_version(
+ 12, "Before opset 13, Softmax coerced its inputs to 2D and can thus only be optimized for certain permutations"
+ )
+ def test_transpose_softmax_valid_perm(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("Softmax", ["Y"], ["Z"], axis=1, name="softmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-softmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=0
+ )
+
+ @check_opset_max_version(
+ 12, "Before opset 13, Softmax coerced its inputs to 2D and can thus only be optimized for certain permutations"
+ )
+ def test_transpose_softmax_invalid_perm(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("Softmax", ["Y"], ["Z"], axis=3, name="softmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-softmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=2
+ )
+
+ @check_opset_min_version(13, "Softmax can be optimized for all permutations since opset 13")
+ def test_transpose_softmax_13(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("Softmax", ["Y"], ["Z"], axis=3, name="softmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-softmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=0
+ )
+
+ @check_opset_max_version(
+ 12,
+ "Before opset 13, LogSoftmax coerced its inputs to 2D and can thus only be optimized for certain permutations",
+ )
+ def test_transpose_logsoftmax_valid_perm(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("LogSoftmax", ["Y"], ["Z"], axis=1, name="logsoftmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-logsoftmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=0
+ )
+
+ @check_opset_max_version(
+ 12,
+ "Before opset 13, LogSoftmax coerced its inputs to 2D and can thus only be optimized for certain permutations",
+ )
+ def test_transpose_logsoftmax_invalid_perm(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("LogSoftmax", ["Y"], ["Z"], axis=3, name="logsoftmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-logsoftmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=2
+ )
+
+ @check_opset_min_version(13, "LogSoftmax can be optimized for all permutations since opset 13")
+ def test_transpose_logsoftmax_13(self):
+ input_shape = [4, 4, 4, 4]
+ node0 = helper.make_node("Transpose", ["X"], ["Y"], perm=[0, 2, 3, 1], name="trans_1")
+ node1 = helper.make_node("LogSoftmax", ["Y"], ["Z"], axis=3, name="logsoftmax")
+ node2 = helper.make_node("Transpose", ["Z"], ["res"], perm=[0, 3, 1, 2], name="trans_2")
+
+ graph = helper.make_graph(
+ [node0, node1, node2],
+ "transpose-logsoftmax-test",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, input_shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, input_shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_transpose_compare(
+ ["res"], {"X": np.random.randn(*input_shape).astype(np.float32)}, model_proto, remaining_transpose_num=0
+ )
+
def test_transpose_tile(self):
input_shape = [1, 2, 3, 4]
@@ -1630,7 +1779,7 @@ def test_identity_in_subgraph_non_graph_output(self):
),
)
- cond_value = np.array(True, dtype=np.bool)
+ cond_value = np.array(True, dtype=bool)
node3 = helper.make_node(
'Constant',
inputs=[],
@@ -1639,7 +1788,7 @@ def test_identity_in_subgraph_non_graph_output(self):
name='cond_value',
data_type=TensorProto.BOOL,
dims=iter_num_value.shape,
- vals=cond_value.flatten().astype(np.bool).tolist(),
+ vals=cond_value.flatten().astype(bool).tolist(),
),
)
@@ -1778,7 +1927,7 @@ def test_duplicated_duplicated_constant_and_initializer(self):
model_proto = self.make_model(graph, producer_name="onnx-tests")
self.run_merge_duplicated_nodes_compare(["OUT"], {}, model_proto, op_type="Constant", remaining_op_num=0,
- graph_validator=lambda g: self._check_initializer_num(g, 2))
+ graph_validator=lambda g: self._check_initializer_num(g, 1))
def test_duplicated_node_is_graph_output(self):
node0 = helper.make_node('Add', inputs=["X", "X"], outputs=["value0"])
@@ -2032,7 +2181,7 @@ def test_const_fold_concat(self):
self.run_and_compare(["res"], {"inp": np.random.randn(6, 12).astype(np.float32)}, model_proto,
"Concat", 0)
- @check_opset_max_version(12, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_max_version(12, "Squeeze/Unsqueeze changed since opset 13")
def test_const_fold_unsqueeze_with_const(self):
shape = (6, 6)
const_tensor = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
@@ -2052,7 +2201,7 @@ def test_const_fold_unsqueeze_with_const(self):
self.run_and_compare(["res"], {"X": np.random.randn(1).astype(np.float32)}, model_proto,
"Unsqueeze", 0)
- @check_opset_min_version(13, "Squeeze/Unsqueeze changed in opset 13")
+ @check_opset_min_version(13, "Squeeze/Unsqueeze changed since opset 13")
def test_const_fold_unsqueeze_with_const_13(self):
shape = (6, 6)
const_tensor = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
@@ -2092,6 +2241,72 @@ def test_const_fold_cast_with_const(self):
self.run_and_compare(["res"], {"X": np.random.randn(*shape).astype(np.int64)}, model_proto,
"Cast", 0)
+ def test_const_fold_add(self):
+ shape = (6, 6)
+ const_tensor1 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ const_tensor2 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ node1 = helper.make_node("Constant", [], ["const1"], value=const_tensor1)
+ node2 = helper.make_node("Constant", [], ["const2"], value=const_tensor2)
+ node3 = helper.make_node("Add", ["const1", "const2"], ["add"])
+ node4 = helper.make_node("Add", ["add", "X"], ["res"])
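+ # The optimizer should fold const1 + const2 into a single constant, leaving only the Add that consumes X.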
+
+ graph = helper.make_graph(
+ [node1, node2, node3, node4],
+ "test_const_fold_add",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_and_compare(["res"], {"X": np.random.randn(*shape).astype(np.float32)}, model_proto,
+ "Add", 1)
+
+ def test_const_fold_sub(self):
+ shape = (6, 6)
+ const_tensor1 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ const_tensor2 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ node1 = helper.make_node("Constant", [], ["const1"], value=const_tensor1)
+ node2 = helper.make_node("Constant", [], ["const2"], value=const_tensor2)
+ node3 = helper.make_node("Sub", ["const1", "const2"], ["sub"])
+ node4 = helper.make_node("Sub", ["sub", "X"], ["res"])
+
+ graph = helper.make_graph(
+ [node1, node2, node3, node4],
+ "test_const_fold_sub",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_and_compare(["res"], {"X": np.random.randn(*shape).astype(np.float32)}, model_proto,
+ "Sub", 1)
+
+ def test_const_fold_mul(self):
+ shape = (6, 6)
+ const_tensor1 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ const_tensor2 = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
+ vals=np.random.randn(*shape).flatten().astype(np.float32))
+ node1 = helper.make_node("Constant", [], ["const1"], value=const_tensor1)
+ node2 = helper.make_node("Constant", [], ["const2"], value=const_tensor2)
+ node3 = helper.make_node("Mul", ["const1", "const2"], ["mul"])
+ node4 = helper.make_node("Mul", ["mul", "X"], ["res"])
+
+ graph = helper.make_graph(
+ [node1, node2, node3, node4],
+ "test_const_fold_mul",
+ [helper.make_tensor_value_info("X", TensorProto.FLOAT, shape)],
+ [helper.make_tensor_value_info("res", TensorProto.FLOAT, shape)],
+ )
+
+ model_proto = self.make_model(graph, producer_name="onnx-tests")
+ self.run_and_compare(["res"], {"X": np.random.randn(*shape).astype(np.float32)}, model_proto,
+ "Mul", 1)
+
def test_const_fold_split(self):
shape = (2, 6, 1)
const_tensor = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
@@ -2130,7 +2345,7 @@ def test_const_fold_split_one(self):
self.run_and_compare(["out4"], {"inp": np.random.randn(2, 6, 1).astype(np.float32)}, model_proto,
"Split", 0)
- @check_opset_min_version(13, "Split changed in opset 13")
+ @check_opset_min_version(13, "Split changed since opset 13")
def test_const_fold_split_const_splits_13(self):
shape = (2, 6, 1)
const_tensor = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
@@ -2153,7 +2368,7 @@ def test_const_fold_split_const_splits_13(self):
self.run_and_compare(["out4"], {"inp": np.random.randn(2, 3, 1).astype(np.float32)}, model_proto,
"Split", 0)
- @check_opset_max_version(12, "Split changed in opset 13")
+ @check_opset_max_version(12, "Split changed since opset 13")
def test_const_fold_split_const_splits(self):
shape = (2, 6, 1)
const_tensor = helper.make_tensor(name='const_tensor', data_type=TensorProto.FLOAT, dims=shape,
diff --git a/tests/test_tfjs_runner.py b/tests/test_tfjs_runner.py
index b0a2fedba..0f282a4d2 100644
--- a/tests/test_tfjs_runner.py
+++ b/tests/test_tfjs_runner.py
@@ -17,7 +17,7 @@ class TestTfjsRunner(unittest.TestCase):
def test_tfjs_runner(self):
float_array = np.array([[1.1, 2.2], [3.3, 4.4]], np.float32)
int_array = np.array([[1, 2], [3, 4]], np.int32)
- bool_array = np.array([[True, False], [True, True]], np.bool)
+ bool_array = np.array([[True, False], [True, True]], bool)
string_array = np.array([['Hello world', ''], ['π', 'Tensor']], np.str)
complex_array = np.array([[1 + 0.1j, 2 + 0.2j], [3 + 0.3j, 4 + 0.4j]], np.complex64)
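The `np.bool` change above (and the `np.object`/`np.str` swaps elsewhere in this patch) tracks NumPy's removal of its builtin-type aliases; a quick sketch of the drop-in replacements:

```python
import numpy as np

# np.bool, np.object and np.str were deprecated in NumPy 1.20 and removed in 1.24;
# the plain builtins are drop-in replacements.
bool_array = np.array([[True, False], [True, True]], bool)
string_array = np.array([["Hello world", ""], ["π", "Tensor"]], object)
assert bool_array.dtype == np.bool_ and string_array.dtype == object
```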
diff --git a/tf2onnx/constants.py b/tf2onnx/constants.py
index d5ce1fad4..c1314d0a8 100644
--- a/tf2onnx/constants.py
+++ b/tf2onnx/constants.py
@@ -15,8 +15,11 @@
MICROSOFT_DOMAIN = "com.microsoft"
CONTRIB_OPS_DOMAIN = "ai.onnx.contrib"
-# Default opset version for onnx domain
-PREFERRED_OPSET = 9
+# Default opset version for onnx domain.
+# The current update policy is that the default should be set to
+# the latest released version as of 18 months ago.
+# Opset 13 was released in ONNX v1.8.0 (Nov 2020).
+PREFERRED_OPSET = 13
# Default opset for custom ops
TENSORFLOW_OPSET = helper.make_opsetid("ai.onnx.converters.tensorflow", 1)
@@ -50,6 +53,7 @@
# Mapping opset to IR version.
# Note: opset 7 and opset 8 came out with IR3 but we need IR4 because of PlaceholderWithDefault
+# See https://github.com/onnx/onnx/blob/main/docs/Versioning.md#released-versions
OPSET_TO_IR_VERSION = {
- 1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 4, 8: 4, 9: 4, 10: 5, 11: 6, 12: 7, 13: 7, 14: 7, 15: 8
+ 1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 4, 8: 4, 9: 4, 10: 5, 11: 6, 12: 7, 13: 7, 14: 7, 15: 8, 16: 8, 17: 8
}
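For context, a sketch of how a table like `OPSET_TO_IR_VERSION` is typically consumed when stamping a model; `make_model_with_ir` is a hypothetical helper, not tf2onnx API:

```python
from onnx import helper

OPSET_TO_IR_VERSION = {
    1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 4, 8: 4, 9: 4,
    10: 5, 11: 6, 12: 7, 13: 7, 14: 7, 15: 8, 16: 8, 17: 8,
}

def make_model_with_ir(graph, opset):
    # hypothetical helper: pin both opset_imports and ir_version so that
    # older runtimes reject the model for the right reason
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", opset)])
    model.ir_version = OPSET_TO_IR_VERSION[opset]
    return model
```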
diff --git a/tf2onnx/convert.py b/tf2onnx/convert.py
index bdf7df58f..02a78e439 100644
--- a/tf2onnx/convert.py
+++ b/tf2onnx/convert.py
@@ -10,7 +10,7 @@
import argparse
import os
import sys
-from distutils.version import LooseVersion
+from packaging.version import Version
os.environ['TF_CPP_MIN_LOG_LEVEL'] = "3"
@@ -20,7 +20,7 @@
from tf2onnx import constants, logging, utils, optimizer
from tf2onnx import tf_loader
from tf2onnx.graph import ExternalTensorStorage
-from tf2onnx.tf_utils import compress_graph_def
+from tf2onnx.tf_utils import compress_graph_def, get_tf_version
@@ -86,11 +86,12 @@ def get_args():
# experimental
parser.add_argument("--inputs-as-nchw", help="transpose inputs as from nhwc to nchw")
+ parser.add_argument("--outputs-as-nchw", help="transpose outputs as from nhwc to nchw")
args = parser.parse_args()
args.shape_override = None
if args.input:
- # for backward compativility
+ # for backward compatibility
args.graphdef = args.input
if args.graphdef or args.checkpoint:
if not args.inputs or not args.outputs:
@@ -112,6 +113,8 @@ def get_args():
args.rename_inputs = args.rename_inputs.split(",")
if args.inputs_as_nchw:
args.inputs_as_nchw = args.inputs_as_nchw.split(",")
+ if args.outputs_as_nchw:
+ args.outputs_as_nchw = args.outputs_as_nchw.split(",")
if args.target:
args.target = args.target.split(",")
if args.signature_def:
@@ -275,6 +278,7 @@ def main():
input_names=inputs,
output_names=outputs,
inputs_as_nchw=args.inputs_as_nchw,
+ outputs_as_nchw=args.outputs_as_nchw,
large_model=args.large_model,
tensors_to_rename=tensors_to_rename,
ignore_default=args.ignore_default,
@@ -356,8 +360,8 @@ def _is_legacy_keras_model(model):
def _from_keras_tf1(model, opset=None, custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
- inputs_as_nchw=None, extra_opset=None, shape_override=None, target=None,
- large_model=False, output_path=None):
+ inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None, shape_override=None,
+ target=None, large_model=False, output_path=None):
"""from_keras for tf 1.15"""
input_names = [t.name for t in model.inputs]
output_names = [t.name for t in model.outputs]
@@ -392,6 +396,7 @@ def _from_keras_tf1(model, opset=None, custom_ops=None, custom_op_handlers=None,
input_names=input_names,
output_names=output_names,
inputs_as_nchw=inputs_as_nchw,
+ outputs_as_nchw=outputs_as_nchw,
large_model=large_model,
tensors_to_rename=tensors_to_rename,
initialized_tables=initialized_tables,
@@ -401,7 +406,7 @@ def _from_keras_tf1(model, opset=None, custom_ops=None, custom_op_handlers=None,
def from_keras(model, input_signature=None, opset=None, custom_ops=None, custom_op_handlers=None,
- custom_rewriter=None, inputs_as_nchw=None, extra_opset=None, shape_override=None,
+ custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None, shape_override=None,
target=None, large_model=False, output_path=None, optimizers=None):
"""Returns a ONNX model_proto for a tf.keras model.
@@ -417,7 +422,8 @@ def from_keras(model, input_signature=None, opset=None, custom_ops=None, custom_
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
optimizers: list (subset) of tf2onnx optimizers if applying all optimizers is not desired.
@@ -425,9 +431,9 @@ def from_keras(model, input_signature=None, opset=None, custom_ops=None, custom_
Returns:
An ONNX model_proto and an external_tensor_storage dict.
"""
- if LooseVersion(tf.__version__) < "2.0":
+ if get_tf_version() < Version("2.0"):
return _from_keras_tf1(model, opset, custom_ops, custom_op_handlers, custom_rewriter, inputs_as_nchw,
- extra_opset, shape_override, target, large_model, output_path)
+ outputs_as_nchw, extra_opset, shape_override, target, large_model, output_path)
old_out_names = _rename_duplicate_keras_model_names(model)
from tensorflow.python.keras.saving import saving_utils as _saving_utils # pylint: disable=import-outside-toplevel
@@ -500,6 +506,7 @@ def wrap_call(*args, training=False, **kwargs):
input_names=input_names,
output_names=output_names,
inputs_as_nchw=inputs_as_nchw,
+ outputs_as_nchw=outputs_as_nchw,
large_model=large_model,
tensors_to_rename=tensors_to_rename,
initialized_tables=initialized_tables,
@@ -509,8 +516,8 @@ def wrap_call(*args, training=False, **kwargs):
def from_function(function, input_signature=None, opset=None, custom_ops=None, custom_op_handlers=None,
- custom_rewriter=None, inputs_as_nchw=None, extra_opset=None, shape_override=None, target=None,
- large_model=False, output_path=None):
+ custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
+ shape_override=None, target=None, large_model=False, output_path=None):
"""Returns a ONNX model_proto for a tf.function.
Args:
@@ -525,17 +532,18 @@ def from_function(function, input_signature=None, opset=None, custom_ops=None, c
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
Returns:
An ONNX model_proto and an external_tensor_storage dict.
"""
- if LooseVersion(tf.__version__) < "2.0":
+ if get_tf_version() < Version("2.0"):
raise NotImplementedError("from_function requires tf-2.0 or newer")
- if not input_signature:
+ if input_signature is None:
raise ValueError("from_function requires input_signature")
concrete_func = function.get_concrete_function(*input_signature)
@@ -564,6 +572,7 @@ def from_function(function, input_signature=None, opset=None, custom_ops=None, c
input_names=input_names,
output_names=output_names,
inputs_as_nchw=inputs_as_nchw,
+ outputs_as_nchw=outputs_as_nchw,
large_model=large_model,
tensors_to_rename=tensors_to_rename,
initialized_tables=initialized_tables,
@@ -573,8 +582,9 @@ def from_function(function, input_signature=None, opset=None, custom_ops=None, c
def from_graph_def(graph_def, name=None, input_names=None, output_names=None, opset=None, custom_ops=None,
- custom_op_handlers=None, custom_rewriter=None, inputs_as_nchw=None, extra_opset=None,
- shape_override=None, target=None, large_model=False, tensors_to_rename=None, output_path=None):
+ custom_op_handlers=None, custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None,
+ extra_opset=None, shape_override=None, target=None, large_model=False,
+ tensors_to_rename=None, output_path=None):
"""Returns a ONNX model_proto for a tensorflow graphdef.
Args:
@@ -591,7 +601,8 @@ def from_graph_def(graph_def, name=None, input_names=None, output_names=None, op
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
large_model: use the ONNX external tensor storage format
output_path: save model to output_path
@@ -628,6 +639,7 @@ def from_graph_def(graph_def, name=None, input_names=None, output_names=None, op
input_names=input_names,
output_names=output_names,
inputs_as_nchw=inputs_as_nchw,
+ outputs_as_nchw=outputs_as_nchw,
large_model=large_model,
tensors_to_rename=tensors_to_rename,
initialized_tables=initialized_tables,
@@ -636,5 +648,59 @@ def from_graph_def(graph_def, name=None, input_names=None, output_names=None, op
return model_proto, external_tensor_storage
+def from_tflite(tflite_path, input_names=None, output_names=None, opset=None, custom_ops=None, custom_op_handlers=None,
+ custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None, shape_override=None,
+ target=None, large_model=False, output_path=None):
+ """Returns a ONNX model_proto for a tflite model file.
+
+ Args:
+ tflite_path: the tflite model file full path
+ input_names: list of input names
+ output_names: list of output names
+ opset: the opset to be used for the ONNX model, default is the latest
+ custom_ops: if a model contains ops not recognized by onnx runtime,
+ you can tag these ops with a custom op domain so that the
+ runtime can still open the model. Type is a dictionary `{op name: domain}`.
+ custom_op_handlers: dictionary of custom ops handlers
+ custom_rewriter: list of custom graph rewriters
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
+ extra_opset: list of extra opset's, for example the opset's used by custom ops
+ shape_override: dict with inputs that override the shapes given by tensorflow
+ target: list of workarounds applied to help certain platforms
+ large_model: use the ONNX external tensor storage format
+ output_path: save model to output_path
+
+ Returns:
+ An ONNX model_proto and an external_tensor_storage dict.
+ """
+ if not tflite_path:
+ raise ValueError("tflite_path needs to be provided")
+
+ with tf.device("/cpu:0"):
+ model_proto, external_tensor_storage = _convert_common(
+ None,
+ tflite_path=tflite_path,
+ name=os.path.splitext(os.path.basename(tflite_path))[0],
+ continue_on_error=True,
+ target=target,
+ opset=opset,
+ custom_ops=custom_ops,
+ custom_op_handlers=custom_op_handlers,
+ custom_rewriter=custom_rewriter,
+ extra_opset=extra_opset,
+ shape_override=shape_override,
+ input_names=input_names,
+ output_names=output_names,
+ inputs_as_nchw=inputs_as_nchw,
+ outputs_as_nchw=outputs_as_nchw,
+ large_model=large_model,
+ tensors_to_rename=None,
+ initialized_tables=None,
+ output_path=output_path)
+
+ return model_proto, external_tensor_storage
+
+
if __name__ == "__main__":
main()
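A usage sketch of the new `from_tflite` entry point together with `outputs_as_nchw`; the file path and tensor names below are placeholders, not taken from the patch:

```python
import tf2onnx

# "model.tflite" and the tensor names are placeholders for a real model.
model_proto, _ = tf2onnx.convert.from_tflite(
    "model.tflite",
    input_names=["serving_default_input:0"],
    output_names=["StatefulPartitionedCall:0"],
    outputs_as_nchw=["StatefulPartitionedCall:0"],  # insert NHWC->NCHW transpose on this output
    opset=13,
    output_path="model.onnx",
)
```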
diff --git a/tf2onnx/custom_opsets/string_ops.py b/tf2onnx/custom_opsets/string_ops.py
index 24b854bc3..303fcd94b 100644
--- a/tf2onnx/custom_opsets/string_ops.py
+++ b/tf2onnx/custom_opsets/string_ops.py
@@ -30,7 +30,7 @@ def version_1(cls, ctx, node, **kwargs):
del node.attr[a]
unsqueeze_node = GraphBuilder(ctx).make_unsqueeze({'data': node.input[1], 'axes': [0]}, return_node=True)
- skip_empty_const = ctx.make_const(utils.make_name('skip_empty_const'), np.array([skip_empty], np.bool))
+ skip_empty_const = ctx.make_const(utils.make_name('skip_empty_const'), np.array([skip_empty], bool))
ctx.replace_inputs(node, [node.input[0], unsqueeze_node.output[0], skip_empty_const.output[0]])
@tf_op("StringToHashBucketFast", domain=constants.CONTRIB_OPS_DOMAIN)
@@ -53,8 +53,8 @@ def version_1(cls, ctx, node, **kwargs):
rewrite = node.get_attr_str("rewrite")
utils.make_sure(node.get_attr_value("replace_global") != 0,
"Can not convert StaticRegexReplace if replace_global is False")
- pattern_node = ctx.make_const(utils.make_name("pattern"), np.array([pattern], np.object))
- rewrite_node = ctx.make_const(utils.make_name("rewrite"), np.array([rewrite], np.object))
+ pattern_node = ctx.make_const(utils.make_name("pattern"), np.array([pattern], object))
+ rewrite_node = ctx.make_const(utils.make_name("rewrite"), np.array([rewrite], object))
del node.attr["pattern"]
del node.attr["rewrite"]
del node.attr["replace_global"]
@@ -69,7 +69,7 @@ def version_1(cls, ctx, node, **kwargs):
if separator is None:
separator = b''
separator = separator.decode('UTF-8')
- separator_node = ctx.make_const(utils.make_name("separator"), np.array([separator], np.object))
+ separator_node = ctx.make_const(utils.make_name("separator"), np.array([separator], object))
axis_node = ctx.make_const(utils.make_name("axis"), np.array([0], np.int64))
inps_with_shapes = [i for i in node.input if ctx.get_shape(i) != []]
shape_node = None
diff --git a/tf2onnx/graph.py b/tf2onnx/graph.py
index 18a16269b..82c93c695 100644
--- a/tf2onnx/graph.py
+++ b/tf2onnx/graph.py
@@ -582,7 +582,7 @@ def make_const(self, name, np_val, skip_conversion=False, raw=True):
raw: whether to store data at field of raw_data or the specific field according to its dtype
"""
np_val_flat = np_val.flatten()
- is_bytes = np_val.dtype == np.object and len(np_val_flat) > 0 and isinstance(np_val_flat[0], bytes)
+ is_bytes = np_val.dtype == object and len(np_val_flat) > 0 and isinstance(np_val_flat[0], bytes)
if raw and not is_bytes:
onnx_tensor = numpy_helper.from_array(np_val, name)
else:
@@ -751,10 +751,10 @@ def reset_nodes(self, ops):
for n in self.inputs:
if n not in ops:
- raise ValueError("graph input " + n + " not exist")
+ raise ValueError("graph input '" + n.name + "' not exist")
for o in self.outputs:
if o not in self._output_to_node_name:
- raise ValueError("graph output " + o + " not exist")
+ raise ValueError("graph output '" + o.name + "' not exist")
self._dtypes = remained_dtypes
self._output_shapes = remained_shapes
@@ -1791,9 +1791,11 @@ def _parse_graph_input(g, graph_proto, const_node_names):
# because for subgraphs, the input orders matter.
for graph_input in graph_proto.input:
name = graph_input.name
- shape = shapes[name]
- dtype = dtypes[name]
- if name not in const_node_names:
- g.add_graph_input(name, dtype, shape)
- else:
- g.add_graph_input_with_default(name, g.get_node_by_name(name), dtype, shape)
+ const_initializer_node = g.get_node_by_output_in_current_graph(name)
+ if const_initializer_node is None: # is actual input rather than initializer
+ shape = shapes[name]
+ dtype = dtypes[name]
+ if name not in const_node_names:
+ g.add_graph_input(name, dtype, shape)
+ else:
+ g.add_graph_input_with_default(name, g.get_node_by_name(name), dtype, shape)
diff --git a/tf2onnx/onnx_opset/controlflow.py b/tf2onnx/onnx_opset/controlflow.py
index b6dd5a14b..b244bd3f1 100644
--- a/tf2onnx/onnx_opset/controlflow.py
+++ b/tf2onnx/onnx_opset/controlflow.py
@@ -381,29 +381,32 @@ def version_7(cls, ctx, node, **kwargs):
# may be removed from output_names below
output_names = node.output.copy()
- # Make maximum_iterations int64 and replace -1(tf) with maxsize(onnx). If the const node has no other
+ # Make maximum_iterations int64. If the const node has no other
# consumers, modify it in place. Otherwise, make a new const node and leave the original unchanged.
# if maximum_iterations is not const,should add an cast node(cast to int64)
maximum_iterations_name = node.input[1]
if node.inputs[1].is_const():
maximum_iterations = node.inputs[1].get_tensor_value()
- if maximum_iterations == -1:
- maximum_iterations = np.iinfo(np.int64).max
- consumers = ctx.find_output_consumers(maximum_iterations_name)
- external_consumers = [c for c in consumers if c != node and c.type != 'TensorListReserve']
- if len(external_consumers) == 0:
- ctx.remove_node(node.inputs[1].name)
+ # maximum_iterations of -1 (tf) means no maximum trip count is set.
+ # The onnx Loop op takes an optional int64 input `M`, the maximum trip count; pass an empty string to skip it.
+ if maximum_iterations != -1:
+ consumers = ctx.find_output_consumers(maximum_iterations_name)
+ external_consumers = [c for c in consumers if c != node and c.type != 'TensorListReserve']
+ if len(external_consumers) == 0:
+ ctx.remove_node(node.inputs[1].name)
+ else:
+ maximum_iterations_name = utils.make_name(node.inputs[1].name)
+ ctx.make_const(maximum_iterations_name, np.array(maximum_iterations, dtype=np.int64))
+ ctx.replace_input(node, node.input[1], maximum_iterations_name, 1)
+ maximum_iterations_m = maximum_iterations_name
else:
- maximum_iterations_name = utils.make_name(node.inputs[1].name)
- ctx.make_const(maximum_iterations_name, np.array(maximum_iterations, dtype=np.int64))
- ctx.replace_input(node, node.input[1], maximum_iterations_name, 1)
- maximum_iterations_int64 = maximum_iterations_name
+ maximum_iterations_m = ""
else:
cast_inputs = [maximum_iterations_name]
attr = {"to": onnx_pb.TensorProto.INT64}
cast_name = node.name + "_cast"
cast_node = ctx.make_node("Cast", cast_inputs, attr, name=cast_name)
- maximum_iterations_int64 = cast_node.output[0]
+ maximum_iterations_m = cast_node.output[0]
cond_name = node.get_attr_str("cond")
cond_graph = find_function(cond_name)
@@ -427,7 +430,7 @@ def version_7(cls, ctx, node, **kwargs):
cond_input_to_state_var[cond_graph.input_names[idx]] = maximum_iterations_name
continue
if idx < 2:
- # skip [0,1] loop_counter, max_iterations
+ # skip [0,1] loop_counter, max_iterations
continue
n = node.inputs[idx]
if n.type in ["TensorListReserve", "TensorListResize"]:
@@ -511,7 +514,7 @@ def version_7(cls, ctx, node, **kwargs):
output_names = output_names[2:]
branches = {"body": body}
- loop_node = ctx.make_node("Loop", [maximum_iterations_int64, cond_outputs[0]] + loop_vars,
+ loop_node = ctx.make_node("Loop", [maximum_iterations_m, cond_outputs[0]] + loop_vars,
output_count=len(output_shapes), name=node.name + "_loop",
shapes=output_shapes, dtypes=output_dtypes, skip_conversion=True,
branches=branches)
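The branch above passes an empty string for the Loop op's optional trip-count input `M` when tf's `maximum_iterations` is -1. A self-contained sketch of a hand-built Loop with `M` omitted (all names illustrative):

```python
from onnx import helper, TensorProto

# Loop body: state_out = state_in + 1, cond_out = cond_in (runs while cond is True)
one = helper.make_tensor("one", TensorProto.FLOAT, [1], [1.0])
body = helper.make_graph(
    [
        helper.make_node("Identity", ["cond_in"], ["cond_out"]),
        helper.make_node("Constant", [], ["one_c"], value=one),
        helper.make_node("Add", ["state_in", "one_c"], ["state_out"]),
    ],
    "loop_body",
    [
        helper.make_tensor_value_info("iter_num", TensorProto.INT64, []),
        helper.make_tensor_value_info("cond_in", TensorProto.BOOL, []),
        helper.make_tensor_value_info("state_in", TensorProto.FLOAT, [1]),
    ],
    [
        helper.make_tensor_value_info("cond_out", TensorProto.BOOL, []),
        helper.make_tensor_value_info("state_out", TensorProto.FLOAT, [1]),
    ],
)
# Empty string in the first input slot omits the optional trip count M,
# mirroring tf's maximum_iterations == -1 (no maximum).
loop_node = helper.make_node("Loop", ["", "cond", "state"], ["final_state"], body=body)
```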
diff --git a/tf2onnx/onnx_opset/generator.py b/tf2onnx/onnx_opset/generator.py
index 90eb1b62c..0b59dca6b 100644
--- a/tf2onnx/onnx_opset/generator.py
+++ b/tf2onnx/onnx_opset/generator.py
@@ -8,7 +8,7 @@
import logging
import numpy as np
-from onnx import onnx_pb, numpy_helper
+from onnx import onnx_pb, numpy_helper, helper
from tf2onnx import utils
from tf2onnx.handler import tf_op
from tf2onnx.graph_builder import GraphBuilder
@@ -242,6 +242,17 @@ def version_1(cls, ctx, node, **kwargs):
name=node.name, outputs=node.output,
shapes=shapes, dtypes=dtypes)
+ @classmethod
+ def version_9(cls, ctx, node, **kwargs):
+ dtypes = node.output_dtypes
+ ctx.remove_node(node.name)
+ shape = ctx.make_node("Shape", node.input).output[0]
+ zero_tensor = helper.make_tensor("value", dtypes[0], [1], vals=[0])
+ ctx.make_node("ConstantOfShape", inputs=[shape],
+ attr={'value': zero_tensor},
+ name=node.name, outputs=node.output,
+ dtypes=dtypes)
+
@tf_op(["IteratorV2", "FIFOQueueV2"])
class Iterator:
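Roughly, the new `version_9` handler lowers ZerosLike(X) to a `Shape` feeding a `ConstantOfShape`; a sketch assuming a float input (the handler uses the node's actual output dtype):

```python
from onnx import helper, TensorProto

# ZerosLike(X) at opset >= 9, as the handler above emits: Shape + ConstantOfShape.
zero = helper.make_tensor("value", TensorProto.FLOAT, [1], vals=[0.0])
nodes = [
    helper.make_node("Shape", ["X"], ["x_shape"]),
    helper.make_node("ConstantOfShape", ["x_shape"], ["zeros"], value=zero),
]
```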
diff --git a/tf2onnx/onnx_opset/math.py b/tf2onnx/onnx_opset/math.py
index b726e96a2..bdebd1fa6 100644
--- a/tf2onnx/onnx_opset/math.py
+++ b/tf2onnx/onnx_opset/math.py
@@ -826,3 +826,15 @@ class HardSwish:
@classmethod
def version_14(cls, ctx, node, **kwargs):
pass
+
+
+@tf_op(["L2Normalization"], onnx_op="LpNormalization")
+class L2Normalization:
+ @classmethod
+ def version_1(cls, ctx, node, **kwargs):
+ axis = node.get_attr_value("axis")
+ if axis is None:
+ # by default use the last dim
+ axis = -1
+ node.set_attr("axis", axis)
+ node.set_attr("p", 2)
diff --git a/tf2onnx/onnx_opset/nn.py b/tf2onnx/onnx_opset/nn.py
index 5ff8567d3..adde19641 100644
--- a/tf2onnx/onnx_opset/nn.py
+++ b/tf2onnx/onnx_opset/nn.py
@@ -853,7 +853,7 @@ def convert_symmetric_pads(cls, ctx, node):
output = node.output[0]
shape = ctx.make_node("Shape", [output]).output[0]
dims = ctx.make_node("Split", [shape], output_count=rank).output
- two_false = ctx.make_const(utils.make_name("two_false"), np.array([False, False], np.bool)).output[0]
+ two_false = ctx.make_const(utils.make_name("two_false"), np.array([False, False], bool)).output[0]
inv_second = ctx.make_const(utils.make_name("inv_second"), np.array([1, -1], np.int64)).output[0]
dec_second = ctx.make_const(utils.make_name("dec_second"), np.array([0, 1], np.int64)).output[0]
for a in non_zero_axes:
@@ -1325,7 +1325,7 @@ def any_version_after11(cls, opset, ctx, node, **kwargs):
g.add_graph_output(cond_out_name, TensorProto.BOOL, [])
g.add_graph_output(squeeze_x.output[0], ctx.get_dtype(node.input[0]), [-1, -1, -1])
trip_node = ctx.make_node("Size", [box_ind])
- cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=np.bool))
+ cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=bool))
ctx.remove_node(node.name)
branches = {"body": g}
inner_loop = ctx.make_node("Loop", [trip_node.output[0], cond_const.output[0]], name=node.name,
@@ -1638,7 +1638,7 @@ def version_7(cls, ctx, node, **kwargs):
# 2: "loop" to generate mask matrix: generate col or row of matrix one by one
g = ctx.create_new_graph_with_same_config()
node_name = utils.make_name("const_zero_bool")
- const_zero_bool = g.make_const(name=node_name, np_val=np.array([[0]]).astype(np.bool))
+ const_zero_bool = g.make_const(name=node_name, np_val=np.array([[0]]).astype(bool))
g.set_dtype(const_zero_bool.output[0], onnx_pb.TensorProto.BOOL)
g.add_graph_input("trip", onnx_pb.TensorProto.INT64, [])
@@ -1668,7 +1668,7 @@ def version_7(cls, ctx, node, **kwargs):
line_num = ctx.make_node(op_type="Gather", inputs=[shape.output[0], col_or_row_num_index.output[0]])
trip_cnt = line_num.output[0]
node_name = utils.make_name("true")
- cond = ctx.make_const(name=node_name, np_val=np.array(1).astype(np.bool))
+ cond = ctx.make_const(name=node_name, np_val=np.array(1).astype(bool))
col_init = one_line.output[0]
branches = {"body": g}
diff --git a/tf2onnx/onnx_opset/tensor.py b/tf2onnx/onnx_opset/tensor.py
index 49564223e..044a253d4 100644
--- a/tf2onnx/onnx_opset/tensor.py
+++ b/tf2onnx/onnx_opset/tensor.py
@@ -497,7 +497,7 @@ def _make_gathernd_inner_loop(ctx, params, index, dtype):
# gather_res = gather(gather_cur, index[i])
scope_name = utils.make_name("gathernd_inner_loop")
trip_node = ctx.make_node("Size", [index.output[0]])
- cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=np.bool))
+ cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=bool))
trip_name = utils.make_name("i")
cond_name = utils.make_name("cond")
cond_out_name = utils.make_name("cond_out")
@@ -548,7 +548,7 @@ def make_gathernd(ctx, params, indices, output, scope_name, t_params, shapes, dt
 # outter loop for each index
 # for (int i=0; i<outter_loop_count; i++)
- cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=np.bool))
+ cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=bool))
@@ ... @@ class StridedSlice:
+ for bit in range(32):
+ new_axis_flag = (new_axis_mask >> bit) & 1
+ ellipsis_flag = (ellipsis_mask >> bit) & 1
+ num_new += not ellipsis_flag and new_axis_flag
+
+ for bit in range(32):
+ if (ellipsis_mask >> bit) & 1:
+ ellipsis_gap = len(ctx.get_shape(input_x)) - param_rank + num_new + 1
+ elif (new_axis_mask >> bit) & 1:
+ effective_bit = bit if not ellipsis_gap else bit + ellipsis_gap - 1
+ unqueeze_at.append(effective_bit)
+ begin_mask |= 1 << bit
+ end_mask |= 1 << bit
+
input_x = GraphBuilder(ctx).make_unsqueeze(
{'data': input_x, 'axes': unqueeze_at})
@@ -2797,7 +2830,7 @@ def cum_prod_of_vector(vector):
shape = ctx.get_shape(vector)
rank = shape[0] if shape is not None else -1
if rank != -1:
- lower_tri = np.tri(rank, rank, dtype=np.bool)
+ lower_tri = np.tri(rank, rank, dtype=bool)
lower_triangular_bool = ctx.make_const(utils.make_name("lower_tri_const"), lower_tri).output[0]
else:
rank = ctx.make_node("Shape", [vector]).output[0]
@@ -3273,7 +3306,7 @@ def normalize():
body_graph.add_graph_output(padded_output.output[0], ctx.get_dtype(node.input[0]), per_loop_shape)
body_graph.add_graph_output(gap_k.output[0], TensorProto.INT64, [-1])
# make loop
- cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=np.bool))
+ cond_const = ctx.make_const(utils.make_name("cond"), np.ones((), dtype=bool))
branches = {"body": body_graph}
main_loop = ctx.make_node('Loop', [total_k.output[0], cond_const.output[0]], output_count=2, branches=branches)
# reshape output
diff --git a/tf2onnx/optimizer/const_fold_optimizer.py b/tf2onnx/optimizer/const_fold_optimizer.py
index cc806f4a0..81f479ca3 100644
--- a/tf2onnx/optimizer/const_fold_optimizer.py
+++ b/tf2onnx/optimizer/const_fold_optimizer.py
@@ -162,6 +162,30 @@ def _fold_unsqueeze(node, graph):
const_val_after_unsqueeze = const_val.reshape(shape_out)
return [const_val_after_unsqueeze]
+ @staticmethod
+ @_register_func("Mul")
+ def _fold_mul(node, graph):
+ const_val1 = node.inputs[0].get_tensor_value(as_list=False)
+ const_val2 = node.inputs[1].get_tensor_value(as_list=False)
+ const_val_after_mul = np.multiply(const_val1, const_val2)
+ return [const_val_after_mul]
+
+ @staticmethod
+ @_register_func("Add")
+ def _fold_add(node, graph):
+ const_val1 = node.inputs[0].get_tensor_value(as_list=False)
+ const_val2 = node.inputs[1].get_tensor_value(as_list=False)
+ const_val_after_add = np.add(const_val1, const_val2)
+ return [const_val_after_add]
+
+ @staticmethod
+ @_register_func("Sub")
+ def _fold_sub(node, graph):
+ const_val1 = node.inputs[0].get_tensor_value(as_list=False)
+ const_val2 = node.inputs[1].get_tensor_value(as_list=False)
+ const_val_after_sub = np.subtract(const_val1, const_val2)
+ return [const_val_after_sub]
+
@staticmethod
@_register_func("Split")
def _fold_split(node, graph):
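All three new folds follow the same recipe: fetch both constant inputs, evaluate with numpy, return the folded tensor. The same semantics outside the optimizer plumbing:

```python
import numpy as np

# What _fold_add/_fold_sub/_fold_mul compute, minus the graph plumbing:
FOLDS = {"Add": np.add, "Sub": np.subtract, "Mul": np.multiply}

def fold(op_type, const_val1, const_val2):
    return FOLDS[op_type](const_val1, const_val2)

a, b = np.ones((6, 6), np.float32), np.full((6, 6), 2.0, np.float32)
assert (fold("Mul", a, b) == 2.0).all()
```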
diff --git a/tf2onnx/optimizer/reshape_optimizer.py b/tf2onnx/optimizer/reshape_optimizer.py
index 9eac9929c..f2ff6aa72 100644
--- a/tf2onnx/optimizer/reshape_optimizer.py
+++ b/tf2onnx/optimizer/reshape_optimizer.py
@@ -54,7 +54,7 @@ def _optimize_reshape(self, node, graph):
symbolic_shape.append(SymbolicTensorElement.from_variable(i))
else:
symbolic_shape.append(SymbolicTensorElement.from_const(d))
- feed_dict[n.output[0]] = np.array(symbolic_shape, np.object)
+ feed_dict[n.output[0]] = np.array(symbolic_shape, object)
try:
symbolic_res = SymbolicExecutor(graph).compute_outputs([node.input[1]], feed_dict)
except SymbolicExecutionException:
diff --git a/tf2onnx/optimizer/transpose_optimizer.py b/tf2onnx/optimizer/transpose_optimizer.py
index 0c55d039e..1a82a11fa 100644
--- a/tf2onnx/optimizer/transpose_optimizer.py
+++ b/tf2onnx/optimizer/transpose_optimizer.py
@@ -205,6 +205,7 @@ def _initialize_handlers(self):
"Identity": self._identity_handler,
"LeakyRelu": self._simple_through_handler,
"Log": self._simple_through_handler,
+ "LogSoftmax": self._softmax_handler,
"Max": self._maxmin_handler,
"Min": self._maxmin_handler,
"Mul": self._mul_handler,
@@ -223,6 +224,7 @@ def _initialize_handlers(self):
"Relu": self._simple_through_handler,
"Shape": self._shape_handler,
"Sigmoid": self._simple_through_handler,
+ "Softmax": self._softmax_handler,
"Sum": self._sum_handler,
"Slice": self._slice_handler,
"Split": self._split_handler,
@@ -669,11 +671,19 @@ def _concat_handler(self, trans, node):
def _split_handler(self, trans, node):
# Todo: need handle cases where Split node has more than 1 outputs.
+ split = None
+ if self._g.opset >= 13 and len(node.input) > 1 and node.inputs[1].is_const():
+ # split is an input not attr since opset 13
+ split = node.inputs[1].get_tensor_value(as_list=True)
if self._handle_node_having_branches(trans, node):
perm = trans.get_attr_value("perm")
axis = node.get_attr_value("axis", 0)
new_axis = perm[axis]
node.set_attr("axis", new_axis)
+ if split:
+ new_axes_np = np.array(split, dtype=np.int64)
+ new_axes_const = self._g.make_const(utils.make_name(node.inputs[1].name), new_axes_np)
+ self._g.replace_inputs(node, [node.input[0], new_axes_const.output[0]])
return True
return False
@@ -745,7 +755,7 @@ def _calculate_new_attr(ori_perm, ori_squeeze_axes):
shape_after_trans = [input_shape[i] for i in ori_perm]
output_shape = [shape_after_trans[i] for i in range(n) if i not in ori_squeeze_axes]
# calculate new_perm
- # after switch, the output shape should be same, using this condtion we can figure the new perm
+ # after switch, the output shape should be same, using this condition we can figure the new perm
shape_after_squeeze = [input_shape[i] for i in range(n) if i not in new_squeeze_axes]
new_perm = [shape_after_squeeze.index(i) for i in output_shape]
@@ -755,7 +765,7 @@ def _calculate_new_attr(ori_perm, ori_squeeze_axes):
return False
axes = None
- # in opset 13, axes is an input not attr
+ # axes is an input not attr since opset 13
if node.get_attr("axes"):
axes = node.get_attr("axes").ints
if len(node.input) > 1 and node.inputs[1].is_const():
@@ -827,6 +837,28 @@ def permute_pads(pads):
def _prelu_handler(self, trans, node):
return self._handle_node_having_branches(trans, node)
+ def _softmax_handler(self, trans, node):
+ trans_rank = get_transpose_rank(trans)
+ perm = trans.get_attr("perm").ints
+
+ if self._g.opset >= 13:
+ # Softmax operates on an arbitrary axis since opset 13
+ axis = node.get_attr_value("axis", -1)
+ new_axis = perm[axis + trans_rank if axis < 0 else axis]
+ if not self._switch_transpose_and_node(node, trans):
+ return False
+ node.set_attr("axis", new_axis)
+ return True
+
+ # For older opsets, the "axis" attribute determines the coercion point for coercing the input tensor to 2D.
+ # We can safely switch transpose and node if the permutation does not make any axes cross that boundary.
+ coercion_axis = node.get_attr_value("axis", 1)
+ for from_axis, to_axis in enumerate(perm):
+ if (from_axis < coercion_axis <= to_axis) or (from_axis >= coercion_axis > to_axis):
+ return False
+
+ return self._switch_transpose_and_node(node, trans)
+
def _arg_min_max_handler(self, trans, node):
axis = node.get_attr_value("axis", 0)
node.set_attr("axes", [axis])
diff --git a/tf2onnx/rewriter/lstm_tf2_rewriter.py b/tf2onnx/rewriter/lstm_tf2_rewriter.py
index 414bf6c98..845bb2a84 100644
--- a/tf2onnx/rewriter/lstm_tf2_rewriter.py
+++ b/tf2onnx/rewriter/lstm_tf2_rewriter.py
@@ -16,29 +16,52 @@
# pylint: disable=invalid-name,unused-argument,missing-docstring, unused-variable
+def _make_lstm_pattern_from_params(params):
+ return make_lstm_pattern(enter_or_id="Identity") if not params.get("from_keras", False) \
+ else make_lstm_pattern(
+ from_keras=True,
+ use_bias=params.get("use_bias", False),
+ activation=params.get("activation", ""),
+ recurrent_activation=params.get("recurrent_activation", "")
+ )
def rewriter_lstm_tf2(g, ops):
-
- pattern1 = make_lstm_pattern(enter_or_id="Identity") # TF LSTM
- pattern2 = make_lstm_pattern(from_keras=True, use_bias=False) # keras LSTM
- pattern3 = make_lstm_pattern(from_keras=True, use_bias=True) # keras LSTM with bias
-
- for pattern in [pattern1, pattern2, pattern3]:
+ lstm_params_variations = [
+ # default activations
+ {"enter_or_id": "Identity"}, # TF LSTM
+ {"from_keras": True, "use_bias": False}, # keras LSTM
+ {"from_keras": True, "use_bias": True}, # keras LSTM with bias
+ # hard sigmoid as recurrent activation
+ {"from_keras": True, "use_bias": False, "recurrent_activation": "hard_sigmoid"}, # keras LSTM
+ {"from_keras": True, "use_bias": True, "recurrent_activation": "hard_sigmoid"} # keras LSTM with bias
+ # Note: add other LSTM variations as needed
+ ]
+ for params in lstm_params_variations:
+ pattern = _make_lstm_pattern_from_params(params)
matcher = GraphMatcher(pattern, allow_reorder=False)
match_results = list(matcher.match_ops(ops))
for match_result in match_results:
- from_keras = pattern != pattern1
+ is_ft_hard_sigmoid = params.get("recurrent_activation", "") == "hard_sigmoid"
+ recurrent_activation_f = "HardSigmoid" if is_ft_hard_sigmoid else \
+ match_result.get_op("ft").type
+ activation_g = match_result.get_op("gt").type
+ activation_h = match_result.get_op("ct'").type
+
+ default_activations = ["Relu", "Sigmoid", "Tanh"]
+ if ((activation_g not in default_activations) or
+ (activation_h not in default_activations) or
+ (not is_ft_hard_sigmoid and recurrent_activation_f not in default_activations)):
+ continue
+
activations_fgh = [
- match_result.get_op("ft").type,
- match_result.get_op("gt").type,
- match_result.get_op("ct'").type
+ recurrent_activation_f,
+ activation_g,
+ activation_h
]
- supported_activations = ['Relu', 'Sigmoid', 'Tanh']
- if any(f not in supported_activations for f in activations_fgh):
- continue
# extract input x_t
+ from_keras = params.get("from_keras", False)
if from_keras:
get_item = match_result.get_op("xt")
else:
@@ -134,7 +157,7 @@ def has_tensor_list_consumer(n):
# Wb and Rb are concatenated
b_idx = None
- if pattern is pattern3:
+ if from_keras and params.get("use_bias", False):
bias_add = match_result.get_op("bias_add")
if bias_add is not None and bias_add.data_format != "NHWC":
continue
diff --git a/tf2onnx/rewriter/random_normal_rewriter.py b/tf2onnx/rewriter/random_normal_rewriter.py
index 6d106907e..3691b5f00 100644
--- a/tf2onnx/rewriter/random_normal_rewriter.py
+++ b/tf2onnx/rewriter/random_normal_rewriter.py
@@ -33,16 +33,22 @@ def rewrite_random_normal(g, ops):
match_results = list(matcher.match_ops(ops))
for match in match_results:
output = match.get_op('output')
- if output.type == 'Add':
+ input2 = match.get_op('input2')
+ is_output = False
+ for output_name in g.outputs:
+ # neither input2 nor output may be a graph output.
+ if input2.name in output_name or output.name in output_name:
+ is_output = True
+ break
+ if is_output:
+ continue
+ if output.type == 'Add' and input2.type == 'Mul':
# pattern 1
mean = output.inputs[1].get_tensor_value()
+ scale = input2.inputs[1].get_tensor_value()
else:
# pattern 2
mean = 0.0
- input2 = match.get_op('input2')
- if input2.type == 'Mul':
- scale = input2.inputs[1].get_tensor_value()
- else:
scale = 1.0
dtype = g.get_dtype(output.output[0])
op_name = utils.make_name("RandomNormal")
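For reference, the algebra behind the rewrite: scaling and shifting a standard normal sample draws from the same distribution as a single `RandomNormal(mean, scale)`:

```python
import numpy as np

rng = np.random.default_rng(0)
std = rng.standard_normal(10000)
sample = std * 2.0 + 5.0                 # pattern 1: Mul then Add
direct = rng.normal(5.0, 2.0, 10000)     # single RandomNormal(mean=5, scale=2)
assert abs(sample.mean() - direct.mean()) < 0.1
assert abs(sample.std() - direct.std()) < 0.1
```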
diff --git a/tf2onnx/rewriter/rnn_utils.py b/tf2onnx/rewriter/rnn_utils.py
index 4e3912004..b94ef3ffb 100644
--- a/tf2onnx/rewriter/rnn_utils.py
+++ b/tf2onnx/rewriter/rnn_utils.py
@@ -30,6 +30,25 @@ class REWRITER_RESULT(Enum):
# TensorFlow LSTMCell/BasicLSTMCell and Keras LSTM computation graph matching
+def insert_activation(activation, name="", inputs=None):
+ inputs = inputs if inputs else [] # default to [] without a mutable default argument
+ if activation == "hard_sigmoid":
+ return OpTypePattern("Maximum", inputs=[
+ OpTypePattern("Minimum", inputs=[
+ OpTypePattern("Add|AddV2", inputs=[
+ OpTypePattern("Mul", inputs=[
+ *inputs,
+ OpTypePattern("*") # mul(x, 0.2)
+ ]), OpTypePattern("*") # add(x, 0.5)
+ ]), OpTypePattern("*") # minimum(x, 1)
+ ]), OpTypePattern("*") # maximum(x, 0)
+ ])
+ # Additional activation patterns can be added when needed:
+ # https://www.tensorflow.org/api_docs/python/tf/keras/activations
+ # otherwise, use default activations
+ return OpTypePattern("Tanh|Relu|Sigmoid", name=name, inputs=inputs)
+
+
def make_lstm_xc_pattern(enter_or_id="Enter", from_keras=False, use_bias=False):
if from_keras:
lstm_xh_pattern = OpTypePattern("Add|AddV2", allow_reorder=False, inputs=[
@@ -63,7 +82,8 @@ def make_lstm_xc_pattern(enter_or_id="Enter", from_keras=False, use_bias=False):
])
-def make_lstm_pattern(enter_or_id="Enter", from_keras=False, use_bias=False):
+def make_lstm_pattern(enter_or_id="Enter", from_keras=False, use_bias=False,
+ activation="", recurrent_activation=""):
# split (Xt*(W[ifco]^T) + Ht-1*(R[ifco]^T)) on 'Const' axis
lstm_xc_pattern = OpTypePattern('Split', inputs=[
OpTypePattern("Const"),
@@ -77,23 +97,21 @@ def make_lstm_pattern(enter_or_id="Enter", from_keras=False, use_bias=False):
OpTypePattern("*", name="ft_bias"),
])
- activation = "Tanh|Relu|Sigmoid"
- recurrent_activation = "Tanh|Relu|Sigmoid"
-
- return OpTypePattern("Mul", name='ht', inputs=[
- OpTypePattern(recurrent_activation, name="ot", inputs=[lstm_xc_pattern]),
- OpTypePattern(activation, name="ct'", inputs=[
- OpTypePattern("Add|AddV2", name="ct", inputs=[
- OpTypePattern("Mul", name="ct_identity_consumer", inputs=[
- OpTypePattern(recurrent_activation, name="ft", inputs=[lstm_fb_pattern]),
- OpTypePattern("*", name="c"),
- ]),
- OpTypePattern("Mul", inputs=[
- OpTypePattern(recurrent_activation, name="it", inputs=[lstm_xc_pattern]),
- OpTypePattern(activation, name="gt", inputs=[lstm_xc_pattern]),
- ]),
- ]),
+ # cell state
+ lstm_ct_pattern = OpTypePattern("Add|AddV2", name="ct", inputs=[
+ OpTypePattern("Mul", name="ct_identity_consumer", inputs=[
+ insert_activation(recurrent_activation, name="ft", inputs=[lstm_fb_pattern]),
+ OpTypePattern("*", name="c"),
]),
+ OpTypePattern("Mul", inputs=[
+ insert_activation(recurrent_activation, name="it", inputs=[lstm_xc_pattern]),
+ insert_activation(activation, name="gt", inputs=[lstm_xc_pattern]),
+ ]),
+ ])
+
+ return OpTypePattern("Mul", name="ht", inputs=[
+ insert_activation(recurrent_activation, name="ot", inputs=[lstm_xc_pattern]),
+ insert_activation(activation, name="ct'", inputs=[lstm_ct_pattern]),
])
lstmcell_pattern = make_lstm_pattern()
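`insert_activation` matches Keras's `hard_sigmoid` as the literal op chain annotated in the comments above; numerically that chain is:

```python
import numpy as np

def hard_sigmoid(x):
    # Maximum(Minimum(Add(Mul(x, 0.2), 0.5), 1), 0) -- the op chain the pattern matches
    return np.maximum(np.minimum(x * 0.2 + 0.5, 1.0), 0.0)

assert hard_sigmoid(np.array([-5.0, 0.0, 5.0])).tolist() == [0.0, 0.5, 1.0]
```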
diff --git a/tf2onnx/shape_inference.py b/tf2onnx/shape_inference.py
index 9a28975b5..853d4835c 100644
--- a/tf2onnx/shape_inference.py
+++ b/tf2onnx/shape_inference.py
@@ -6,9 +6,9 @@
"""
import logging
-from distutils.version import LooseVersion
from collections import defaultdict
import numpy as np
+from packaging.version import Version
from tf2onnx import utils
from tf2onnx.tf_utils import get_tf_tensor_shape, get_tf_const_value, get_tf_shape_attr, get_tf_version
from tf2onnx.tf_loader import tf_reload_graph
@@ -32,7 +32,7 @@ def infer_shape(tf_graph, shape_override):
op_outputs_with_none_shape = check_shape_for_tf_graph(tf_graph)
if op_outputs_with_none_shape:
- if get_tf_version() > LooseVersion("1.5.0"):
+ if get_tf_version() > Version("1.5.0"):
for op, outs in op_outputs_with_none_shape.items():
logger.warning(
"Cannot infer shape for %s: %s",
diff --git a/tf2onnx/symbolic_executor.py b/tf2onnx/symbolic_executor.py
index 567147218..e8c8a5e6e 100644
--- a/tf2onnx/symbolic_executor.py
+++ b/tf2onnx/symbolic_executor.py
@@ -136,7 +136,7 @@ def compute_squeeze_unsqueeze(self, node, feed_dict):
def compute_cast(self, node, feed_dict):
inp = feed_dict[node.input[0]]
- if inp.dtype == np.object:
+ if inp.dtype == object:
return [inp]
np_dtype = utils.ONNX_TO_NUMPY_DTYPE[node.get_attr("to").i]
return [inp.astype(np_dtype)]
@@ -181,7 +181,7 @@ def compute_concat(self, node, feed_dict):
def compute_gather(self, node, feed_dict):
data = feed_dict[node.input[0]]
indices = feed_dict[node.input[1]]
- if indices.dtype == np.object:
+ if indices.dtype == object:
raise SymbolicExecutionException("Gather requires non-symbolic indices")
axis = node.get_attr_value("axis", 0)
return [np.take(data, indices, axis=axis)]
diff --git a/tf2onnx/tf_loader.py b/tf2onnx/tf_loader.py
index 22d909b4f..d9d72a8dc 100644
--- a/tf2onnx/tf_loader.py
+++ b/tf2onnx/tf_loader.py
@@ -5,7 +5,7 @@
import logging
import uuid
-from distutils.version import LooseVersion
+from packaging.version import Version
import tensorflow as tf
import numpy as np
@@ -75,7 +75,7 @@ def not_implemented_tf_placeholder(*args, **kwargs):
tf_placeholder = tf.compat.v1.placeholder
tf_placeholder_with_default = tf.compat.v1.placeholder_with_default
extract_sub_graph = tf.compat.v1.graph_util.extract_sub_graph
-elif LooseVersion(tf.__version__) >= "1.13":
+elif Version(tf.__version__) >= Version("1.13"):
# 1.13 introduced the compat namespace
tf_reset_default_graph = tf.compat.v1.reset_default_graph
tf_global_variables = tf.compat.v1.global_variables
@@ -162,7 +162,7 @@ def make_tensor_proto_wrapped(values, dtype=None, shape=None, verify_shape=False
try:
function_converter = _FunctionConverterData
- if LooseVersion(tf.__version__) >= "2.6.0":
+ if Version(tf.__version__) >= Version("2.6.0"):
from tensorflow.python.eager import context
from tensorflow.python.framework.convert_to_constants import _FunctionConverterDataInEager, \
_FunctionConverterDataInGraph
@@ -267,7 +267,7 @@ def from_function(func, input_names, output_names, large_model=False):
return convert_variables_to_constants_large_model(func)
try:
- if get_tf_version() < LooseVersion("2.2"):
+ if get_tf_version() < Version("2.2"):
frozen_func = convert_variables_to_constants_v2(func, lower_control_flow=False)
else:
frozen_func = convert_variables_to_constants_v2(func, lower_control_flow=False, aggressive_inlining=True)
@@ -687,7 +687,11 @@ def tf_optimize_grappler(input_names, output_names, graph_def):
'constfold', 'function'
]
- if LooseVersion(tf.__version__) >= "2.5":
+ if is_tf2():
+ # add for tf2.x lstm optimization.
+ rewrite_options.optimizers.append('dependency')
+
+ if Version(tf.__version__) >= Version("2.5"):
# This flag disables folding QDQ nodes around constants in the network (eg: around conv/FC weights)
rewrite_options.experimental_disable_folding_quantization_emulation = True
@@ -710,7 +714,7 @@ def tf_optimize(input_names, output_names, graph_def):
[utils.node_name(i) for i in output_names]
graph_def = extract_sub_graph(graph_def, needed_names)
- want_grappler = is_tf2() or LooseVersion(tf.__version__) >= "1.15"
+ want_grappler = is_tf2() or Version(tf.__version__) >= Version("1.15")
if want_grappler:
graph_def = tf_optimize_grappler(input_names, output_names, graph_def)
else:
@@ -730,7 +734,7 @@ def tf_optimize(input_names, output_names, graph_def):
def tf_reload_graph(tf_graph):
"""Invoke tensorflow cpp shape inference by reloading graph_def."""
# invoke c api if tf version is below 1.8
- if get_tf_version() < LooseVersion("1.8"):
+ if get_tf_version() < Version("1.8"):
logger.debug(
"On TF < 1.8, graph is constructed by python API, "
"which doesn't invoke shape inference, please set "
@@ -771,8 +775,8 @@ def toposort(data):
try:
func = function_def_to_graph(fdef, input_shapes=input_shapes)
except: # pylint: disable=bare-except
- # if there is a missmatch between caller and function use the functions shape
- logger.warning("shape missmatch between caller and function: %s", k)
+ # if there is a mismatch between caller and function use the function's shape
+ logger.warning("shape mismatch between caller and function: %s", k)
func = function_def_to_graph(fdef)
_FUNCTIONS[k] = func
_, _, _, _, _, tfunctions = tflist_to_onnx(func, {})
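The `LooseVersion` → `Version` migration across these files anticipates the removal of `distutils` (deprecated in Python 3.10, removed in 3.12); comparisons behave the same, with proper pre-release ordering:

```python
from packaging.version import Version

assert Version("2.10.0") > Version("2.9.1")    # numeric, not lexicographic
assert Version("2.6.0rc1") < Version("2.6.0")  # pre-releases sort before releases
```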
diff --git a/tf2onnx/tf_utils.py b/tf2onnx/tf_utils.py
index 5243b3a52..1f59adae4 100644
--- a/tf2onnx/tf_utils.py
+++ b/tf2onnx/tf_utils.py
@@ -6,7 +6,7 @@
"""
import collections
-from distutils.version import LooseVersion
+from packaging.version import Version
import numpy as np
import tensorflow as tf
@@ -50,15 +50,15 @@
def tf_to_onnx_tensor(tensor, name=""):
"""Convert tensorflow tensor to onnx tensor."""
np_data = get_tf_tensor_data(tensor)
- if np_data.dtype == np.object:
+ if np_data.dtype == object:
# assume np_data is string, numpy_helper.from_array accepts ndarray,
# in which each item is of str while the whole dtype is of object.
try:
# Faster but fails on Unicode
- np_data = np_data.astype(np.str).astype(np.object)
+ np_data = np_data.astype(np.str).astype(object)
except UnicodeDecodeError:
decode = np.vectorize(lambda x: x.decode('UTF-8'))
- np_data = decode(np_data).astype(np.object)
+ np_data = decode(np_data).astype(object)
except: # pylint: disable=bare-except
raise RuntimeError("Not support type: {}".format(type(np_data.flat[0])))
return numpy_helper.from_array(np_data, name=name)
@@ -121,7 +121,7 @@ def get_tf_node_attr(node, name):
def get_tf_version():
- return LooseVersion(tf.__version__)
+ return Version(tf.__version__)
def compress_graph_def(graph_def):
"""
diff --git a/tf2onnx/tflite_handlers/tfl_direct.py b/tf2onnx/tflite_handlers/tfl_direct.py
index d38aa8227..1d10729a9 100644
--- a/tf2onnx/tflite_handlers/tfl_direct.py
+++ b/tf2onnx/tflite_handlers/tfl_direct.py
@@ -12,6 +12,7 @@
@tfl_op("TFL_ABS", tf_op="Abs")
+@tfl_op("TFL_BATCH_MATMUL", tf_op="BatchMatMul")
@tfl_op("TFL_BROADCAST_TO", tf_op="BroadcastTo")
@tfl_op("TFL_CEIL", tf_op="Ceil")
@tfl_op("TFL_COS", tf_op="Cos")
@@ -30,6 +31,7 @@
@tfl_op("TFL_LOGICAL_AND", tf_op="LogicalAnd")
@tfl_op("TFL_LOGICAL_NOT", tf_op="LogicalNot")
@tfl_op("TFL_LOGICAL_OR", tf_op="LogicalOr")
+@tfl_op("TFL_MATMUL", tf_op="MatMul")
@tfl_op("TFL_MATRIX_DIAG", tf_op="MatrixDiag")
@tfl_op("TFL_MATRIX_SET_DIAG", tf_op="MatrixSetDiag")
@tfl_op("TFL_MAXIMUM", tf_op="Maximum")
@@ -86,6 +88,7 @@
@tfl_op("TFL_RFFT2D", tf_op="RFFT2D")
@tfl_op("TFL_COMPLEX_ABS", tf_op="ComplexAbs")
@tfl_op("TFL_HARD_SWISH", tf_op="HardSwish")
+@tfl_op("TFL_L2_NORMALIZATION", tf_op="L2Normalization")
class TflDirectOp:
@classmethod
def to_tf(cls, ctx, node, **kwargs):
diff --git a/tf2onnx/tflite_utils.py b/tf2onnx/tflite_utils.py
index b5974c850..6e3f2d024 100644
--- a/tf2onnx/tflite_utils.py
+++ b/tf2onnx/tflite_utils.py
@@ -156,9 +156,9 @@ def graphs_from_tflite(tflite_path, input_names=None, output_names=None):
if is_main_g:
# Override IO in main graph
utils.check_io(input_names, output_names, output_shapes.keys())
- if input_names is not None:
+ if input_names:
g_inputs = input_names
- if output_names is not None:
+ if output_names:
g_outputs = output_names
g = Graph(onnx_nodes, output_shapes, dtypes, input_names=g_inputs, output_names=g_outputs,
is_subgraph=not is_main_g, graph_name=graph_name)
@@ -271,7 +271,7 @@ def read_int(offset):
string_list = []
for i in range(count):
string_list.append(buffer_bytes[offset_list[i]:offset_list[i+1]].decode("utf-8"))
- return numpy_helper.from_array(np.array(string_list, dtype=np.object).reshape(shape))
+ return numpy_helper.from_array(np.array(string_list, dtype=object).reshape(shape))
def op_has_scalar_output(input_shapes, optype, attr):
diff --git a/tf2onnx/tfonnx.py b/tf2onnx/tfonnx.py
index 1a351cfcb..c2c881e77 100644
--- a/tf2onnx/tfonnx.py
+++ b/tf2onnx/tfonnx.py
@@ -329,6 +329,29 @@ def transpose_inputs(ctx, inputs_as_nchw):
ops.append(node)
ctx.reset_nodes(ops)
+def transpose_outputs(ctx, outputs_as_nchw):
+ """Insert a transpose from NHWC to NCHW on model output on users request."""
+ ops = []
+ for node in ctx.get_nodes():
+ for output_name in node.output:
+ if output_name in outputs_as_nchw:
+ shape = ctx.get_shape(output_name)
+ if len(shape) != len(constants.NHWC_TO_NCHW):
+ logger.warning("transpose_output for %s: shape must be rank 4, ignored" % output_name)
+ ops.append(node)
+ continue
+ # insert transpose
+ op_name = utils.make_name(node.name)
+ transpose = ctx.insert_new_node_on_output("Transpose", node.input[0], name=op_name)
+ transpose.set_attr("perm", constants.NHWC_TO_NCHW)
+ ctx.copy_shape(node.output[0], transpose.output[0])
+ ctx.set_shape(transpose.output[0], np.array(shape)[constants.NHWC_TO_NCHW])
+ ctx.set_shape(output_name, np.array(shape)[constants.NHWC_TO_NCHW])
+ ops.append(transpose)
+ ops.append(node)
+ continue
+ ops.append(node)
+ ctx.reset_nodes(ops)
def topological_sort(g, continue_on_error):
ops = g.get_nodes()
@@ -376,7 +399,7 @@ def run_rewriters(g, funcs, continue_on_error):
def process_tf_graph(tf_graph, continue_on_error=False, verbose=False, target=None,
opset=None, custom_op_handlers=None, custom_rewriter=None,
- extra_opset=None, shape_override=None, inputs_as_nchw=None,
+ extra_opset=None, shape_override=None, inputs_as_nchw=None, outputs_as_nchw=None,
input_names=None, output_names=None, ignore_default=None, use_default=None,
is_subgraph=False, const_node_values=None, tensors_to_rename=None,
initialized_tables=None, tflite_path=None, dequantize=False, tfjs_path=None):
@@ -391,7 +414,8 @@ def process_tf_graph(tf_graph, continue_on_error=False, verbose=False, target=No
custom_rewriter: list of custom graph rewriters
extra_opset: list of extra opset's, for example the opset's used by custom ops
shape_override: dict with inputs that override the shapes given by tensorflow
- inputs_as_nchw: transpose inputs in list from nchw to nhwc
+ inputs_as_nchw: transpose inputs in list from nhwc to nchw
+ outputs_as_nchw: transpose outputs in list from nhwc to nchw
input_names: list of input node names in graph, input name format as node_name:port_id. Optional.
output_names: list of output node names in graph, format is node_name:port_id. Optional for tflite.
ignore_default: list of node names of PlaceholderWithDefault ops to change into Placeholder ops
@@ -421,6 +445,8 @@ def process_tf_graph(tf_graph, continue_on_error=False, verbose=False, target=No
clear_functions()
if inputs_as_nchw is None:
inputs_as_nchw = []
+ if outputs_as_nchw is None:
+ outputs_as_nchw = []
is_tflite = False
if tflite_path is not None:
@@ -435,8 +461,8 @@ def process_tf_graph(tf_graph, continue_on_error=False, verbose=False, target=No
for g in [main_g] + subgraphs:
g.set_config(target, opset, extra_opset)
- g = process_graphs(main_g, subgraphs, custom_op_handlers, inputs_as_nchw, continue_on_error, custom_rewriter,
- initialized_tables, tensors_to_rename, is_tflite, dequantize)
+ g = process_graphs(main_g, subgraphs, custom_op_handlers, inputs_as_nchw, outputs_as_nchw, continue_on_error,
+ custom_rewriter, initialized_tables, tensors_to_rename, is_tflite, dequantize)
return g
@@ -476,24 +502,23 @@ def graphs_from_tf(tf_graph, input_names, output_names, shape_override=None, con
return main_g, subgraphs
-def process_graphs(main_g, subgraphs, custom_op_handlers, inputs_as_nchw, continue_on_error, custom_rewriter,
- initialized_tables, tensors_to_rename, is_tflite=False, dequantize=False):
-
+def process_graphs(main_g, subgraphs, custom_op_handlers, inputs_as_nchw, outputs_as_nchw, continue_on_error,
+ custom_rewriter, initialized_tables, tensors_to_rename, is_tflite=False, dequantize=False):
if tensors_to_rename is not None:
main_g.rename_tensors(tensors_to_rename)
inputs_as_nchw = [tensors_to_rename.get(t, t) for t in inputs_as_nchw]
+ outputs_as_nchw = [tensors_to_rename.get(t, t) for t in outputs_as_nchw]
for g in subgraphs:
- fg = process_parsed_graph(g, custom_op_handlers, inputs_as_nchw, continue_on_error, custom_rewriter,
- initialized_tables, is_tflite, dequantize)
+ fg = process_parsed_graph(g, custom_op_handlers, inputs_as_nchw, outputs_as_nchw, continue_on_error,
+ custom_rewriter, initialized_tables, is_tflite, dequantize)
set_function(fg.graph_name, fg)
- g = process_parsed_graph(main_g, custom_op_handlers, inputs_as_nchw, continue_on_error, custom_rewriter,
- initialized_tables, is_tflite,
- dequantize)
+ g = process_parsed_graph(main_g, custom_op_handlers, inputs_as_nchw, outputs_as_nchw, continue_on_error,
+ custom_rewriter, initialized_tables, is_tflite, dequantize)
return g
-def process_parsed_graph(g, custom_op_handlers, inputs_as_nchw, continue_on_error, custom_rewriter,
+def process_parsed_graph(g, custom_op_handlers, inputs_as_nchw, outputs_as_nchw, continue_on_error, custom_rewriter,
initialized_tables, is_tflite=False, dequantize=False):
op_cnt, attr_cnt = g.dump_node_statistics(include_attrs=True, include_subgraphs=False)
@@ -549,6 +574,8 @@ def compat_handler(ctx, node, **kwargs):
if inputs_as_nchw:
transpose_inputs(g, inputs_as_nchw)
+ if outputs_as_nchw:
+ transpose_outputs(g, outputs_as_nchw)
# pre-processing graph rewrites
# bi-directional re-writer should be placed after single directional re-writer
diff --git a/tf2onnx/utils.py b/tf2onnx/utils.py
index 4d5835cd4..d0c14d19c 100644
--- a/tf2onnx/utils.py
+++ b/tf2onnx/utils.py
@@ -42,10 +42,10 @@
onnx_pb.TensorProto.UINT64: np.uint64,
onnx_pb.TensorProto.INT64: np.int64,
onnx_pb.TensorProto.UINT64: np.uint64,
- onnx_pb.TensorProto.BOOL: np.bool,
+ onnx_pb.TensorProto.BOOL: bool,
onnx_pb.TensorProto.COMPLEX64: np.complex64,
onnx_pb.TensorProto.COMPLEX128: np.complex128,
- onnx_pb.TensorProto.STRING: np.object,
+ onnx_pb.TensorProto.STRING: object,
}
#
@@ -169,7 +169,7 @@ def make_onnx_inputs_outputs(name, elem_type, shape, **kwargs):
if elem_type is None:
elem_type = onnx_pb.TensorProto.UNDEFINED
elif isinstance(elem_type, SeqType):
- return helper.make_sequence_value_info(name, elem_type.dtype, make_onnx_shape(shape), **kwargs)
+ return helper.make_tensor_sequence_value_info(name, elem_type.dtype, make_onnx_shape(shape), **kwargs)
return helper.make_tensor_value_info(
name,
elem_type,
diff --git a/tf2onnx/version.py b/tf2onnx/version.py
index f799c9af3..10a9a9a23 100644
--- a/tf2onnx/version.py
+++ b/tf2onnx/version.py
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
-version = '1.10.0'
-git_version = '0065af6273eac0e911a0e75eab1cdf6be1c9ac7b'
+version = '1.12.0'
+git_version = '087045d4b61e231897f1232de59609d30013b8f5'
diff --git a/tools/gen_doc.py b/tools/gen_doc.py
index 8f86a87b5..fa2497fcb 100644
--- a/tools/gen_doc.py
+++ b/tools/gen_doc.py
@@ -20,7 +20,7 @@
LATEST_OPSET = {
- "": 15, # default domain
+ "": 17, # default domain
"com.microsoft": 1, # microsoft domain
"ai.onnx.contrib": 1, # contrib ops
}