Commit 8d52538: merge a few PRs for 1.6.2

2 parents 9ad16b1 + f5e4ed3

File tree

10 files changed: +139 -138 lines

10 files changed

+139
-138
lines changed

README.md

Lines changed: 8 additions & 7 deletions

@@ -2,16 +2,16 @@
 | Build Type | OS | Python | Tensorflow | Onnx opset | Status |
 | --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1 | 7-11 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |

 ## Supported Versions

 ### ONNX

 tensorflow-onnx will use the ONNX version installed on your system and installs the latest ONNX version if none is found.

-We support opset 6 to 11. By default we use opset 8 for the resulting ONNX graph since most runtimes will support opset 8.
+We support ONNX opset-6 to opset-12. By default we use opset-8 for the resulting ONNX graph since most runtimes will support opset-8.
 Support for future opsets is added as they are released.

 If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 11```.

@@ -20,13 +20,14 @@ If you want the graph to be generated with a specific opset, use ```--opset``` i

 We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

-There is now ```experimental support for tf-2.x```. Basic unit tests are passing as well as control flow.
+There is now ```experimental support for tf-2.x```.
+With the exception of LSTM unit tests, all unit tests are enabled and passing.
 Unit tests that we still need to fix are marked with ```@skip_tf2```.
 GRU/LSTMs are converting but not runnable due to type/shape inference issues at runtime (we are working on that).
-All unit tests are running in eager mode and after execution we take the python function, make it a graph and convert this to onnx.
-If running under tf-2.x we are using the tensorflow V2 controlflow.
+All unit tests run in eager mode. After execution we take the Python function, make it a graph and convert it to ONNX.
+When running under tf-2.x, tf2onnx will use the TensorFlow V2 control flow.

-You can install tf2onnx on top of tf-1.x or tf-2.x and convert tf-1.x or tf-2.x models.
+You can install tf2onnx on top of tf-1.x or tf-2.x.

 ### Python
VERSION_NUMBER

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-1.6.1
+1.6.2

tf2onnx/graph.py

Lines changed: 9 additions & 1 deletion

@@ -301,7 +301,7 @@ def set_tensor_value(self, new_val):
         self.set_attr("value", onnx_tensor)
         # track shapes in _output_shapes
         self._graph_check()
-        self.graph.set_shape(onnx_tensor.name, onnx_tensor.dims)
+        self.graph.set_shape(onnx_tensor.name, list(onnx_tensor.dims))

     def get_body_graphs(self):
         self._graph_check()

@@ -484,6 +484,14 @@ def inputs(self):
             all_inputs.append(n)
         return all_inputs

+    def make_consts(self, values, np_type=np.int64, skip_conversion=False, raw=True):
+        """create list of consts of same type"""
+        consts = []
+        for value in values:
+            np_val = np.array(value).astype(np_type)
+            consts.append(self.make_const(utils.make_name("const"), np_val, skip_conversion, raw))
+        return consts
+
     def make_const(self, name, np_val, skip_conversion=False, raw=True):
         """Make a new constant in the graph.
         Args:
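The new ```make_consts``` helper simply maps ```make_const``` over a list of values, coercing each one to a single numpy dtype and giving it a unique name. A standalone sketch of the same pattern, with hypothetical stand-ins for ```utils.make_name``` and ```Graph.make_const``` (no tf2onnx dependency):

```python
import numpy as np
from itertools import count

_name_counter = count()

def make_name(prefix):
    # simplified stand-in for tf2onnx's utils.make_name: unique name per call
    return "%s__%d" % (prefix, next(_name_counter))

def make_consts(values, np_type=np.int64):
    """Create a list of consts of the same type (mirrors the new helper)."""
    consts = []
    for value in values:
        np_val = np.array(value).astype(np_type)
        # stand-in for self.make_const: record (name, value) instead of a graph node
        consts.append((make_name("const"), np_val))
    return consts

# scalars and lists alike end up as int64 arrays with distinct names
consts = make_consts([0, [1, 2], 7])
```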

tf2onnx/onnx_opset/generator.py

Lines changed: 12 additions & 0 deletions

@@ -194,3 +194,15 @@ def version_8(cls, ctx, node, **kwargs):
         ctx.remove_node(node.name)
         ctx.add_graph_input(output_names[0], type_0, shape_0)
         ctx.add_graph_input(output_names[1], type_1, shape_1)
+
+
+@tf_op("QueueDequeueManyV2")
+class QueueDequeueManyV2:
+    @classmethod
+    def version_8(cls, ctx, node, **kwargs):
+        outputs = node.output
+        shapes = node.output_shapes
+        dtypes = node.output_dtypes
+        ctx.remove_node(node.name)
+        for i, output in enumerate(outputs):
+            ctx.add_graph_input(output, dtypes[i], shapes[i])

tf2onnx/onnx_opset/nn.py

Lines changed: 3 additions & 1 deletion

@@ -248,6 +248,7 @@ def version_1(cls, ctx, node, **kwargs):
         # Note: inputs are reversed from what one would expect.
         conv_kernel_shape(ctx, node, 1)
         input_shape = ctx.get_shape(node.input[2])
+        output_shape_orig = node.output_shapes

         # output_shape is explicitly specified here, in this case pads values are auto generated/calculated.
         if node.inputs[0].is_const():

@@ -285,7 +286,8 @@ def version_1(cls, ctx, node, **kwargs):
         const_one_two = ctx.make_const(utils.make_name(node.name + "_const_one_two"),
                                        np.array([1, 2], dtype=np.int64))
         slice_node = ctx.make_node("Slice",
-                                   [node.output[0], starts.output[0], ends.output[0], const_one_two.output[0]])
+                                   [node.output[0], starts.output[0], ends.output[0], const_one_two.output[0]],
+                                   shapes=output_shape_orig)
         downstream_nodes = ctx.find_output_consumers(node.output[0])
         downstream_nodes.remove(output_shape)
         downstream_nodes.remove(slice_node)
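The fix here captures the op's original output shapes before the rewrite and attaches them to the inserted ```Slice``` node, so downstream shape inference still sees the expected shape. The ```const_one_two``` tensor is the Slice's ```axes``` input, i.e. the crop happens on axes 1 and 2. A numpy sketch of that crop, assuming an NHWC tensor with hypothetical sizes:

```python
import numpy as np

# padded transposed-conv result, NHWC layout (sizes are illustrative)
x = np.zeros((1, 7, 7, 3))

# numpy equivalent of ONNX Slice(starts, ends, axes=[1, 2]):
# crop only the spatial H and W dimensions
starts, ends = (1, 1), (6, 6)
y = x[:, starts[0]:ends[0], starts[1]:ends[1], :]
```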
