`examples/tf_custom_op/custom_op.md`
## Example of converting TensorFlow model with custom op to ONNX
This document describes how to convert a TensorFlow model with a custom operator: converting the operator to ONNX format and adding it to ONNX Runtime for model inference. TensorFlow provides an abundant set of operators, and also a mechanism for extending it with newly registered operators. Such custom operators are usually not recognized by tf2onnx or ONNX Runtime, so a TensorFlow custom op should be converted using a combination of existing and/or new custom ONNX ops. Once the operator is converted to ONNX format, users can implement and register it with ONNX Runtime for model inference. This document explains the details of this process end-to-end, along with an example.
### Required Steps
- [1](#step1) - Adding the TensorFlow custom operator implementation in C++ and registering it with TensorFlow
- [2](#step2) - Converting the custom operator to ONNX, using:
<br /> - a combination of existing ONNX ops
<br /> or
<br /> - a custom ONNX Operator
- [3](#step3) - Adding the custom operator implementation and registering it in ONNX Runtime (required only if using a custom ONNX op in step 2)
### Implement the Custom Operator
First, install the latest version of TensorFlow (a nightly build is preferable); refer to the install instructions [here](https://github.com/tensorflow/tensorflow#install). Then implement the custom operator and save it in TensorFlow library format; the file name usually ends with `.so`. We have a simple example, `DoubleAndAddOne`, which calculates `2x + 1` for a tensor.
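As a quick reference for the op's semantics, here is what `DoubleAndAddOne` computes, written in plain Python rather than as the C++ kernel:

```python
def double_and_add_one(xs):
    """Reference semantics of the DoubleAndAddOne custom op: y = 2*x + 1, elementwise."""
    return [2.0 * x + 1.0 for x in xs]

print(double_and_add_one([1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]
```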
Assuming you have g++ installed, here is the sequence of commands you can use to compile your op into a dynamic library.
```
TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
# The source file name is illustrative; substitute your op's .cc file.
g++ -std=c++14 -shared double_and_add_one.cc -o double_and_add_one.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2
```

After the steps above, we get a TensorFlow custom op library `double_and_add_one.so`.
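Once compiled, the library can be smoke-tested from Python before conversion. This is a sketch: it assumes the compiled `double_and_add_one.so` is in the working directory and that TensorFlow is installed; TensorFlow exposes the `DoubleAndAddOne` op under its snake_case name.

```
import tensorflow as tf

# Load the custom op library; the op becomes callable from Python.
lib = tf.load_op_library("./double_and_add_one.so")
print(lib.double_and_add_one([1.0, 2.0, 3.0]))  # expect values [3., 5., 7.]
```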
### Convert the Operator to ONNX
To use this custom operator for inference, we need to add it to an inference engine. If the operator can be composed from existing [ONNX standard operators](https://github.com/onnx/onnx/blob/main/docs/Operators.md), the case is easier:
1 - Use [--load_op_libraries](https://github.com/onnx/tensorflow-onnx#--load_op_libraries) in the conversion command, or the `tf.load_op_library()` method in code, to load the TensorFlow custom ops library.
2 - Implement the op handler according to the op definition and register it with the `@tf_op` decorator. Handlers are registered via the decorator when the module is loaded. [Here](https://github.com/onnx/tensorflow-onnx/tree/main/tf2onnx/onnx_opset) are examples of TensorFlow op handler implementations, all of which are composed from standard ONNX ops.
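For the `DoubleAndAddOne` example, such a handler might look like the sketch below. It assumes tf2onnx's internal graph API (`ctx.make_const`, `ctx.make_node`, `ctx.remove_node`); treat the exact names and signatures as illustrative, and see the linked handler examples for the authoritative patterns.

```
import numpy as np
from tf2onnx.handler import tf_op

@tf_op("DoubleAndAddOne")
class DoubleAndAddOne:
    @classmethod
    def version_1(cls, ctx, node, **kwargs):
        # Rewrite the custom op with standard ONNX ops: y = Add(Mul(x, 2), 1).
        two = ctx.make_const(node.name + "_two", np.array(2.0, dtype=np.float32))
        one = ctx.make_const(node.name + "_one", np.array(1.0, dtype=np.float32))
        mul = ctx.make_node("Mul", [node.input[0], two.output[0]])
        ctx.remove_node(node.name)
        ctx.make_node("Add", [mul.output[0], one.output[0]],
                      name=node.name, outputs=node.output)
```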