
Commit 310949e

Replace the description exporting to conversion (#1917)
* Replace the description exporting to conversion

Signed-off-by: Deyu Huang <[email protected]>
Co-authored-by: Emma Ning <>

* fix typo

Signed-off-by: Deyu Huang <[email protected]>
1 parent 8c47f22 commit 310949e

File tree

1 file changed: +7 -7 lines


examples/tf_custom_op/custom_op.md

Lines changed: 7 additions & 7 deletions
````diff
@@ -2,21 +2,21 @@
 
 ## Example of converting TensorFlow model with custom op to ONNX
 
-This document describes the ways for exporting TensorFlow model with a custom operator, exporting the operator to ONNX format, and adding the operator to ONNX Runtime for model inference. Tensorflow provides abundant set of operators, and also provides the extending implmentation to register as the new operators. The new custom operators are usually not recognized by tf2onnx conversion and onnxruntime. So the TensorFlow custom ops should be exported using a combination of existing and/or new custom ONNX ops. Once the operator is converted to ONNX format, users can implement and register it with ONNX Runtime for model inference. This document explains the details of this process end-to-end, along with an example.
+This document describes how to convert a TensorFlow model with a custom operator, how to convert the operator to ONNX format, and how to add the operator to ONNX Runtime for model inference. TensorFlow provides an abundant set of operators, and also an extension mechanism for registering new operators. These new custom operators are usually not recognized by tf2onnx or onnxruntime, so TensorFlow custom ops should be converted using a combination of existing and/or new custom ONNX ops. Once the operator is converted to ONNX format, users can implement and register it with ONNX Runtime for model inference. This document explains the details of this process end-to-end, along with an example.
 
 
 ### Required Steps
 
 - [1](#step1) - Adding the Tensorflow custom operator implementation in C++ and registering it with TensorFlow
-- [2](#step2) - Exporting the custom Operator to ONNX, using:
+- [2](#step2) - Converting the custom Operator to ONNX, using:
 <br /> - a combination of existing ONNX ops
 <br /> or
 <br /> - a custom ONNX Operator
 - [3](#step3) - Adding the custom operator implementation and registering it in ONNX Runtime (required only if using a custom ONNX op in step 2)
 
 
 ### Implement the Custom Operator
-Firstly, try to install the TensorFlow latest version (Nighly is better) build refer to [here](https://github.com/tensorflow/tensorflow#install). And then implement the custom operators saving in TensorFlow library format and the file usually ends with `.so`. We have a simple example of `AddOne`, which is adding one for a tensor.
+First, install the latest version of TensorFlow (a nightly build is better); see the build instructions [here](https://github.com/tensorflow/tensorflow#install). Then implement the custom operator and save it in TensorFlow library format, a file that usually ends with `.so`. We have a simple example, `DoubleAndAddOne`, which calculates `2x + 1` for a tensor.
 
 
 #### Define the op interface
````
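The hunk above renames the running example to `DoubleAndAddOne`, which computes `2x + 1` elementwise. The real op is implemented in C++, but as a quick reference for its intended semantics, here is a minimal NumPy sketch (the function name is illustrative, not part of the TensorFlow API):

```python
import numpy as np

def double_and_add_one(x):
    # Reference semantics of the DoubleAndAddOne custom op: y = 2 * x + 1.
    return 2 * x + 1

y = double_and_add_one(np.array([1.0, 2.0, 3.0], dtype=np.float32))
print(y)  # [3. 5. 7.]
```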
````diff
@@ -88,7 +88,7 @@ REGISTER_KERNEL_BUILDER(Name("DoubleAndAddOne")
 ```
 Save below code in C++ `.cc` file,
 
-#### Using C++ compiler to compile the op
+#### Use a C++ compiler to compile the op
 Assuming you have g++ installed, here is the sequence of commands you can use to compile your op into a dynamic library.
 ```
 TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
````
````diff
@@ -99,11 +99,11 @@ After below steps, we can get a TensorFlow custom op library `double_and_add_one
 
 
 ### Convert the Operator to ONNX
-To be able to use this custom ONNX operator for inference, we need to add our custom operator to an inference engine. If the operator can be conbinded with exsiting [ONNX standard operators](https://github.com/onnx/onnx/blob/main/docs/Operators.md). The case will be easier:
+To be able to use this custom ONNX operator for inference, we need to add our custom operator to an inference engine. If the operator can be combined with existing [ONNX standard operators](https://github.com/onnx/onnx/blob/main/docs/Operators.md), the case is easier:
 
-1- using [--load_op_libraries](https://github.com/onnx/tensorflow-onnx#--load_op_libraries) in conversion command or `tf.load_op_library()` method in code to load the TensorFlow custom ops library.
+1- use [--load_op_libraries](https://github.com/onnx/tensorflow-onnx#--load_op_libraries) in the conversion command, or the `tf.load_op_library()` method in code, to load the TensorFlow custom ops library.
 
-2- implement the op handler, registered it with the `@tf_op` decorator. Those handlers will be registered via the decorator on load of the module. [Here](https://github.com/onnx/tensorflow-onnx/tree/main/tf2onnx/onnx_opset) are examples of TensorFlow op hander implementations.
+2- implement the op handler according to the op definition and register it with the `@tf_op` decorator. These handlers are registered via the decorator when the module is loaded. [Here](https://github.com/onnx/tensorflow-onnx/tree/main/tf2onnx/onnx_opset) are examples of TensorFlow op handler implementations, all of which are composed from ONNX ops.
 
 ```
 import numpy as np
````
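Because `2x + 1` maps directly onto the standard ONNX `Mul` and `Add` operators, the handler registered with `@tf_op` only has to rewrite the custom node into that combination. A NumPy sketch of the numeric equivalence the rewrite relies on (the `mul`/`add` functions here are illustrative stand-ins for the ONNX operators, not real APIs):

```python
import numpy as np

# Illustrative stand-ins for the ONNX standard ops Mul and Add.
def mul(a, b):
    return a * b

def add(a, b):
    return a + b

x = np.array([1.0, 2.0, 3.0], dtype=np.float32)

# DoubleAndAddOne(x) expressed as Add(Mul(x, 2), 1).
decomposed = add(mul(x, np.float32(2.0)), np.float32(1.0))
direct = 2 * x + 1

assert np.array_equal(decomposed, direct)
print(decomposed)  # [3. 5. 7.]
```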
