
Commit fc1dfc2

Automated tutorials push
1 parent 360f818 commit fc1dfc2

208 files changed (+8887, -10016 lines)


_downloads/0bd6b9a8e47e1d64e4d20ef356a6095d/onnx_registry_tutorial.ipynb

Lines changed: 11 additions & 187 deletions
@@ -54,60 +54,8 @@
 "In this tutorial, we will cover three scenarios that require extending\n",
 "the ONNX registry with custom operators:\n",
 "\n",
-"- Unsupported ATen operators\n",
 "- Custom operators with existing ONNX Runtime support\n",
-"- Custom operators without ONNX Runtime support\n",
-"\n",
-"Unsupported ATen operators\n",
-"==========================\n",
-"\n",
-"Although the ONNX exporter team does their best efforts to support all\n",
-"ATen operators, some of them might not be supported yet. In this\n",
-"section, we will demonstrate how you can add unsupported ATen operators\n",
-"to the ONNX Registry.\n",
-"\n",
-"<div style=\"background-color: #54c7ec; color: #fff; font-weight: 700; padding-left: 10px; padding-top: 5px; padding-bottom: 5px\"><strong>NOTE:</strong></div>\n",
-"\n",
-"<div style=\"background-color: #f3f4f7; padding-left: 10px; padding-top: 10px; padding-bottom: 10px; padding-right: 10px\">\n",
-"\n",
-"<p>The steps to implement unsupported ATen operators are the same to replace the implementation of an existing ATen operator with a custom implementation. Because we don't actually have an unsupported ATen operator to use in this tutorial, we are going to leverage this and replace the implementation of <code>aten::add.Tensor</code> with a custom implementation the same way we would if the operator was not present in the ONNX Registry.</p>\n",
-"\n",
-"</div>\n",
-"\n",
-"When a model cannot be exported to ONNX due to an unsupported operator,\n",
-"the ONNX exporter will show an error message similar to:\n",
-"\n",
-"``` {.python}\n",
-"RuntimeErrorWithDiagnostic: Unsupported FX nodes: {'call_function': ['aten.add.Tensor']}.\n",
-"```\n",
-"\n",
-"The error message indicates that the fully qualified name of unsupported\n",
-"ATen operator is `aten::add.Tensor`. The fully qualified name of an\n",
-"operator is composed of the namespace, operator name, and overload\n",
-"following the format `namespace::operator_name.overload`.\n",
-"\n",
-"To add support for an unsupported ATen operator or to replace the\n",
-"implementation for an existing one, we need:\n",
-"\n",
-"- The fully qualified name of the ATen operator (e.g.\n",
-"  `aten::add.Tensor`). This information is always present in the error\n",
-"  message as show above.\n",
-"- The implementation of the operator using [ONNX\n",
-"  Script](https://github.com/microsoft/onnxscript). ONNX Script is a\n",
-"  prerequisite for this tutorial. Please make sure you have read the\n",
-"  [ONNX Script\n",
-"  tutorial](https://github.com/microsoft/onnxscript/blob/main/docs/tutorial/index.md)\n",
-"  before proceeding.\n",
-"\n",
-"Because `aten::add.Tensor` is already supported by the ONNX Registry, we\n",
-"will demonstrate how to replace it with a custom implementation, but\n",
-"keep in mind that the same steps apply to support new unsupported ATen\n",
-"operators.\n",
-"\n",
-"This is possible because the `OnnxRegistry`{.interpreted-text\n",
-"role=\"class\"} allows users to override an operator registration. We will\n",
-"override the registration of `aten::add.Tensor` with our custom\n",
-"implementation and verify it exists.\n"
+"- Custom operators without ONNX Runtime support\n"
 ]
 },
 {
@@ -121,123 +69,7 @@
 "import torch\n",
 "import onnxruntime\n",
 "import onnxscript\n",
-"from onnxscript import opset18  # opset 18 is the latest (and only) supported version for now\n",
-"\n",
-"class Model(torch.nn.Module):\n",
-"    def forward(self, input_x, input_y):\n",
-"        return torch.ops.aten.add(input_x, input_y)  # generates a aten::add.Tensor node\n",
-"\n",
-"input_add_x = torch.randn(3, 4)\n",
-"input_add_y = torch.randn(3, 4)\n",
-"aten_add_model = Model()\n",
-"\n",
-"\n",
-"# Now we create a ONNX Script function that implements ``aten::add.Tensor``.\n",
-"# The function name (e.g. ``custom_aten_add``) is displayed in the ONNX graph, so we recommend to use intuitive names.\n",
-"custom_aten = onnxscript.values.Opset(domain=\"custom.aten\", version=1)\n",
-"\n",
-"# NOTE: The function signature must match the signature of the unsupported ATen operator.\n",
-"# https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/native_functions.yaml\n",
-"# NOTE: All attributes must be annotated with type hints.\n",
-"@onnxscript.script(custom_aten)\n",
-"def custom_aten_add(input_x, input_y, alpha: float = 1.0):\n",
-"    input_y = opset18.Mul(input_y, alpha)\n",
-"    return opset18.Add(input_x, input_y)\n",
-"\n",
-"\n",
-"# Now we have everything we need to support unsupported ATen operators.\n",
-"# Let's register the ``custom_aten_add`` function to ONNX registry, and export the model to ONNX again.\n",
-"onnx_registry = torch.onnx.OnnxRegistry()\n",
-"onnx_registry.register_op(\n",
-"    namespace=\"aten\", op_name=\"add\", overload=\"Tensor\", function=custom_aten_add\n",
-"    )\n",
-"print(f\"aten::add.Tensor is supported by ONNX registry: \\\n",
-"    {onnx_registry.is_registered_op(namespace='aten', op_name='add', overload='Tensor')}\"\n",
-"    )\n",
-"export_options = torch.onnx.ExportOptions(onnx_registry=onnx_registry)\n",
-"onnx_program = torch.onnx.dynamo_export(\n",
-"    aten_add_model, input_add_x, input_add_y, export_options=export_options\n",
-"    )"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Now let\\'s inspect the model and verify the model has a\n",
-"`custom_aten_add` instead of `aten::add.Tensor`. The graph has one graph\n",
-"node for `custom_aten_add`, and inside of it there are four function\n",
-"nodes, one for each operator, and one for constant attribute.\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {
-"collapsed": false
-},
-"outputs": [],
-"source": [
-"# graph node domain is the custom domain we registered\n",
-"assert onnx_program.model_proto.graph.node[0].domain == \"custom.aten\"\n",
-"assert len(onnx_program.model_proto.graph.node) == 1\n",
-"# graph node name is the function name\n",
-"assert onnx_program.model_proto.graph.node[0].op_type == \"custom_aten_add\"\n",
-"# function node domain is empty because we use standard ONNX operators\n",
-"assert {node.domain for node in onnx_program.model_proto.functions[0].node} == {\"\"}\n",
-"# function node name is the standard ONNX operator name\n",
-"assert {node.op_type for node in onnx_program.model_proto.functions[0].node} == {\"Add\", \"Mul\", \"Constant\"}"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"This is how `custom_aten_add_model` looks in the ONNX graph using\n",
-"Netron:\n",
-"\n",
-"![image](https://pytorch.org/tutorials/_static/img/onnx/custom_aten_add_model.png){.align-center\n",
-"width=\"70.0%\"}\n",
-"\n",
-"Inside the `custom_aten_add` function, we can see the three ONNX nodes\n",
-"we used in the function (`CastLike`, `Add`, and `Mul`), and one\n",
-"`Constant` attribute:\n",
-"\n",
-"![image](https://pytorch.org/tutorials/_static/img/onnx/custom_aten_add_function.png){.align-center\n",
-"width=\"70.0%\"}\n",
-"\n",
-"This was all that we needed to register the new ATen operator into the\n",
-"ONNX Registry. As an additional step, we can use ONNX Runtime to run the\n",
-"model, and compare the results with PyTorch.\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {
-"collapsed": false
-},
-"outputs": [],
-"source": [
-"# Use ONNX Runtime to run the model, and compare the results with PyTorch\n",
-"onnx_program.save(\"./custom_add_model.onnx\")\n",
-"ort_session = onnxruntime.InferenceSession(\n",
-"    \"./custom_add_model.onnx\", providers=['CPUExecutionProvider']\n",
-"    )\n",
-"\n",
-"def to_numpy(tensor):\n",
-"    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()\n",
-"\n",
-"onnx_input = onnx_program.adapt_torch_inputs_to_onnx(input_add_x, input_add_y)\n",
-"onnxruntime_input = {k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)}\n",
-"onnxruntime_outputs = ort_session.run(None, onnxruntime_input)\n",
-"\n",
-"torch_outputs = aten_add_model(input_add_x, input_add_y)\n",
-"torch_outputs = onnx_program.adapt_torch_outputs_to_onnx(torch_outputs)\n",
-"\n",
-"assert len(torch_outputs) == len(onnxruntime_outputs)\n",
-"for torch_output, onnxruntime_output in zip(torch_outputs, onnxruntime_outputs):\n",
-"    torch.testing.assert_close(torch_output, torch.tensor(onnxruntime_output))"
+"from onnxscript import opset18  # opset 18 is the latest (and only) supported version for now"
 ]
 },
 {
@@ -385,12 +217,11 @@
 "def to_numpy(tensor):\n",
 "    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()\n",
 "\n",
-"onnx_input = onnx_program.adapt_torch_inputs_to_onnx(input_gelu_x)\n",
+"onnx_input = [input_gelu_x]\n",
 "onnxruntime_input = {k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)}\n",
-"onnxruntime_outputs = ort_session.run(None, onnxruntime_input)\n",
+"onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]\n",
 "\n",
 "torch_outputs = aten_gelu_model(input_gelu_x)\n",
-"torch_outputs = onnx_program.adapt_torch_outputs_to_onnx(torch_outputs)\n",
 "\n",
 "assert len(torch_outputs) == len(onnxruntime_outputs)\n",
 "for torch_output, onnxruntime_output in zip(torch_outputs, onnxruntime_outputs):\n",
@@ -525,27 +356,20 @@
 "outputs": [],
 "source": [
 "assert onnx_program.model_proto.graph.node[0].domain == \"test.customop\"\n",
-"assert onnx_program.model_proto.graph.node[0].op_type == \"custom_addandround\"\n",
-"assert onnx_program.model_proto.functions[0].node[0].domain == \"test.customop\"\n",
-"assert onnx_program.model_proto.functions[0].node[0].op_type == \"CustomOpOne\"\n",
-"assert onnx_program.model_proto.functions[0].node[1].domain == \"test.customop\"\n",
-"assert onnx_program.model_proto.functions[0].node[1].op_type == \"CustomOpTwo\""
+"assert onnx_program.model_proto.graph.node[0].op_type == \"CustomOpOne\"\n",
+"assert onnx_program.model_proto.graph.node[1].domain == \"test.customop\"\n",
+"assert onnx_program.model_proto.graph.node[1].op_type == \"CustomOpTwo\""
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This is how `custom_addandround_model` ONNX graph looks using Netron:\n",
-"\n",
-"![image](https://pytorch.org/tutorials/_static/img/onnx/custom_addandround_model.png){.align-center\n",
-"width=\"70.0%\"}\n",
-"\n",
-"Inside the `custom_addandround` function, we can see the two custom\n",
-"operators we used in the function (`CustomOpOne`, and `CustomOpTwo`),\n",
-"and they are from module `test.customop`:\n",
+"This is how `custom_addandround_model` ONNX graph looks using Netron. We\n",
+"can see the two custom operators we used in the function (`CustomOpOne`,\n",
+"and `CustomOpTwo`), and they are from module `test.customop`:\n",
 "\n",
-"![image](https://pytorch.org/tutorials/_static/img/onnx/custom_addandround_function.png)\n",
+"![image](https://pytorch.org/tutorials/_static/img/onnx/custom_addandround.png)\n",
 "\n",
 "Custom Ops Registration in ONNX Runtime\n",
 "=======================================\n",
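The removed tutorial text above describes the fully qualified name of an ATen operator, which follows the format `namespace::operator_name.overload` (e.g. `aten::add.Tensor`). That naming convention can be sketched with a small stdlib-only parser; the helper `parse_qualified_name` is our own illustrative name, not part of the `torch.onnx` API:

```python
# Minimal sketch: split a fully qualified ATen operator name of the form
# ``namespace::operator_name.overload`` into its three parts.
# ``parse_qualified_name`` is a hypothetical helper, not a torch.onnx function.

def parse_qualified_name(qualified: str):
    namespace, _, rest = qualified.partition("::")
    op_name, _, overload = rest.partition(".")
    # ``overload`` may be absent for the default overload (e.g. "aten::relu")
    return namespace, op_name, overload or "default"

print(parse_qualified_name("aten::add.Tensor"))  # -> ('aten', 'add', 'Tensor')
```

These are the same three pieces (`namespace`, `op_name`, `overload`) that `OnnxRegistry.register_op` takes as keyword arguments in the diff above.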

_downloads/1c960eb430ba8694b1655bb03904dac2/export_simple_model_to_onnx_tutorial.py

Lines changed: 5 additions & 5 deletions
@@ -7,7 +7,7 @@
 Export a PyTorch model to ONNX
 ==============================
 
-**Author**: `Thiago Crepaldi <https://github.com/thiagocrepaldi>`_
+**Author**: `Ti-Tai Wang <https://github.com/titaiwangms>`_ and `Xavier Dupré <https://github.com/xadupre>`_
 
 .. note::
     As of PyTorch 2.1, there are two versions of ONNX Exporter.
@@ -127,7 +127,7 @@ def forward(self, x):
 # Once Netron is open, we can drag and drop our ``my_image_classifier.onnx`` file into the browser or select it after
 # clicking the **Open model** button.
 #
-# .. image:: ../../_static/img/onnx/image_clossifier_onnx_modelon_netron_web_ui.png
+# .. image:: ../../_static/img/onnx/image_classifier_onnx_model_on_netron_web_ui.png
 #    :width: 50%
 #
 #
@@ -155,7 +155,7 @@ def forward(self, x):
 
 import onnxruntime
 
-onnx_input = onnx_program.adapt_torch_inputs_to_onnx(torch_input)
+onnx_input = [torch_input]
 print(f"Input length: {len(onnx_input)}")
 print(f"Sample input: {onnx_input}")
 
@@ -166,7 +166,8 @@ def to_numpy(tensor):
 
 onnxruntime_input = {k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)}
 
-onnxruntime_outputs = ort_session.run(None, onnxruntime_input)
+# onnxruntime returns a list of outputs
+onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]
 
 ####################################################################
 # 7. Compare the PyTorch results with the ones from the ONNX Runtime
@@ -179,7 +180,6 @@ def to_numpy(tensor):
 # Before comparing the results, we need to convert the PyTorch's output to match ONNX's format.
 
 torch_outputs = torch_model(torch_input)
-torch_outputs = onnx_program.adapt_torch_outputs_to_onnx(torch_outputs)
 
 assert len(torch_outputs) == len(onnxruntime_outputs)
 for torch_output, onnxruntime_output in zip(torch_outputs, onnxruntime_outputs):
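The recurring change in this file replaces `adapt_torch_inputs_to_onnx` with a plain list of inputs, then binds that list to the session's named inputs via `zip(ort_session.get_inputs(), onnx_input)`. A stdlib-only sketch of that binding pattern, using stand-in classes (`FakeInput`, `FakeSession` are ours) so it can run without `onnxruntime` installed:

```python
# Sketch of the input-binding pattern used in the updated tutorials:
# an ONNX Runtime session exposes named inputs, and the feed dict maps each
# input name to one positional value. ``FakeInput``/``FakeSession`` are
# stand-ins for onnxruntime objects, used purely for illustration.
from dataclasses import dataclass

@dataclass
class FakeInput:
    name: str

class FakeSession:
    def get_inputs(self):
        # a real session derives these names from the ONNX graph
        return [FakeInput("input_x"), FakeInput("input_y")]

ort_session = FakeSession()
onnx_input = [1.0, 2.0]  # stands in for a list of torch tensors

# Same dict comprehension as in the diff above (minus the to_numpy conversion):
onnxruntime_input = {k.name: v for k, v in zip(ort_session.get_inputs(), onnx_input)}
print(onnxruntime_input)  # -> {'input_x': 1.0, 'input_y': 2.0}
```

With a real session, `ort_session.run(None, onnxruntime_input)` then returns a list of outputs, which is why the diff also appends `[0]` to take the first one.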

_downloads/33f8140bedc02273a55c752fe79058e5/intro_onnx.ipynb

Lines changed: 2 additions & 1 deletion
@@ -24,7 +24,8 @@
 "Introduction to ONNX\n",
 "====================\n",
 "\n",
-"Authors: [Thiago Crepaldi](https://github.com/thiagocrepaldi),\n",
+"Authors: [Ti-Tai Wang](https://github.com/titaiwangms) and [Xavier\n",
+"Dupré](https://github.com/xadupre)\n",
 "\n",
 "[Open Neural Network eXchange (ONNX)](https://onnx.ai/) is an open\n",
 "standard format for representing machine learning models. The\n",

_downloads/8dd55b9d6d32d45fae2642c7ffbf454e/export_simple_model_to_onnx_tutorial.ipynb

Lines changed: 6 additions & 5 deletions
@@ -24,7 +24,8 @@
 "Export a PyTorch model to ONNX\n",
 "==============================\n",
 "\n",
-"**Author**: [Thiago Crepaldi](https://github.com/thiagocrepaldi)\n",
+"**Author**: [Ti-Tai Wang](https://github.com/titaiwangms) and [Xavier\n",
+"Dupré](https://github.com/xadupre)\n",
 "\n",
 "<div style=\"background-color: #54c7ec; color: #fff; font-weight: 700; padding-left: 10px; padding-top: 5px; padding-bottom: 5px\"><strong>NOTE:</strong></div>\n",
 "\n",
@@ -212,7 +213,7 @@
 "file into the browser or select it after clicking the **Open model**\n",
 "button.\n",
 "\n",
-"![image](https://pytorch.org/tutorials/_static/img/onnx/image_clossifier_onnx_modelon_netron_web_ui.png){width=\"50.0%\"}\n",
+"![image](https://pytorch.org/tutorials/_static/img/onnx/image_classifier_onnx_model_on_netron_web_ui.png){width=\"50.0%\"}\n",
 "\n",
 "And that is it! We have successfully exported our PyTorch model to ONNX\n",
 "format and visualized it with Netron.\n",
@@ -255,7 +256,7 @@
 "source": [
 "import onnxruntime\n",
 "\n",
-"onnx_input = onnx_program.adapt_torch_inputs_to_onnx(torch_input)\n",
+"onnx_input = [torch_input]\n",
 "print(f\"Input length: {len(onnx_input)}\")\n",
 "print(f\"Sample input: {onnx_input}\")\n",
 "\n",
@@ -266,7 +267,8 @@
 "\n",
 "onnxruntime_input = {k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)}\n",
 "\n",
-"onnxruntime_outputs = ort_session.run(None, onnxruntime_input)"
+"# onnxruntime returns a list of outputs\n",
+"onnxruntime_outputs = ort_session.run(None, onnxruntime_input)[0]"
 ]
 },
 {
@@ -294,7 +296,6 @@
 "outputs": [],
 "source": [
 "torch_outputs = torch_model(torch_input)\n",
-"torch_outputs = onnx_program.adapt_torch_outputs_to_onnx(torch_outputs)\n",
 "\n",
 "assert len(torch_outputs) == len(onnxruntime_outputs)\n",
 "for torch_output, onnxruntime_output in zip(torch_outputs, onnxruntime_outputs):\n",
