1 parent 1375373 commit 7158985
intermediate_source/torch_export_aoti_python.py
@@ -21,6 +21,8 @@
 # In this tutorial, you will learn an end-to-end example of how to use AOTInductor for Python runtime.
 # We will look at how to use :func:`torch._export.aot_compile` to generate a shared library.
 # Additionally, we will examine how to execute the shared library in the Python runtime using :func:`torch._export.aot_load`.
+# You will learn about the speed-up in first-inference time gained by using AOTInductor, especially with
+# ``max-autotune`` mode, which can take some time to execute.
 #
 # **Contents**