Commit b268a3c

fixed formatting
1 parent 45df5d0 commit b268a3c

File tree: 1 file changed, +4 -4 lines changed


recipes_source/torch_export_aoti_python.py

Lines changed: 4 additions & 4 deletions
@@ -2,7 +2,7 @@
 
 """
 (Beta) ``torch.export`` AOTInductor Tutorial for Python runtime
-===================================================
+===============================================================
 **Author:** Ankith Gunapal, Bin Bao, Angela Yi
 """
 
@@ -46,7 +46,7 @@
 
 ######################################################################
 # Model Compilation
-# ------------
+# -----------------
 #
 # We will use the TorchVision pretrained `ResNet18` model and TorchInductor on the
 # exported PyTorch program using :func:`torch._inductor.aot_compile`.
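For context, the "Model Compilation" section retitled above exports a TorchVision ResNet18 and AOT-compiles it into a shared library with :func:`torch._inductor.aot_compile`. A minimal sketch of that flow, assuming a CPU build and an illustrative output name `resnet18_pt2.so` (the output file name and the `aot_inductor.output_path` option are assumptions for illustration, not taken from this diff):

import os
import torch
from torchvision.models import ResNet18_Weights, resnet18

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
example_inputs = (torch.randn(2, 3, 224, 224),)

with torch.inference_mode():
    # Export the eager model to an ExportedProgram, then AOT-compile it
    # with TorchInductor into a shared library on disk.
    exported_program = torch.export.export(model, example_inputs)
    so_path = torch._inductor.aot_compile(
        exported_program.module(),
        example_inputs,
        # Illustrative output location; if omitted, aot_compile writes the
        # .so to a temporary directory and returns that path instead.
        options={"aot_inductor.output_path": os.path.join(os.getcwd(), "resnet18_pt2.so")},
    )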
@@ -101,7 +101,7 @@
 
 ######################################################################
 # Model Inference in Python
-# ------------
+# -------------------------
 #
 # Typically, the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
 # we added a new API called :func:`torch._export.aot_load` to load the shared library in the Python runtime.
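The "Model Inference in Python" section that this hunk retitles then loads the generated library back into Python with :func:`torch._export.aot_load`. A minimal sketch, assuming the CPU-compiled `resnet18_pt2.so` from the previous sketch (the file name and device choice are illustrative assumptions):

import torch

# torch._export.aot_load (added in PyTorch 2.3, per the text above) loads the
# compiled shared library back into the Python runtime as a callable.
so_path = "resnet18_pt2.so"  # path produced by the aot_compile sketch above (illustrative)
compiled_model = torch._export.aot_load(so_path, device="cpu")

example_inputs = (torch.randn(2, 3, 224, 224),)
with torch.inference_mode():
    # Run inference through the AOT-compiled model; output holds the logits.
    output = compiled_model(*example_inputs)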
@@ -127,7 +127,7 @@
 
 ######################################################################
 # When to use AOTInductor for Python Runtime
-# ---------------------------------------
+# ------------------------------------------
 #
 # One of the requirements for using AOTInductor is that the model shouldn't have any graph breaks.
 # Once this requirement is met, the primary use case for using AOTInductor Python Runtime is for
