1 file changed: +4 −4 lines changed

@@ -2,7 +2,7 @@
 
 """
 (Beta) ``torch.export`` AOTInductor Tutorial for Python runtime
-===================================================
+===============================================================
 **Author:** Ankith Gunapal, Bin Bao, Angela Yi
 """
 
@@ -46,7 +46,7 @@
 
 ######################################################################
 # Model Compilation
-# ------------
+# -----------------
 #
 # We will use the TorchVision pretrained `ResNet18` model and TorchInductor on the
 # exported PyTorch program using :func:`torch._inductor.aot_compile`.
@@ -101,7 +101,7 @@
 
 ######################################################################
 # Model Inference in Python
-# ------------
+# -------------------------
 #
 # Typically, the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
 # we added a new API called :func:`torch._export.aot_load` to load the shared library in the Python runtime.
@@ -127,7 +127,7 @@
 
 ######################################################################
 # When to use AOTInductor for Python Runtime
-# ---------------------------------------
+# ------------------------------------------
 #
 # One of the requirements for using AOTInductor is that the model shouldn't have any graph breaks.
 # Once this requirement is met, the primary use case for using AOTInductor Python Runtime is for
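All four hunks apply the same reStructuredText rule: a section underline must be at least as long as its heading text, or Sphinx emits a "title underline too short" warning. A minimal sketch of that rule (the `underline` helper is hypothetical, not part of the tutorial):

```python
def underline(title: str, char: str = "=") -> str:
    """Return a reST section underline exactly as long as the title."""
    return char * len(title)

# The tutorial title from the diff above, with a matching underline.
title = "(Beta) ``torch.export`` AOTInductor Tutorial for Python runtime"
print(title)
print(underline(title))

# Subsection headings in this tutorial use '-' as the underline character.
print(underline("Model Compilation", "-"))
```

Generating the underline from the title, rather than typing it by hand, is what keeps it from drifting out of sync when the heading is edited.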