
Commit cd09129

agunapal and svekars authored
Apply suggestions from code review
Co-authored-by: Svetlana Karslioglu <[email protected]>
1 parent 1dea278

File tree

1 file changed: +10 −10 lines changed


intermediate_source/torch_export_aoti_python.py

Lines changed: 10 additions & 10 deletions
@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 
 """
-torch.export AOT Inductor Tutorial for Python runtime
+(Beta) ``torch.export`` AOT Inductor Tutorial for Python runtime
 ===================================================
 **Author:** Ankith Gunapal
 """
@@ -20,7 +20,7 @@
 #
 # In this tutorial, you will learn an end-to-end example of how to use AOTInductor for python runtime.
 # We will look at how to use :func:`torch._export.aot_compile` to generate a shared library.
-# We also look at how we can run the shared library in python runtime using :func:`torch._export.aot_load`.
+# Additionally, we will examine how to execute the shared library in Python runtime using :func:`torch._export.aot_load`.
 #
 # **Contents**
 #
@@ -32,15 +32,15 @@
 # Model Compilation
 # ------------
 #
-# We will use TorchVision's pretrained `ResNet18` model in this example and use TorchInductor on the
-# exported PyTorch program using :func:`torch._export.aot_compile`
+# We will use TorchVision's pretrained `ResNet18` model and TorchInductor on the
+# exported PyTorch program using :func:`torch._export.aot_compile`.
 #
 # .. note::
 #
 #       This API also supports :func:`torch.compile` options like `mode`
 #       As an example, if used on a CUDA enabled device, we can set `"max_autotune": True`
 #
-# We also specify `dynamic_shapes` for the batch dimension. In this example, min=2 is not a bug and is
+# We also specify ``dynamic_shapes`` for the batch dimension. In this example, ``min=2`` is not a bug and is
 # explained in `The 0/1 Specialization Problem <https://docs.google.com/document/d/16VPOa3d-Liikf48teAOmxLc92rgvJdfosIy-yoT38Io/edit?fbclid=IwAR3HNwmmexcitV0pbZm_x1a4ykdXZ9th_eJWK-3hBtVgKnrkmemz6Pm5jRQ#heading=h.ez923tomjvyk>`__
 
 
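For reference, a minimal sketch of the compilation step this hunk documents, based on the tutorial's ResNet18 setup. The parameter name ``x``, the ``max=32`` bound, and the device selection are illustrative assumptions; ``min=2`` and the ``"max_autotune"`` option come from the text above.

# Hypothetical compilation sketch (not part of the commit diff).
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet18(weights="DEFAULT").to(device).eval()
example_inputs = (torch.randn(2, 3, 224, 224, device=device),)

# Make the batch dimension dynamic; min=2 sidesteps the 0/1
# specialization problem linked above (max=32 is an assumed bound).
batch_dim = torch.export.Dim("batch", min=2, max=32)

so_path = torch._export.aot_compile(
    model,
    example_inputs,
    # "x" is assumed to be the name of ResNet18's forward argument.
    dynamic_shapes={"x": {0: batch_dim}},
    # A torch.compile-style option, as the note in the hunk mentions;
    # most useful on a CUDA-enabled device.
    options={"max_autotune": True},
)
print(so_path)  # filesystem path of the generated shared library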
@@ -87,14 +87,14 @@
 # Model Inference in Python
 # ------------
 #
-# Typically the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
-# we added a new API :func:`torch._export.aot_load` to load the shared library in python runtime.
+# Typically, the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
+# we added a new API called :func:`torch._export.aot_load` to load the shared library in the Python runtime.
 # The API follows a similar structure to the :func:`torch.jit.load` API. We specify the path
-# of the shared library and the device where this should be loaded.
+# of the shared library and the device where it should be loaded.
 # .. note::
 #
-#      We specify batch_size=1 for inference and it works even though we specified min=2 in
-#      :func:`torch._export.aot_compile`
+#      In the example above, we specified ``batch_size=1`` for inference and it still functions correctly even though we specified ``min=2`` in
+#      :func:`torch._export.aot_compile`.
 
 
 import os
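Likewise, a minimal sketch of the loading and inference step this hunk documents, assuming ``so_path`` from the compilation sketch above. The device string and input shape are illustrative assumptions; ``batch_size=1`` despite ``min=2`` comes from the note.

# Hypothetical inference sketch (not part of the commit diff).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
loaded = torch._export.aot_load(so_path, device)  # so_path from the sketch above

x = torch.randn(1, 3, 224, 224, device=device)  # batch_size=1, despite min=2
with torch.inference_mode():
    output = loaded(x)
print(output.shape)  # torch.Size([1, 1000]) for ResNet18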
