# -*- coding: utf-8 -*-

"""
(Beta) ``torch.export`` AOT Inductor Tutorial for Python runtime
===================================================================
**Author:** Ankith Gunapal
"""
#
# In this tutorial, you will work through an end-to-end example of how to use AOTInductor for Python runtime.
# We will look at how to use :func:`torch._export.aot_compile` to generate a shared library.
# Additionally, we will examine how to run the shared library in the Python runtime using :func:`torch._export.aot_load`.
#
# **Contents**
#
# * Model Compilation
# * Model Inference in Python
#
# Model Compilation
# ------------------
#
# We will use TorchVision's pretrained ``ResNet18`` model in this example and run TorchInductor on the
# exported PyTorch program using :func:`torch._export.aot_compile`.
#
# .. note::
#
#       This API also supports :func:`torch.compile` options, such as ``mode``.
#       As an example, if used on a CUDA-enabled device, we can set ``"max_autotune": True``.
#
# We also specify ``dynamic_shapes`` for the batch dimension. In this example, ``min=2`` is not a bug and is
# explained in `The 0/1 Specialization Problem <https://docs.google.com/document/d/16VPOa3d-Liikf48teAOmxLc92rgvJdfosIy-yoT38Io/edit?fbclid=IwAR3HNwmmexcitV0pbZm_x1a4ykdXZ9th_eJWK-3hBtVgKnrkmemz6Pm5jRQ#heading=h.ez923tomjvyk>`__.

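# The code below is a minimal sketch of this compilation step. The output path,
# the ``max=32`` bound on the batch dimension, and the example batch size of 2
# are illustrative assumptions, not requirements of the API.

import os
import torch
from torchvision.models import ResNet18_Weights, resnet18

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.eval()

with torch.inference_mode():
    # Use CUDA when available; "max_autotune" is most useful on GPU.
    if torch.cuda.is_available():
        device = "cuda"
        aot_compile_options = {"max_autotune": True}
    else:
        device = "cpu"
        aot_compile_options = {}
    # Illustrative output path for the generated shared library.
    aot_compile_options["aot_inductor.output_path"] = os.path.join(os.getcwd(), "resnet18_pt2.so")

    model = model.to(device=device)
    example_inputs = (torch.randn(2, 3, 224, 224, device=device),)

    # Mark the batch dimension as dynamic; min=2 avoids the 0/1 specialization problem.
    batch_dim = torch.export.Dim("batch", min=2, max=32)
    so_path = torch._export.aot_compile(
        model,
        example_inputs,
        # "x" is the name of the input argument in ResNet18's forward method.
        dynamic_shapes={"x": {0: batch_dim}},
        options=aot_compile_options,
    )
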
# Model Inference in Python
# --------------------------
#
# Typically, the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
# we added a new API called :func:`torch._export.aot_load` to load the shared library in the Python runtime.
# The API follows a structure similar to the :func:`torch.jit.load` API. We specify the path
# of the shared library and the device where it should be loaded.
#
# .. note::
#
#       In the example below, we specify ``batch_size=1`` for inference and it works correctly even though we specified ``min=2`` in
#       :func:`torch._export.aot_compile`.


import os
|
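# The snippet below is a minimal sketch of loading and running the compiled
# model in the Python runtime. The shared-library path and the input shape are
# assumptions carried over from the compilation sketch above.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model_so_path = os.path.join(os.getcwd(), "resnet18_pt2.so")  # assumed path from the compile step

# Load the AOTInductor-compiled model as a callable.
model = torch._export.aot_load(model_so_path, device)

# batch_size=1 works at runtime even though the library was compiled with min=2.
example_input = torch.randn(1, 3, 224, 224, device=device)

with torch.inference_mode():
    output = model(example_input)
    print(output.shape)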