
Commit 93003e4

Merge branch 'main' into dependabot/pip/dot-ci/docker/jinja2-3.1.4
2 parents b8c99ce + 0e530ea commit 93003e4

File tree

5 files changed: +20 −10 lines changed

beginner_source/onnx/intro_onnx.py

Lines changed: 3 additions & 2 deletions

@@ -39,13 +39,14 @@
     - `ONNX <https://onnx.ai>`_ standard library
     - `ONNX Script <https://onnxscript.ai>`_ library that enables developers to author ONNX operators,
-      functions and models using a subset of Python in an expressive, and yet simple fashion.
+      functions and models using a subset of Python in an expressive, and yet simple fashion
+    - `ONNX Runtime <https://onnxruntime.ai>`_ accelerated machine learning library.

 They can be installed through `pip <https://pypi.org/project/pip/>`_:

 .. code-block:: bash

-    pip install --upgrade onnx onnxscript
+    pip install --upgrade onnx onnxscript onnxruntime

 To validate the installation, run the following commands:
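The hunk above adds ``onnxruntime`` to the install line. As an illustrative aside (not part of the commit), a quick way to confirm all three packages are present is to probe importability with the standard library:

```python
import importlib.util

# Probe the three packages the updated tutorial installs.
# find_spec only checks availability; it does not import or run them.
for pkg in ("onnx", "onnxscript", "onnxruntime"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'missing'}")
```

Because nothing is actually imported, this check is cheap and safe to run in any environment, installed or not.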

conf.py

Lines changed: 6 additions & 0 deletions

@@ -67,6 +67,12 @@
 #
 # needs_sphinx = '1.0'

+html_meta = {
+    'description': 'Master PyTorch with our step-by-step tutorials for all skill levels. Start your journey to becoming a PyTorch expert today!',
+    'keywords': 'PyTorch, tutorials, Getting Started, deep learning, AI',
+    'author': 'PyTorch Contributors'
+}
+
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
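``html_meta`` is not a core Sphinx confval, so presumably the theme's page templates iterate over it to emit ``<meta>`` tags. A rough sketch of that rendering (hypothetical template logic, shown only to illustrate what the new dict is for):

```python
# The dict added to conf.py in this commit.
html_meta = {
    'description': 'Master PyTorch with our step-by-step tutorials for all skill levels. Start your journey to becoming a PyTorch expert today!',
    'keywords': 'PyTorch, tutorials, Getting Started, deep learning, AI',
    'author': 'PyTorch Contributors',
}

# Roughly what a theme template would emit into each page's <head>.
for name, content in html_meta.items():
    print(f'<meta name="{name}" content="{content}">')
```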

index.rst

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ Welcome to PyTorch Tutorials

 **What's new in PyTorch tutorials?**

+* `torch.export AOTInductor Tutorial for Python runtime (Beta) <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__
 * `A guide on good usage of non_blocking and pin_memory() in PyTorch <https://pytorch.org/tutorials/intermediate/pinmem_nonblock.html>`__
 * `Introduction to Distributed Pipeline Parallelism <https://pytorch.org/tutorials/intermediate/pipelining_tutorial.html>`__
 * `Introduction to Libuv TCPStore Backend <https://pytorch.org/tutorials/intermediate/TCPStore_libuv_backend.html>`__

prototype_source/gpu_quantization_torchao_tutorial.py

Lines changed: 4 additions & 6 deletions

@@ -35,14 +35,12 @@
 #
 # Segment Anything Model checkpoint setup:
 #
-# 1. Go to the `segment-anything repo <checkpoint https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can just use ``wget``: `wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>
+# 1. Go to the `segment-anything repo checkpoint <https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can use ``wget`` (for example, ``wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>``).
 # 2. Pass in that directory by editing the code below to say:
 #
-# .. code-block::
-#
-#    {sam_checkpoint_base_path}=<path>
+# .. code-block:: bash
 #
-# This was run on an A100-PG509-200 power limited to 330.00 W
+#    {sam_checkpoint_base_path}=<path>
 #

 import torch
@@ -297,7 +295,7 @@ def get_sam_model(only_one_block=False, batchsize=1):
 # -----------------
 # In this tutorial, we have learned about the quantization and optimization techniques
 # on the example of the segment anything model.
-
+#
 # In the end, we achieved a full-model apples to apples quantization speedup
 # of about 7.7% on batch size 16 (677.28ms to 729.65ms). We can push this a
 # bit further by increasing the batch size and optimizing other parts of
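Step 2 in the checkpoint-setup hunk amounts to pointing the tutorial at the downloaded file. A minimal sketch (the directory here is hypothetical; only the file name comes from the ``wget`` command above):

```python
import os

# Hypothetical download location; use whatever --directory-prefix
# was passed to wget when fetching the vit_h checkpoint.
sam_checkpoint_base_path = os.path.expanduser("~/checkpoints")
checkpoint = os.path.join(sam_checkpoint_base_path, "sam_vit_h_4b8939.pth")
print(checkpoint)
```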

recipes_source/torch_export_aoti_python.py

Lines changed: 6 additions & 2 deletions

@@ -1,7 +1,11 @@
 # -*- coding: utf-8 -*-

 """
-(Beta) ``torch.export`` AOTInductor Tutorial for Python runtime
+.. meta::
+    :description: An end-to-end example of how to use AOTInductor for Python runtime.
+    :keywords: torch.export, AOTInductor, torch._inductor.aot_compile, torch._export.aot_load
+
+``torch.export`` AOTInductor Tutorial for Python runtime (Beta)
 ===============================================================
 **Author:** Ankith Gunapal, Bin Bao, Angela Yi
 """
@@ -18,7 +22,7 @@
 # a shared library that can be run in a non-Python environment.
 #
 #
-# In this tutorial, you will learn an end-to-end example of how to use AOTInductor for python runtime.
+# In this tutorial, you will learn an end-to-end example of how to use AOTInductor for Python runtime.
 # We will look at how to use :func:`torch._inductor.aot_compile` along with :func:`torch.export.export` to generate a
 # shared library. Additionally, we will examine how to execute the shared library in Python runtime using :func:`torch._export.aot_load`.
 # You will learn about the speed up seen in the first inference time using AOTInductor, especially when using
