
Commit c0d0344

Merge branch 'main' of github.com:pytorch/tutorials into tutorial-audit-T228122175

2 parents: b059f0c + a47d520
11 files changed: +42, -258 lines

.ci/docker/requirements.txt

Lines changed: 3 additions & 2 deletions

@@ -1,9 +1,10 @@
 # --extra-index-url https://download.pytorch.org/whl/cu117/index.html # Use this to run/publish tutorials against the latest binaries during the RC stage. Comment out after the release. Each release verify the correct cuda version.
 # Refer to ./jenkins/build.sh for tutorial build instructions

-sphinx==5.0.0
+sphinx==5.3.0
 sphinx-gallery==0.11.1
-sphinx_design
+sphinx-reredirects==0.1.4
+sphinx-design==0.4.0
 docutils==0.16
 sphinx-copybutton
 sphinx_sitemap==2.6.0

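The newly pinned sphinx-reredirects package reads a ``redirects`` mapping from conf.py; this commit supplies one by importing it from a new ``redirects`` module (see the conf.py hunk below). A minimal sketch of what such a module could contain — the source/target pairs here are hypothetical, not the ones added by this commit:

    # redirects.py -- imported by conf.py via `from redirects import redirects`.
    # sphinx-reredirects maps each source document to the URL its generated
    # stub page should redirect to. The entries below are illustrative only.
    redirects = {
        # old document path (no extension) -> redirect target
        "recipes/bundled_inputs": "https://pytorch.org/tutorials/",
        "intermediate/torchserve_with_ipex": "https://pytorch.org/tutorials/",
    }
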
.jenkins/build.sh

Lines changed: 6 additions & 0 deletions

@@ -151,6 +151,12 @@ elif [[ "${JOB_TYPE}" == "manager" ]]; then
   # Step 7: push new HTML files and static files to gh-pages
   if [[ "$COMMIT_SOURCE" == "refs/heads/master" || "$COMMIT_SOURCE" == "refs/heads/main" ]]; then
     git clone https://github.com/pytorch/tutorials.git -b gh-pages gh-pages
+    # Clean up directories that contain tutorials
+
+    for dir in beginner intermediate prototype recipes advanced distributed vision text audio; do
+      rm -rf "gh-pages/$dir"
+    done
+
     cp -r docs/* gh-pages/
     pushd gh-pages
     # DANGER! DO NOT REMOVE THE `set +x` SETTING HERE!

beginner_source/hyperparameter_tuning_tutorial.py

Lines changed: 1 addition & 1 deletion

@@ -184,7 +184,7 @@ def forward(self, x):
 #         inputs, labels = inputs.to(device), labels.to(device)
 #
 # The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray
-# also supports `fractional GPUs <https://docs.ray.io/en/master/using-ray-with-gpus.html#fractional-gpus>`_
+# also supports `fractional GPUs <https://docs.ray.io/en/latest/ray-core/scheduling/accelerators.html#fractional-accelerators>`_
 # so we can share GPUs among trials, as long as the model still fits on the GPU memory. We'll come back
 # to that later.
 #

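The updated link documents Ray's fractional GPU support, which the surrounding tutorial relies on to pack several trials onto one device. A rough illustration of the idea, using the classic ``ray.tune`` API with a stand-in trainable (not the tutorial's own training function), and assuming a GPU is actually available:

    from ray import tune

    def trainable(config):
        # Stand-in for a real training loop; reports one dummy metric.
        tune.report(loss=config["lr"])

    # "gpu": 0.5 lets Ray schedule two concurrent trials per physical GPU,
    # provided each trial's model fits in its share of GPU memory.
    tune.run(
        trainable,
        config={"lr": tune.grid_search([1e-3, 1e-2])},
        resources_per_trial={"cpu": 1, "gpu": 0.5},
    )
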
beginner_source/saving_loading_models.py

Lines changed: 18 additions & 31 deletions

@@ -227,43 +227,30 @@
 # normalization layers to evaluation mode before running inference.
 # Failing to do this will yield inconsistent inference results.
 #
-# Export/Load Model in TorchScript Format
-# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+# Saving an Exported Program
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
-# One common way to do inference with a trained model is to use
-# `TorchScript <https://pytorch.org/docs/stable/jit.html>`__, an intermediate
-# representation of a PyTorch model that can be run in Python as well as in a
-# high performance environment like C++. TorchScript is actually the recommended model format
-# for scaled inference and deployment.
+# If you are using ``torch.export``, you can save and load your ``ExportedProgram`` using the
+# ``torch.export.save()`` and ``torch.export.load()`` APIs with the ``.pt2`` file extension:
 #
-# .. note::
-#    Using the TorchScript format, you will be able to load the exported model and
-#    run inference without defining the model class.
-#
-# **Export:**
-#
-# .. code:: python
-#
-#    model_scripted = torch.jit.script(model) # Export to TorchScript
-#    model_scripted.save('model_scripted.pt') # Save
-#
-# **Load:**
+# .. code-block:: python
+#
+#    class SimpleModel(torch.nn.Module):
+#        def forward(self, x):
+#            return x + 10
 #
-# .. code:: python
+#    # Create a sample input
+#    sample_input = torch.randn(5)
+#
+#    # Export the model; torch.export expects the example inputs as a tuple
+#    exported_program = torch.export.export(SimpleModel(), (sample_input,))
 #
-#    model = torch.jit.load('model_scripted.pt')
-#    model.eval()
+#    # Save the exported program
+#    torch.export.save(exported_program, 'exported_program.pt2')
 #
-# Remember that you must call ``model.eval()`` to set dropout and batch
-# normalization layers to evaluation mode before running inference.
-# Failing to do this will yield inconsistent inference results.
+#    # Load the exported program
+#    saved_exported_program = torch.export.load('exported_program.pt2')
 #
-# For more information on TorchScript, feel free to visit the dedicated
-# `tutorials <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html>`__.
-# You will get familiar with the tracing conversion and learn how to
-# run a TorchScript module in a `C++ environment <https://pytorch.org/tutorials/advanced/cpp_export.html>`__.
-
-

 ######################################################################
 # Saving & Loading a General Checkpoint for Inference and/or Resuming Training

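The new snippet saves and loads but stops short of running the result. A short sketch continuing the same hypothetical ``SimpleModel`` example: a loaded ``ExportedProgram`` is run through the ``nn.Module`` returned by its ``.module()`` method.

    import torch

    # Load the program saved above and run inference through its module wrapper.
    loaded = torch.export.load('exported_program.pt2')
    output = loaded.module()(torch.randn(5))
    print(output)  # the input shifted by 10, matching SimpleModel.forward
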
conf.py

Lines changed: 3 additions & 2 deletions

@@ -45,7 +45,7 @@
 import plotly.io as pio
 from pathlib import Path
 pio.renderers.default = 'sphinx_gallery'
-
+from redirects import redirects

 import sphinx_gallery.gen_rst
 import multiprocessing
@@ -121,7 +121,8 @@ def wrapper(*args, **kwargs):
     'sphinx_copybutton',
     'sphinx_gallery.gen_gallery',
     'sphinx_design',
-    'sphinx_sitemap'
+    'sphinx_sitemap',
+    'sphinx_reredirects'
 ]

 intersphinx_mapping = {

index.rst

Lines changed: 0 additions & 16 deletions

@@ -612,20 +612,6 @@ Welcome to PyTorch Tutorials
    :link: advanced/static_quantization_tutorial.html
    :tags: Quantization

-.. customcarditem::
-   :header: Grokking PyTorch Intel CPU Performance from First Principles
-   :card_description: A case study on the TorchServe inference framework optimized with Intel® Extension for PyTorch.
-   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
-   :link: intermediate/torchserve_with_ipex
-   :tags: Model-Optimization,Production
-
-.. customcarditem::
-   :header: Grokking PyTorch Intel CPU Performance from First Principles (Part 2)
-   :card_description: A case study on the TorchServe inference framework optimized with Intel® Extension for PyTorch (Part 2).
-   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
-   :link: intermediate/torchserve_with_ipex_2
-   :tags: Model-Optimization,Production
-
 .. customcarditem::
    :header: Multi-Objective Neural Architecture Search with Ax
    :card_description: Learn how to use Ax to search over architectures find optimal tradeoffs between accuracy and latency.
@@ -1071,8 +1057,6 @@ Additional Resources
    intermediate/dynamic_quantization_bert_tutorial
    intermediate/quantized_transfer_learning_tutorial
    advanced/static_quantization_tutorial
-   intermediate/torchserve_with_ipex
-   intermediate/torchserve_with_ipex_2
    intermediate/nvfuser_intro_tutorial
    intermediate/ax_multiobjective_nas_tutorial
    intermediate/torch_compile_tutorial

intermediate_source/rpc_async_execution.rst

Lines changed: 1 addition & 2 deletions

@@ -15,8 +15,7 @@ Prerequisites:
 This tutorial demonstrates how to build batch-processing RPC applications with
 the `@rpc.functions.async_execution <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.functions.async_execution>`__
 decorator, which helps to speed up training by reducing the number of blocked
-RPC threads and consolidating CUDA operations on the callee. This shares the
-same idea as `Batch Inference with TorchServe <https://pytorch.org/serve/batch_inference_with_ts.html>`__.
+RPC threads and consolidating CUDA operations on the callee.

 .. note:: This tutorial requires PyTorch v1.6.0 or above.

prototype_source/pt2e_quant_xpu_inductor.rst

Lines changed: 1 addition & 0 deletions

@@ -85,6 +85,7 @@ We will start by performing the necessary imports, capturing the FX Graph from t
     exported_model = export_for_training(
         model,
         example_inputs,
+        strict=True
     ).module()


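Passing ``strict=True`` explicitly pins the capture mode rather than relying on the version-dependent default. A minimal standalone sketch of the call shape, with a hypothetical model and inputs standing in for the tutorial's:

    import torch
    from torch.export import export_for_training

    # Hypothetical stand-ins for the tutorial's model and example inputs.
    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
    example_inputs = (torch.randn(2, 8),)

    # strict=True traces with TorchDynamo, which validates that the
    # captured graph matches the eager program's behavior.
    exported_model = export_for_training(model, example_inputs, strict=True).module()
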
recipes_source/bundled_inputs.rst

Lines changed: 0 additions & 204 deletions
This file was deleted.

recipes_source/torch_compile_caching_tutorial.rst

Lines changed: 3 additions & 0 deletions

@@ -88,6 +88,9 @@ The aforementioned ``Mega-Cache`` is composed of individual components that can
 * ``InductorCache``: A bundle of ``FXGraphCache`` and ``Triton`` cache.
 * ``AOTAutogradCache``: A cache of joint graph artifacts.
 * ``PGO-cache``: A cache of dynamic shape decisions to reduce number of recompilations.
+* `AutotuningCache <https://github.com/pytorch/pytorch/blob/795a6a0affd349adfb4e3df298b604b74f27b44e/torch/_inductor/runtime/autotune_cache.py#L116>`__:
+  * ``Inductor`` generates ``Triton`` kernels and benchmarks them to select the fastest kernels.
+  * ``torch.compile``'s built-in ``AutotuningCache`` caches these results.

 All these cache artifacts are written to ``TORCHINDUCTOR_CACHE_DIR`` which by default will look like ``/tmp/torchinductor_myusername``.

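The autotuning results, like the other artifacts listed above, land under ``TORCHINDUCTOR_CACHE_DIR``. A small sketch that exercises autotuning — ``max-autotune`` is a standard ``torch.compile`` mode; the cache directory path here is an arbitrary choice:

    import os
    import torch

    # Redirect all Inductor cache artifacts (including autotuning results)
    # to an explicit location; must be set before the first compile.
    os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/tmp/torchinductor_demo"

    def f(x):
        return torch.relu(x @ x)

    # mode="max-autotune" makes Inductor benchmark candidate kernels; the
    # winning choices are what the autotuning cache persists across runs.
    compiled = torch.compile(f, mode="max-autotune")
    print(compiled(torch.randn(64, 64)))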