Commit b6fcaf7

Merge branch 'main' into execu-torch-mobile-warnings
2 parents 8f0405b + b3c49a3

4 files changed: +27 -10 lines changed

README.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -33,7 +33,7 @@ If you are starting off with a Jupyter notebook, you can use [this script](https
 
 ## Building locally
 
-The tutorial build is very large and requires a GPU. If your machine does not have a GPU device, you can preview your HTML build without actually downloading the data and running the tutorial code:
+The tutorial build is very large and requires a GPU. If your machine does not have a GPU device, you can preview your HTML build without actually downloading the data and running the tutorial code:
 
 1. Install required dependencies by running: `pip install -r requirements.txt`.
@@ -42,8 +42,6 @@ The tutorial build is very large and requires a GPU. If your machine does not ha
 - If you have a GPU-powered laptop, you can build using `make docs`. This will download the data, execute the tutorials and build the documentation to `docs/` directory. This might take about 60-120 min for systems with GPUs. If you do not have a GPU installed on your system, then see next step.
 - You can skip the computationally intensive graph generation by running `make html-noplot` to build basic html documentation to `_build/html`. This way, you can quickly preview your tutorial.
 
-> If you get **ModuleNotFoundError: No module named 'pytorch_sphinx_theme' make: *** [html-noplot] Error 2** from /tutorials/src/pytorch-sphinx-theme or /venv/src/pytorch-sphinx-theme (while using virtualenv), run `python setup.py install`.
-
 ## Building a single tutorial
 
 You can build a single tutorial by using the `GALLERY_PATTERN` environment variable. For example to run only `neural_style_transfer_tutorial.py`, run:
@@ -61,8 +59,8 @@ The `GALLERY_PATTERN` variable respects regular expressions.
 
 
 ## About contributing to PyTorch Documentation and Tutorials
-* You can find information about contributing to PyTorch documentation in the
-PyTorch Repo [README.md](https://github.com/pytorch/pytorch/blob/master/README.md) file.
+* You can find information about contributing to PyTorch documentation in the
+PyTorch Repo [README.md](https://github.com/pytorch/pytorch/blob/master/README.md) file.
 * Additional information can be found in [PyTorch CONTRIBUTING.md](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md).
```
_static/css/custom.css

Lines changed: 12 additions & 0 deletions
```diff
@@ -100,3 +100,15 @@
   padding-left: 0px !important;
   padding-bottom: 0px !important;
 }
+
+.gsc-search-button .gsc-search-button-v2:focus {
+  border: transparent !important;
+  outline: none;
+  box-shadow: none;
+}
+.gsc-search-button-v2:active {
+  border: none !important;
+}
+.gsc-search-button-v2 {
+  border: none !important;
+}
```

advanced_source/cpp_custom_ops.rst

Lines changed: 2 additions & 0 deletions
```diff
@@ -174,6 +174,8 @@ To add ``torch.compile`` support for an operator, we must add a FakeTensor kerne
 known as a "meta kernel" or "abstract impl"). FakeTensors are Tensors that have
 metadata (such as shape, dtype, device) but no data: the FakeTensor kernel for an
 operator specifies how to compute the metadata of output tensors given the metadata of input tensors.
+The FakeTensor kernel should return dummy Tensors of your choice with
+the correct Tensor metadata (shape/strides/``dtype``/device).
 
 We recommend that this be done from Python via the `torch.library.register_fake` API,
 though it is possible to do this from C++ as well (see
```
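For reference, a minimal sketch of what registering such a kernel from Python can look like, assuming a hypothetical `mylib::custom_linear` operator already defined elsewhere (for example via `TORCH_LIBRARY` in C++); the operator name and shape math are illustrative, not part of this commit:

```python
import torch

# A minimal sketch, assuming a hypothetical operator
#   mylib::custom_linear(Tensor x, Tensor weight) -> Tensor
# has already been defined (e.g. in C++ via TORCH_LIBRARY).
@torch.library.register_fake("mylib::custom_linear")
def _(x, weight):
    # Runs on FakeTensors: compute only output metadata, no data.
    # x: (N, in_features), weight: (out_features, in_features)
    # -> output: (N, out_features), same dtype/device as x.
    return x.new_empty((x.shape[0], weight.shape[0]))
```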

advanced_source/python_custom_ops.py

Lines changed: 10 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -66,7 +66,7 @@ def display(img):
6666
######################################################################
6767
# ``crop`` is not handled effectively out-of-the-box by
6868
# ``torch.compile``: ``torch.compile`` induces a
69-
# `"graph break" <https://pytorch.org/docs/stable/torch.compiler_faq.html#graph-breaks>`_
69+
# `"graph break" <https://pytorch.org/docs/stable/torch.compiler_faq.html#graph-breaks>`_
7070
# on functions it is unable to handle and graph breaks are bad for performance.
7171
# The following code demonstrates this by raising an error
7272
# (``torch.compile`` with ``fullgraph=True`` raises an error if a
@@ -85,9 +85,9 @@ def f(img):
8585
#
8686
# 1. wrap the function into a PyTorch custom operator.
8787
# 2. add a "``FakeTensor`` kernel" (aka "meta kernel") to the operator.
88-
# Given the metadata (e.g. shapes)
89-
# of the input Tensors, this function says how to compute the metadata
90-
# of the output Tensor(s).
88+
# Given some ``FakeTensors`` inputs (dummy Tensors that don't have storage),
89+
# this function should return dummy Tensors of your choice with the correct
90+
# Tensor metadata (shape/strides/``dtype``/device).
9191

9292

9393
from typing import Sequence
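As a rough sketch of those two steps, here is a hypothetical border-cropping op (the name `mylib::crop_border` and its shape math are illustrative, not the tutorial's actual `crop`):

```python
import torch

# Step 1: wrap the Python function into a custom operator.
# Hypothetical op that trims `pad` pixels from each side of an
# (..., H, W) image tensor. Outputs must not alias inputs, hence clone().
@torch.library.custom_op("mylib::crop_border", mutates_args=())
def crop_border(pic: torch.Tensor, pad: int) -> torch.Tensor:
    return pic[..., pad:pic.shape[-2] - pad, pad:pic.shape[-1] - pad].clone()

# Step 2: the FakeTensor kernel. Given FakeTensor inputs, return a
# dummy Tensor with the correct shape/dtype/device and no storage.
@crop_border.register_fake
def _(pic, pad):
    h, w = pic.shape[-2] - 2 * pad, pic.shape[-1] - 2 * pad
    return pic.new_empty((*pic.shape[:-2], h, w))
```

With both pieces registered, `torch.compile(..., fullgraph=True)` can trace calls to `torch.ops.mylib.crop_border` without a graph break.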
```diff
@@ -130,6 +130,11 @@ def f(img):
 # ``autograd.Function`` with PyTorch operator registration APIs can lead to (and
 # has led to) silent incorrectness when composed with ``torch.compile``.
 #
+# If you don't need training support, there is no need to use
+# ``torch.library.register_autograd``.
+# If you end up training with a ``custom_op`` that doesn't have an autograd
+# registration, we'll raise an error message.
+#
 # The gradient formula for ``crop`` is essentially ``PIL.paste`` (we'll leave the
 # derivation as an exercise to the reader). Let's first wrap ``paste`` into a
 # custom operator:
```
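A sketch of what such an autograd registration can look like, continuing the hypothetical `mylib::crop_border` op from above (the backward formula shown, zero-padding the gradient back to the input size, is illustrative only):

```python
import torch
import torch.nn.functional as F

def setup_context(ctx, inputs, output):
    # Stash whatever backward needs; here, just the pad width.
    _, pad = inputs
    ctx.pad = pad

def backward(ctx, grad_output):
    p = ctx.pad
    # Gradient of a border crop: zero-pad grad back to the input size.
    # Return one gradient per input; None for the non-Tensor `pad`.
    return F.pad(grad_output, (p, p, p, p)), None

torch.library.register_autograd(
    "mylib::crop_border", backward, setup_context=setup_context
)
```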
```diff
@@ -203,7 +208,7 @@ def setup_context(ctx, inputs, output):
 ######################################################################
 # Mutable Python Custom operators
 # -------------------------------
-# You can also wrap a Python function that mutates its inputs into a custom
+# You can also wrap a Python function that mutates its inputs into a custom
 # operator.
 # Functions that mutate inputs are common because that is how many low-level
 # kernels are written; for example, a kernel that computes ``sin`` may take in
```
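A minimal sketch of such a mutable op, assuming a hypothetical out-variant of `sin`; the key detail is declaring the mutated argument in `mutates_args`:

```python
import torch

# Hypothetical out-variant: writes sin(x) into a caller-provided buffer.
# mutates_args tells PyTorch which inputs the kernel writes to.
@torch.library.custom_op("mylib::sin_out", mutates_args={"out"})
def sin_out(x: torch.Tensor, out: torch.Tensor) -> None:
    out.copy_(torch.sin(x))

# Because the op returns nothing, no FakeTensor kernel is required.
x = torch.randn(3)
out = torch.empty(3)
torch.ops.mylib.sin_out(x, out)  # `out` now holds sin(x)
```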
