Commit fbb9207

Merge branch 'main' into fix-2814-remove-backticks
2 parents: 3637d96 + 0f312df

2 files changed: +17, -6 lines

advanced_source/dynamic_quantization_tutorial.py

Lines changed: 12 additions & 4 deletions
@@ -134,10 +134,18 @@ def tokenize(self, path):
 # -----------------------------
 #
 # This is a tutorial on dynamic quantization, a quantization technique
-# that is applied after a model has been trained. Therefore, we'll simply load some
-# pretrained weights into this model architecture; these weights were obtained
-# by training for five epochs using the default settings in the word language model
-# example.
+# that is applied after a model has been trained. Therefore, we'll simply
+# load some pretrained weights into this model architecture; these
+# weights were obtained by training for five epochs using the default
+# settings in the word language model example.
+#
+# Before running this tutorial, download the required pre-trained model:
+#
+# .. code-block:: bash
+#
+#    wget https://s3.amazonaws.com/pytorch-tutorial-assets/word_language_model_quantize.pth
+#
+# Place the downloaded file in the data directory or update the model_data_filepath accordingly.

 ntokens = len(corpus.dictionary)

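For context, a minimal sketch of how the downloaded checkpoint is then used. Assumptions: the stand-in LSTMModel below only sketches the tutorial's architecture (the real class also defines forward() and weight initialization), and the constructor sizes are placeholders that must match the checkpoint; torch.quantization.quantize_dynamic is the standard post-training dynamic-quantization entry point.

import os
import torch
import torch.nn as nn

# Stand-in for the tutorial's LSTM word language model; forward() and
# weight initialization are omitted here for brevity.
class LSTMModel(nn.Module):
    def __init__(self, ntoken, ninp, nhid, nlayers):
        super().__init__()
        self.encoder = nn.Embedding(ntoken, ninp)
        self.rnn = nn.LSTM(ninp, nhid, nlayers)
        self.decoder = nn.Linear(nhid, ntoken)

model_data_filepath = 'data/'  # directory where the .pth file was placed

# Placeholder sizes; they must match the shapes stored in the checkpoint.
model = LSTMModel(ntoken=33278, ninp=512, nhid=256, nlayers=5)
model.load_state_dict(
    torch.load(
        os.path.join(model_data_filepath, 'word_language_model_quantize.pth'),
        map_location='cpu',
    )
)
model.eval()

# Dynamic quantization is applied after training: nn.LSTM and nn.Linear
# modules are swapped for versions that use int8 weights.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)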
beginner_source/examples_autograd/polynomial_custom_function.py

Lines changed: 5 additions & 2 deletions
@@ -33,8 +33,11 @@ def forward(ctx, input):
         """
         In the forward pass we receive a Tensor containing the input and return
         a Tensor containing the output. ctx is a context object that can be used
-        to stash information for backward computation. You can cache arbitrary
-        objects for use in the backward pass using the ctx.save_for_backward method.
+        to stash information for backward computation. You can cache tensors for
+        use in the backward pass using the ``ctx.save_for_backward`` method. Other
+        objects can be stored directly as attributes on the ctx object, such as
+        ``ctx.my_object = my_object``. Check out `Extending torch.autograd <https://docs.pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd>`_
+        for further details.
         """
         ctx.save_for_backward(input)
         return 0.5 * (5 * input ** 3 - 3 * input)
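To see both storage mechanisms from the revised docstring together, here is a minimal sketch of the full custom Function. The backward formula is simply the derivative of the P3 polynomial in the forward pass; the ctx.note attribute is a hypothetical stand-in for the ``ctx.my_object = my_object`` pattern.

import torch

class LegendrePolynomial3(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Tensors needed by backward go through save_for_backward.
        ctx.save_for_backward(input)
        # Non-tensor objects can be stored directly as ctx attributes
        # (hypothetical example of the ctx.my_object pattern).
        ctx.note = 'P3 forward'
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        # d/dx [0.5 * (5x^3 - 3x)] = 1.5 * (5x^2 - 1)
        return grad_output * 1.5 * (5 * input ** 2 - 1)

# Usage: call through .apply so autograd records the op.
x = torch.linspace(-1.0, 1.0, steps=5, requires_grad=True)
y = LegendrePolynomial3.apply(x)
y.sum().backward()
print(x.grad)  # equals 1.5 * (5 * x**2 - 1)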
