1 parent 50246f5 commit 624def4
intermediate_source/transformer_building_blocks.py
@@ -157,7 +157,7 @@
 # skipped, performance and memory usage improve.
 #
 # We'll demonstrate the above by building upon the ``MultiheadAttention`` layer in the
-# `Nested Tensor tutorial <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_
+# `Nested Tensor tutorial <https://pytorch.org/tutorials/unstable/nestedtensor.html>`_
 # and comparing it to the ``nn.MultiheadAttention`` layer.

 import torch
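For context, the tutorial file touched by this commit builds a custom ``MultiheadAttention`` layer and compares it against PyTorch's built-in ``nn.MultiheadAttention``. A minimal self-attention sketch with the built-in layer, with illustrative (not tutorial-specific) shapes and sizes:

```python
import torch
import torch.nn as nn

# Built-in multi-head attention; batch_first=True expects (batch, seq, embed_dim) inputs.
mha = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

x = torch.randn(2, 10, 64)  # batch of 2 sequences, length 10, embedding dim 64

# Self-attention: query, key, and value are all the same tensor.
out, attn_weights = mha(x, x, x)
print(out.shape)           # torch.Size([2, 10, 64])
print(attn_weights.shape)  # torch.Size([2, 10, 10])
```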