
Commit c281495

Merge branch 'main' into jafraust/ddp
2 parents: 29bf2b8 + d54656f

11 files changed (+329, -20 lines)

.ci/docker/requirements.txt

Lines changed: 3 additions & 3 deletions

@@ -23,7 +23,7 @@ tqdm==4.66.1
 numpy==1.24.4
 matplotlib
 librosa
-torch==2.7
+torch==2.8
 torchvision
 torchdata
 networkx
@@ -37,8 +37,8 @@ tensorboard
 jinja2==3.1.3
 pytorch-lightning
 torchx
-torchrl==0.7.2
-tensordict==0.7.2
+torchrl==0.9.2
+tensordict==0.9.1
 # For ax_multiobjective_nas_tutorial.py
 ax-platform>=0.4.0,<0.5.0
 nbformat>=5.9.2
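Since the torch, torchrl, and tensordict pins move together here, a quick sanity check after installing the updated requirements can catch mismatched pins early. A minimal sketch, assuming all three packages are importable; the prefix comparison is a deliberate simplification so that local builds such as 2.8.0+cu126 still match the pin:

# Sketch: confirm installed packages match the new pins in requirements.txt.
import torch
import torchrl
import tensordict

expected = {"torch": "2.8", "torchrl": "0.9.2", "tensordict": "0.9.1"}
installed = {
    "torch": torch.__version__,
    "torchrl": torchrl.__version__,
    "tensordict": tensordict.__version__,
}

for name, want in expected.items():
    got = installed[name]
    # Pins like "torch==2.8" should match any 2.8.x build, so compare by prefix.
    status = "OK" if got.startswith(want) else "MISMATCH"
    print(f"{name}: expected {want}, installed {got} [{status}]")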

.jenkins/validate_tutorials_built.py

Lines changed: 1 addition & 0 deletions

@@ -20,6 +20,7 @@
     "beginner_source/examples_nn/polynomial_optim",
     "beginner_source/examples_autograd/polynomial_autograd",
     "beginner_source/examples_autograd/polynomial_custom_function",
+    "intermediate_source/dqn_with_rnn_tutorial", #not working on 2.8 release reenable after 3514
     "intermediate_source/mnist_train_nas", # used by ax_multiobjective_nas_tutorial.py
     "intermediate_source/torch_compile_conv_bn_fuser",
     "intermediate_source/_torch_export_nightly_tutorial", # does not work on release
(binary file added, 40.6 KB; preview not rendered)
docathon-leaderboard.md

Lines changed: 1 addition & 1 deletion

@@ -60,7 +60,7 @@ were not merged or issues that have been closed without a merged PR.
 | kiszk | 20 | https://github.com/pytorch/pytorch/pull/128337, https://github.com/pytorch/pytorch/pull/128123, https://github.com/pytorch/pytorch/pull/128022, https://github.com/pytorch/pytorch/pull/128312 |
 | loganthomas | 19 | https://github.com/pytorch/pytorch/pull/128676, https://github.com/pytorch/pytorch/pull/128192, https://github.com/pytorch/pytorch/pull/128189, https://github.com/pytorch/tutorials/pull/2922, https://github.com/pytorch/tutorials/pull/2910, https://github.com/pytorch/xla/pull/7195 |
 | ignaciobartol | 17 | https://github.com/pytorch/pytorch/pull/128741, https://github.com/pytorch/pytorch/pull/128135, https://github.com/pytorch/pytorch/pull/127938, https://github.com/pytorch/tutorials/pull/2936 |
-| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/pytorch-labs/torchfix/pull/59 |
+| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/meta-pytorch/torchfix/pull/59 |
 | alperenunlu | 17 | https://github.com/pytorch/tutorials/pull/2934, https://github.com/pytorch/tutorials/pull/2909, https://github.com/pytorch/pytorch/pull/104043 |
 | anandptl84 | 10 | https://github.com/pytorch/pytorch/pull/128196, https://github.com/pytorch/pytorch/pull/128098 |
 | GdoongMathew | 10 | https://github.com/pytorch/pytorch/pull/128136, https://github.com/pytorch/pytorch/pull/128051 |
index.rst

Lines changed: 15 additions & 8 deletions

@@ -3,10 +3,10 @@ Welcome to PyTorch Tutorials
 
 **What's new in PyTorch tutorials?**
 
-* `Utilizing Torch Function modes with torch.compile <https://pytorch.org/tutorials/recipes/torch_compile_torch_function_modes.html>`__
-* `Context Parallel Tutorial <https://pytorch.org/tutorials/prototype/context_parallel.html>`__
-* `(beta) Explicit horizontal fusion with foreach_map and torch.compile <https://pytorch.org/tutorials/recipes/foreach_map.html>`__
-* Updated `Inductor Windows CPU Tutorial <https://pytorch.org/tutorials/prototype/inductor_windows.html>`__
+* `Integrating Custom Operators with SYCL for Intel GPU <https://pytorch.org/tutorials/advanced/cpp_custom_ops_sycl.html>`__
+* `Supporting Custom C++ Classes in torch.compile/torch.export <https://docs.pytorch.org/tutorials/advanced/custom_class_pt2.html>`__
+* `Accelerating torch.save and torch.load with GPUDirect Storage <https://docs.pytorch.org/tutorials/unstable/gpu_direct_storage.html>`__
+* `Getting Started with Fully Sharded Data Parallel (FSDP2) <https://docs.pytorch.org/tutorials/intermediate/FSDP_tutorial.html>`__
 
 .. raw:: html
 
@@ -24,7 +24,7 @@ Welcome to PyTorch Tutorials
 .. customcalloutitem::
    :description: Bite-size, ready-to-deploy PyTorch code examples.
    :header: PyTorch Recipes
-   :button_link: recipes/recipes_index.html
+   :button_link: recipes_index.html
    :button_text: Explore Recipes
 
 .. End of callout item section
@@ -99,6 +99,13 @@ Welcome to PyTorch Tutorials
    :link: intermediate/pinmem_nonblock.html
    :tags: Getting-Started
 
+.. customcarditem::
+   :header: Visualizing Gradients in PyTorch
+   :card_description: Visualize the gradient flow of a network.
+   :image: _static/img/thumbnails/cropped/visualizing_gradients_tutorial.png
+   :link: intermediate/visualizing_gradients_tutorial.html
+   :tags: Getting-Started
+
 .. Image/Video
 
 .. customcarditem::
@@ -700,14 +707,14 @@ Welcome to PyTorch Tutorials
    :header: Building an ExecuTorch iOS Demo App
    :card_description: Explore how to set up the ExecuTorch iOS Demo App, which uses the MobileNet v3 model to process live camera images leveraging three different backends: XNNPACK, Core ML, and Metal Performance Shaders (MPS).
    :image: _static/img/ExecuTorch-Logo-cropped.svg
-   :link: https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
+   :link: https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
    :tags: Edge
 
 .. customcarditem::
    :header: Building an ExecuTorch Android Demo App
    :card_description: Learn how to set up the ExecuTorch Android Demo App for image segmentation tasks using the DeepLab v3 model and XNNPACK FP32 backend.
    :image: _static/img/ExecuTorch-Logo-cropped.svg
-   :link: https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
+   :link: https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
    :tags: Edge
 
 .. customcarditem::
@@ -837,4 +844,4 @@ Additional Resources
    :maxdepth: 1
   :hidden:
 
-   unstable_index
+   prototype/prototype_index

intermediate_source/transformer_building_blocks.py

Lines changed: 6 additions & 6 deletions

@@ -62,7 +62,7 @@
 
 If you are only interested in performant attention score modifications, please
 check out the `FlexAttention blog <https://pytorch.org/blog/flexattention/>`_ that
-contains a `gym of masks <https://github.com/pytorch-labs/attention-gym>`_.
+contains a `gym of masks <https://github.com/meta-pytorch/attention-gym>`_.
 
 """
 
@@ -71,7 +71,7 @@
 # ===============================
 # First, we will briefly introduce the four technologies mentioned in the introduction
 #
-# * `torch.nested <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_
+# * `torch.nested <https://pytorch.org/tutorials/unstable/nestedtensor.html>`_
 #
 # Nested tensors generalize the shape of regular dense tensors, allowing for
 # representation of ragged-sized data with the same tensor UX. In the context of
@@ -157,7 +157,7 @@
 # skipped, performance and memory usage improve.
 #
 # We'll demonstrate the above by building upon the ``MultiheadAttention`` layer in the
-# `Nested Tensor tutorial <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_
+# `Nested Tensor tutorial <https://pytorch.org/tutorials/unstable/nestedtensor.html>`_
 # and comparing it to the ``nn.MultiheadAttention`` layer.
 
 import torch
@@ -675,7 +675,7 @@ def benchmark(func, *args, **kwargs):
 # of the ``MultiheadAttention`` layer that allows for arbitrary modifications
 # to the attention score. The example below takes the ``alibi_mod``
 # that implements `ALiBi <https://arxiv.org/abs/2108.12409>`_ from
-# `attention gym <https://github.com/pytorch-labs/attention-gym>`_ and uses it
+# `attention gym <https://github.com/meta-pytorch/attention-gym>`_ and uses it
 # with nested input tensors.
 
 from torch.nn.attention.flex_attention import flex_attention
@@ -892,8 +892,8 @@ def forward(self, x):
 # etc. Further, there are several good examples of using various performant building blocks to
 # implement various transformer architectures. Some examples include
 #
-# * `gpt-fast <https://github.com/pytorch-labs/gpt-fast>`_
-# * `segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast>`_
+# * `gpt-fast <https://github.com/meta-pytorch/gpt-fast>`_
+# * `segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast>`_
 # * `lucidrains implementation of NaViT with nested tensors <https://github.com/lucidrains/vit-pytorch/blob/73199ab486e0fad9eced2e3350a11681db08b61b/vit_pytorch/na_vit_nested_tensor.py>`_
 # * `torchtune's implementation of VisionTransformer <https://github.com/pytorch/torchtune/blob/a8a64ec6a99a6ea2be4fdaf0cd5797b03a2567cf/torchtune/modules/vision_transformer.py#L16>`_
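For readers who want to try the ``score_mod`` idea from this file without pulling in attention-gym, here is a minimal, self-contained ALiBi-style sketch on dense tensors. The slope schedule and shapes are illustrative assumptions, not the tutorial's actual ``alibi_mod``:

# Sketch: an ALiBi-style score modification with flex_attention (eager mode).
import torch
from torch.nn.attention.flex_attention import flex_attention

B, H, S, D = 2, 8, 128, 64  # batch, heads, sequence length, head dim
q = torch.randn(B, H, S, D)
k = torch.randn(B, H, S, D)
v = torch.randn(B, H, S, D)

# One slope per head, decaying geometrically as in the ALiBi paper.
slopes = 2.0 ** (-8.0 * torch.arange(1, H + 1) / H)

def alibi_score_mod(score, b, h, q_idx, kv_idx):
    # Penalize attention scores in proportion to query/key distance,
    # with a per-head slope; this biases heads toward nearby context.
    return score - slopes[h] * (q_idx - kv_idx).abs()

out = flex_attention(q, k, v, score_mod=alibi_score_mod)
print(out.shape)  # torch.Size([2, 8, 128, 64])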
