
Commit 4cbdff3

Merge branch 'main' into fix/ppo-mujoco-colab
2 parents: 1317311 + 3b23c0e

File tree

4 files changed (+10, -10 lines)


docathon-leaderboard.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ were not merged or issues that have been closed without a merged PR.
 | kiszk | 20 | https://github.com/pytorch/pytorch/pull/128337, https://github.com/pytorch/pytorch/pull/128123, https://github.com/pytorch/pytorch/pull/128022, https://github.com/pytorch/pytorch/pull/128312 |
 | loganthomas | 19 | https://github.com/pytorch/pytorch/pull/128676, https://github.com/pytorch/pytorch/pull/128192, https://github.com/pytorch/pytorch/pull/128189, https://github.com/pytorch/tutorials/pull/2922, https://github.com/pytorch/tutorials/pull/2910, https://github.com/pytorch/xla/pull/7195 |
 | ignaciobartol | 17 | https://github.com/pytorch/pytorch/pull/128741, https://github.com/pytorch/pytorch/pull/128135, https://github.com/pytorch/pytorch/pull/127938, https://github.com/pytorch/tutorials/pull/2936 |
-| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/pytorch-labs/torchfix/pull/59 |
+| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/meta-pytorch/torchfix/pull/59 |
 | alperenunlu | 17 | https://github.com/pytorch/tutorials/pull/2934, https://github.com/pytorch/tutorials/pull/2909, https://github.com/pytorch/pytorch/pull/104043 |
 | anandptl84 | 10 | https://github.com/pytorch/pytorch/pull/128196, https://github.com/pytorch/pytorch/pull/128098 |
 | GdoongMathew | 10 | https://github.com/pytorch/pytorch/pull/128136, https://github.com/pytorch/pytorch/pull/128051 |

index.rst

Lines changed: 2 additions & 2 deletions
@@ -700,14 +700,14 @@ Welcome to PyTorch Tutorials
    :header: Building an ExecuTorch iOS Demo App
    :card_description: Explore how to set up the ExecuTorch iOS Demo App, which uses the MobileNet v3 model to process live camera images leveraging three different backends: XNNPACK, Core ML, and Metal Performance Shaders (MPS).
    :image: _static/img/ExecuTorch-Logo-cropped.svg
-   :link: https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
+   :link: https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
    :tags: Edge

 .. customcarditem::
    :header: Building an ExecuTorch Android Demo App
    :card_description: Learn how to set up the ExecuTorch Android Demo App for image segmentation tasks using the DeepLab v3 model and XNNPACK FP32 backend.
    :image: _static/img/ExecuTorch-Logo-cropped.svg
-   :link: https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
+   :link: https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
    :tags: Edge

 .. customcarditem::

intermediate_source/transformer_building_blocks.py

Lines changed: 6 additions & 6 deletions
@@ -62,7 +62,7 @@

 If you are only interested in performant attention score modifications, please
 check out the `FlexAttention blog <https://pytorch.org/blog/flexattention/>`_ that
-contains a `gym of masks <https://github.com/pytorch-labs/attention-gym>`_.
+contains a `gym of masks <https://github.com/meta-pytorch/attention-gym>`_.

 """

@@ -71,7 +71,7 @@
 # ===============================
 # First, we will briefly introduce the four technologies mentioned in the introduction
 #
-# * `torch.nested <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_
+# * `torch.nested <https://pytorch.org/tutorials/unstable/nestedtensor.html>`_
 #
 # Nested tensors generalize the shape of regular dense tensors, allowing for
 # representation of ragged-sized data with the same tensor UX. In the context of
@@ -157,7 +157,7 @@
 # skipped, performance and memory usage improve.
 #
 # We'll demonstrate the above by building upon the ``MultiheadAttention`` layer in the
-# `Nested Tensor tutorial <https://pytorch.org/tutorials/prototype/nestedtensor.html>`_
+# `Nested Tensor tutorial <https://pytorch.org/tutorials/unstable/nestedtensor.html>`_
 # and comparing it to the ``nn.MultiheadAttention`` layer.

 import torch
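
Both hunks above retarget links to the nested tensor tutorial. For reference, a minimal sketch of the jagged-layout API that tutorial covers (shapes and values here are illustrative; assumes a recent PyTorch with ``torch.jagged`` support):

import torch

# Two sequences of different lengths but the same embedding dim: ragged data.
a = torch.randn(3, 8)   # 3 tokens
b = torch.randn(5, 8)   # 5 tokens

# Pack them into a single nested tensor with the jagged layout (no padding).
nt = torch.nested.nested_tensor([a, b], layout=torch.jagged)
print(nt.shape)   # torch.Size([2, j1, 8]); j1 is the ragged dimension

# Many dense ops keep the same tensor UX, e.g. a projection over the last dim:
proj = torch.nn.Linear(8, 16)
out = proj(nt)    # still nested, with shape [2, j1, 16]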
@@ -675,7 +675,7 @@ def benchmark(func, *args, **kwargs):
 # of the ``MultiheadAttention`` layer that allows for arbitrary modifications
 # to the attention score. The example below takes the ``alibi_mod``
 # that implements `ALiBi <https://arxiv.org/abs/2108.12409>`_ from
-# `attention gym <https://github.com/pytorch-labs/attention-gym>`_ and uses it
+# `attention gym <https://github.com/meta-pytorch/attention-gym>`_ and uses it
 # with nested input tensors.

 from torch.nn.attention.flex_attention import flex_attention
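
The ``alibi_mod`` named in this hunk comes from the attention-gym repository linked above. A self-contained sketch of the same idea with ``flex_attention`` (the per-head slope formula is one common ALiBi parameterization, assumed here rather than copied from the tutorial):

import torch
from torch.nn.attention.flex_attention import flex_attention

B, H, S, D = 2, 8, 128, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

# Head-specific ALiBi slopes: a geometric sequence 2^(-8/H), 2^(-16/H), ...
slopes = torch.exp2(-8.0 * (torch.arange(H) + 1) / H)

def alibi_score_mod(score, b, h, q_idx, kv_idx):
    # Penalize each attention score linearly in query/key distance, per head.
    return score + slopes[h] * (kv_idx - q_idx)

out = flex_attention(q, k, v, score_mod=alibi_score_mod)  # (B, H, S, D)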
@@ -892,8 +892,8 @@ def forward(self, x):
 # etc. Further, there are several good examples of using various performant building blocks to
 # implement various transformer architectures. Some examples include
 #
-# * `gpt-fast <https://github.com/pytorch-labs/gpt-fast>`_
-# * `segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast>`_
+# * `gpt-fast <https://github.com/meta-pytorch/gpt-fast>`_
+# * `segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast>`_
 # * `lucidrains implementation of NaViT with nested tensors <https://github.com/lucidrains/vit-pytorch/blob/73199ab486e0fad9eced2e3350a11681db08b61b/vit_pytorch/na_vit_nested_tensor.py>`_
 # * `torchtune's implementation of VisionTransformer <https://github.com/pytorch/torchtune/blob/a8a64ec6a99a6ea2be4fdaf0cd5797b03a2567cf/torchtune/modules/vision_transformer.py#L16>`_

unstable_source/gpu_quantization_torchao_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 In this tutorial, we will walk you through the quantization and optimization
 of the popular `segment anything model <https://github.com/facebookresearch/segment-anything>`_. These
 steps will mimic some of those taken to develop the
-`segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast/blob/main/segment_anything_fast/modeling/image_encoder.py#L15>`_
+`segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast/blob/main/segment_anything_fast/modeling/image_encoder.py#L15>`_
 repo. This step-by-step guide demonstrates how you can
 apply these techniques to speed up your own models, especially those
 that use transformers. To that end, we will focus on widely applicable
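
The file touched here walks through torchao's dynamic int8 quantization of the SAM image encoder. A minimal sketch of that core step on a toy model (the ``int8_dynamic_activation_int8_weight`` spelling is an assumption; torchao has renamed these configs across releases):

import torch
from torchao.quantization import quantize_, int8_dynamic_activation_int8_weight

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

# Swap each Linear's weights in place for int8 dynamically quantized versions.
quantize_(model, int8_dynamic_activation_int8_weight())

with torch.no_grad():
    out = model(torch.randn(16, 1024))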
