diff --git a/docathon-leaderboard.md b/docathon-leaderboard.md
index cebf297d0bf..de12a13cca8 100644
--- a/docathon-leaderboard.md
+++ b/docathon-leaderboard.md
@@ -60,7 +60,7 @@ were not merged or issues that have been closed without a merged PR.
| kiszk | 20 | https://github.com/pytorch/pytorch/pull/128337, https://github.com/pytorch/pytorch/pull/128123, https://github.com/pytorch/pytorch/pull/128022, https://github.com/pytorch/pytorch/pull/128312 |
| loganthomas | 19 | https://github.com/pytorch/pytorch/pull/128676, https://github.com/pytorch/pytorch/pull/128192, https://github.com/pytorch/pytorch/pull/128189, https://github.com/pytorch/tutorials/pull/2922, https://github.com/pytorch/tutorials/pull/2910, https://github.com/pytorch/xla/pull/7195 |
| ignaciobartol | 17 | https://github.com/pytorch/pytorch/pull/128741, https://github.com/pytorch/pytorch/pull/128135, https://github.com/pytorch/pytorch/pull/127938, https://github.com/pytorch/tutorials/pull/2936 |
-| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/pytorch-labs/torchfix/pull/59 |
+| arunppsg | 17 | https://github.com/pytorch/pytorch/pull/128391, https://github.com/pytorch/pytorch/pull/128021, https://github.com/pytorch/pytorch/pull/128018, https://github.com/meta-pytorch/torchfix/pull/59 |
| alperenunlu | 17 | https://github.com/pytorch/tutorials/pull/2934, https://github.com/pytorch/tutorials/pull/2909, https://github.com/pytorch/pytorch/pull/104043 |
| anandptl84 | 10 | https://github.com/pytorch/pytorch/pull/128196, https://github.com/pytorch/pytorch/pull/128098 |
| GdoongMathew | 10 | https://github.com/pytorch/pytorch/pull/128136, https://github.com/pytorch/pytorch/pull/128051 |
diff --git a/index.rst b/index.rst
index 38fdc16a041..603b42e4224 100644
--- a/index.rst
+++ b/index.rst
@@ -700,14 +700,14 @@ Welcome to PyTorch Tutorials
:header: Building an ExecuTorch iOS Demo App
:card_description: Explore how to set up the ExecuTorch iOS Demo App, which uses the MobileNet v3 model to process live camera images leveraging three different backends: XNNPACK, Core ML, and Metal Performance Shaders (MPS).
:image: _static/img/ExecuTorch-Logo-cropped.svg
- :link: https://github.com/pytorch-labs/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
+ :link: https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo
:tags: Edge
.. customcarditem::
:header: Building an ExecuTorch Android Demo App
:card_description: Learn how to set up the ExecuTorch Android Demo App for image segmentation tasks using the DeepLab v3 model and XNNPACK FP32 backend.
:image: _static/img/ExecuTorch-Logo-cropped.svg
- :link: https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
+ :link: https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app
:tags: Edge
.. customcarditem::
diff --git a/intermediate_source/transformer_building_blocks.py b/intermediate_source/transformer_building_blocks.py
index 67860b85b79..8b887070048 100644
--- a/intermediate_source/transformer_building_blocks.py
+++ b/intermediate_source/transformer_building_blocks.py
@@ -62,7 +62,7 @@
If you are only interested in performant attention score modifications, please
check out the `FlexAttention blog <https://pytorch.org/blog/flexattention/>`_ that
-contains a `gym of masks <https://github.com/pytorch-labs/attention-gym>`_.
+contains a `gym of masks <https://github.com/meta-pytorch/attention-gym>`_.
"""
@@ -675,7 +675,7 @@ def benchmark(func, *args, **kwargs):
# of the ``MultiheadAttention`` layer that allows for arbitrary modifications
# to the attention score. The example below takes the ``alibi_mod``
# that implements `ALiBi <https://arxiv.org/abs/2108.12409>`_ from
-# `attention gym <https://github.com/pytorch-labs/attention-gym>`_ and uses it
+# `attention gym <https://github.com/meta-pytorch/attention-gym>`_ and uses it
# with nested input tensors.
from torch.nn.attention.flex_attention import flex_attention
@@ -892,8 +892,8 @@ def forward(self, x):
# etc. Further, there are several good examples of using various performant building blocks to
# implement various transformer architectures. Some examples include
#
-# * `gpt-fast <https://github.com/pytorch-labs/gpt-fast>`_
-# * `segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast>`_
+# * `gpt-fast <https://github.com/meta-pytorch/gpt-fast>`_
+# * `segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast>`_
# * `lucidrains implementation of NaViT with nested tensors `_
# * `torchtune's implementation of VisionTransformer `_
diff --git a/unstable_source/gpu_quantization_torchao_tutorial.py b/unstable_source/gpu_quantization_torchao_tutorial.py
index 2cea60b39d3..874f3227636 100644
--- a/unstable_source/gpu_quantization_torchao_tutorial.py
+++ b/unstable_source/gpu_quantization_torchao_tutorial.py
@@ -7,7 +7,7 @@
In this tutorial, we will walk you through the quantization and optimization
of the popular `segment anything model <https://github.com/facebookresearch/segment-anything>`_. These
steps will mimic some of those taken to develop the
-`segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast>`_
+`segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast>`_
repo. This step-by-step guide demonstrates how you can
apply these techniques to speed up your own models, especially those
that use transformers. To that end, we will focus on widely applicable