
Commit 9dc7552

Replace 'pytorch-labs' with 'meta-pytorch' in 4 files
1 parent 5b75a88 commit 9dc7552

1 file changed: +4 -4 lines


intermediate_source/transformer_building_blocks.py

Lines changed: 4 additions & 4 deletions
@@ -62,7 +62,7 @@
 
 If you are only interested in performant attention score modifications, please
 check out the `FlexAttention blog <https://pytorch.org/blog/flexattention/>`_ that
-contains a `gym of masks <https://github.com/pytorch-labs/attention-gym>`_.
+contains a `gym of masks <https://github.com/meta-pytorch/attention-gym>`_.
 
 """
 

@@ -675,7 +675,7 @@ def benchmark(func, *args, **kwargs):
 # of the ``MultiheadAttention`` layer that allows for arbitrary modifications
 # to the attention score. The example below takes the ``alibi_mod``
 # that implements `ALiBi <https://arxiv.org/abs/2108.12409>`_ from
-# `attention gym <https://github.com/pytorch-labs/attention-gym>`_ and uses it
+# `attention gym <https://github.com/meta-pytorch/attention-gym>`_ and uses it
 # with nested input tensors.
 
 from torch.nn.attention.flex_attention import flex_attention
@@ -892,8 +892,8 @@ def forward(self, x):
 # etc. Further, there are several good examples of using various performant building blocks to
 # implement various transformer architectures. Some examples include
 #
-# * `gpt-fast <https://github.com/pytorch-labs/gpt-fast>`_
-# * `segment-anything-fast <https://github.com/pytorch-labs/segment-anything-fast>`_
+# * `gpt-fast <https://github.com/meta-pytorch/gpt-fast>`_
+# * `segment-anything-fast <https://github.com/meta-pytorch/segment-anything-fast>`_
 # * `lucidrains implementation of NaViT with nested tensors <https://github.com/lucidrains/vit-pytorch/blob/73199ab486e0fad9eced2e3350a11681db08b61b/vit_pytorch/na_vit_nested_tensor.py>`_
 # * `torchtune's implementation of VisionTransformer <https://github.com/pytorch/torchtune/blob/a8a64ec6a99a6ea2be4fdaf0cd5797b03a2567cf/torchtune/modules/vision_transformer.py#L16>`_
 
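For reference, the ``alibi_mod`` pattern that the second hunk's context describes looks roughly like the sketch below. This is a minimal illustration, not the tutorial's or attention-gym's exact code: the tensor shapes, variable names, and the geometric per-head slope scheme are assumptions made for this example, and it uses dense rather than nested tensors for simplicity.

import torch
from torch.nn.attention.flex_attention import flex_attention

# Illustrative shapes (assumed for this sketch): batch, heads, sequence, head dim.
B, H, S, D = 2, 8, 128, 64
query = torch.randn(B, H, S, D)
key = torch.randn(B, H, S, D)
value = torch.randn(B, H, S, D)

def alibi_score_mod(score, b, h, q_idx, kv_idx):
    # ALiBi adds a linear bias proportional to the query/key distance,
    # scaled per head; 2**(-8 * (h + 1) / H) is one common slope scheme.
    scale = torch.exp2(-8.0 * (h + 1) / H)
    return score + (kv_idx - q_idx) * scale

# score_mod is applied pointwise to each attention score before softmax.
out = flex_attention(query, key, value, score_mod=alibi_score_mod)

The tutorial itself feeds nested (jagged) tensors through the same ``flex_attention`` call; the dense tensors above are just the simplest runnable stand-in.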
