
Commit cbf212e

malfet authored and pytorchmergebot committed
[CI] Fix doctest job if build without distributed (pytorch#165449)
Guard the test with `TORCH_DOCTEST_DISTRIBUTED` and set it to true in run_test.py, so the doctest job passes for PyTorch builds without distributed support. This is a regression introduced by pytorch#164806.

Fixes pytorch#165343

Pull Request resolved: pytorch#165449
Approved by: https://github.com/seemethere
1 parent d18e068 commit cbf212e

File tree

2 files changed: +4 −0 lines changed

test/run_test.py

Lines changed: 3 additions & 0 deletions

@@ -1123,6 +1123,9 @@ def run_doctests(test_module, test_directory, options):
     if torch.mps.is_available():
         os.environ["TORCH_DOCTEST_MPS"] = "1"
 
+    if torch.distributed.is_available():
+        os.environ["TORCH_DOCTEST_DISTRIBUTED"] = "1"
+
     if 0:
         # TODO: could try to enable some of these
         os.environ["TORCH_DOCTEST_QUANTIZED_DYNAMIC"] = "1"
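The guard added to run_doctests follows a simple pattern: a `TORCH_DOCTEST_*` flag is exported only when the corresponding feature is compiled into the build, so doctests gated on that flag are skipped otherwise. A minimal sketch of that logic, using a hypothetical helper name (`collect_doctest_flags`) rather than the actual run_test.py code:

```python
import os


def collect_doctest_flags(distributed_available: bool) -> dict:
    """Hypothetical helper sketching the gating pattern in run_doctests():
    export a TORCH_DOCTEST_* flag only when the feature is available."""
    flags = {}
    if distributed_available:
        # Mirrors: os.environ["TORCH_DOCTEST_DISTRIBUTED"] = "1"
        flags["TORCH_DOCTEST_DISTRIBUTED"] = "1"
    return flags


print(collect_doctest_flags(True))   # {'TORCH_DOCTEST_DISTRIBUTED': '1'}
print(collect_doctest_flags(False))  # {}
```

On a build without distributed support, the variable is never set, and any doctest guarded with `+REQUIRES(env:TORCH_DOCTEST_DISTRIBUTED)` is skipped instead of failing.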

torch/distributed/tensor/_dtensor_spec.py

Lines changed: 1 addition & 0 deletions

@@ -27,6 +27,7 @@ class ShardOrderEntry(NamedTuple):
         second, etc. This tuple is guaranteed to be non-empty.
 
     Examples:
+        >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_DISTRIBUTED)
         >>> # Tensor dim 1 sharded across mesh dim 2, then mesh dim 0
         >>> ShardOrderEntry(tensor_dim=1, mesh_dims=(2, 0))
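The `# xdoctest: +REQUIRES(env:TORCH_DOCTEST_DISTRIBUTED)` directive tells xdoctest to run the example only when that environment variable is set. A rough stdlib-only approximation of the check (an assumption for illustration, not xdoctest's actual implementation):

```python
import os


def requires_env(name: str) -> bool:
    # Approximation of the +REQUIRES(env:NAME) directive: treat the
    # requirement as satisfied when the variable is set to a truthy value.
    return bool(os.environ.get(name, ""))


os.environ.pop("TORCH_DOCTEST_DISTRIBUTED", None)
print(requires_env("TORCH_DOCTEST_DISTRIBUTED"))  # False -> example skipped

os.environ["TORCH_DOCTEST_DISTRIBUTED"] = "1"
print(requires_env("TORCH_DOCTEST_DISTRIBUTED"))  # True -> example runs
```

This is why setting the variable in run_test.py only for distributed-capable builds is enough to fix the doctest job on both build variants.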

0 commit comments