
Commit 753f05a

wz337 authored and facebook-github-bot committed
Remove composable API's fully_shard from torchtnt example and test (#944)
Summary:
Pull Request resolved: #944

The fully_shard name is now used by FSDP2 (torch.distributed._composable.fsdp.fully_shard), and the Composable API's fully_shard (torch.distributed._composable.fully_shard) is being deprecated. Therefore, we want to remove torch.distributed._composable.fully_shard from torchtnt as well.

Deprecation message from PyTorch: https://github.com/pytorch/pytorch/blob/main/torch/distributed/_composable/fully_shard.py#L41-L48

Reviewed By: fegin

Differential Revision: D65702749

fbshipit-source-id: a755fe4f0c7800184d62d958466445e313ceb796
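For context, a minimal sketch of the migration shown in the diff below: instead of applying the deprecated composable wrapper, the module is wrapped with FSDP directly. The single-rank process-group setup and device choice here are illustrative assumptions only; torchtnt's distributed test harness performs the real setup.

# Sketch only: assumes a single-GPU machine with NCCL available.
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("nccl", rank=0, world_size=1)
torch.cuda.set_device(0)
device = torch.device("cuda:0")

# Before (deprecated composable API, removed by this commit):
#   from torch.distributed._composable import fully_shard
#   model = torch.nn.Linear(1, 1, device=device)
#   fully_shard(model)

# After: wrap the module with FSDP directly, as the updated test does.
model = torch.nn.Linear(1, 1, device=device)
model = FSDP(model)

dist.destroy_process_group()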
1 parent 72df3db commit 753f05a

File tree

1 file changed: +1 -3 lines changed

tests/utils/test_prepare_module_gpu.py

Lines changed: 1 addition & 3 deletions
@@ -9,8 +9,6 @@
 import unittest
 
 import torch
-
-from torch.distributed._composable import fully_shard
 from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
 from torch.distributed.fsdp.fully_sharded_data_parallel import MixedPrecision
 from torch.nn.parallel import DistributedDataParallel as DDP

@@ -93,7 +91,7 @@ def _test_is_fsdp_module() -> None:
         model = FSDP(torch.nn.Linear(1, 1, device=device))
         assert _is_fsdp_module(model)
         model = torch.nn.Linear(1, 1, device=device)
-        fully_shard(model)
+        model = FSDP(model)
         assert _is_fsdp_module(model)
 
     @skip_if_not_distributed
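As the summary notes, the fully_shard name now belongs to FSDP2. For reference, a hedged sketch of that successor API on a sufficiently recent PyTorch build; the private module path comes from the summary above and may change, and the single-rank setup is again only illustrative.

# Sketch only: assumes PyTorch with FSDP2 (torch.distributed._composable.fsdp)
# and a single-GPU NCCL process group.
import os

import torch
import torch.distributed as dist
from torch.distributed._composable.fsdp import fully_shard

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("nccl", rank=0, world_size=1)
torch.cuda.set_device(0)

# FSDP2's fully_shard shards the module's parameters in place (as DTensors)
# rather than returning a wrapper class.
model = torch.nn.Linear(1, 1, device="cuda")
fully_shard(model)

dist.destroy_process_group()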
