
Commit da1e36c

alanhdu authored and lexierule committed

Fix DistribType for ddp_cpu (spawn) (#7492)

1 parent: 2e226d9

File tree

4 files changed (+33, −16 lines):

- CHANGELOG.md
- pytorch_lightning/trainer/connectors/accelerator_connector.py
- tests/accelerators/test_accelerator_connector.py
- tests/trainer/test_trainer.py

CHANGELOG.md

Lines changed: 27 additions & 0 deletions

@@ -42,6 +42,33 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Removed
 
+- Prune deprecated classif. metrics from `pytorch_lightning.metrics.functional.classification` ([7499](https://github.com/PyTorchLightning/pytorch-lightning/pull/7499))
+
+
+- Removed deprecated data parallel classes `LightningDataParallel` and `LightningDistributedDataParallel` from `pytorch_lightning.overrides.data_parallel` ([7510](https://github.com/PyTorchLightning/pytorch-lightning/pull/7510))
+
+
+- Removed deprecated trainer attributes - `get_model` and `accelerator_backend` ([7502](https://github.com/PyTorchLightning/pytorch-lightning/pull/7502))
+
+
+- Removed deprecated utils modules `model_utils`, `warning_utils`, `xla_device_utils` and partially `argparse_utils` ([7503](https://github.com/PyTorchLightning/pytorch-lightning/pull/7503))
+
+
+- Removed deprecated trainer attributes - `on_cpu`, `on_tpu`, `use_tpu`, `on_gpu`, `use_dp`, `use_ddp`, `use_ddp2`, `use_horovod`, `use_single_gpu` ([#7501](https://github.com/PyTorchLightning/pytorch-lightning/pull/7501))
+
+
+### Fixed
+
+
+- Fixed parsing of multiple training dataloaders ([#7433](https://github.com/PyTorchLightning/pytorch-lightning/pull/7433))
+
+
+- Fixed recursive passing of `wrong_type` keyword argument in `pytorch_lightning.utilities.apply_to_collection` ([#7433](https://github.com/PyTorchLightning/pytorch-lightning/pull/7433))
+
+
+- Fixed setting correct `DistribType` for `ddp_cpu` (spawn) backend ([#7492](https://github.com/PyTorchLightning/pytorch-lightning/pull/7492))
+
+
 ## [1.3.1] - 2021-05-11
 
 ### Fixed
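Among the quoted fixes, `pytorch_lightning.utilities.apply_to_collection` is the easiest to show in isolation: it recursively applies a function to every element of a (possibly nested) collection that matches a given dtype, and the #7433 fix ensures the wrong-type filter keyword survives the recursive calls. A minimal sketch of the call shape — the sample batch and the doubling function are illustrative, not from this commit:

```python
import torch

from pytorch_lightning.utilities import apply_to_collection

# Recursively double every tensor in a nested collection;
# non-tensor leaves (the string) pass through untouched.
batch = {"x": torch.ones(2), "meta": {"y": torch.zeros(3), "name": "sample"}}
doubled = apply_to_collection(batch, dtype=torch.Tensor, function=lambda t: t * 2)
```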

pytorch_lightning/trainer/connectors/accelerator_connector.py

Lines changed: 1 addition & 1 deletion

@@ -522,7 +522,7 @@ def set_distributed_mode(self, distributed_backend: Optional[str] = None):
 
         # special case with DDP on CPUs
         if self.distributed_backend == "ddp_cpu":
-            self._distrib_type = DistributedType.DDP
+            self._distrib_type = DistributedType.DDP_SPAWN
             if self.num_gpus > 0:
                 rank_zero_warn(
                     'You requested one or more GPUs, but set the backend to `ddp_cpu`. Training will not use GPUs.'
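This one-liner is the substance of the commit: `ddp_cpu` launches its workers by spawning processes on a single machine, so the connector should classify it as `DDP_SPAWN` rather than `DDP` (which expects externally launched processes). A sketch of observing the corrected mapping; the `accelerator_connector` attribute path is a 1.3-era Trainer internal and is assumed here rather than taken from this diff:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.utilities import DistributedType

# After the fix, the CPU-spawn backend reports the spawn distrib type.
trainer = Trainer(accelerator="ddp_cpu", num_processes=2)
assert trainer.accelerator_connector._distrib_type == DistributedType.DDP_SPAWN
```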

tests/accelerators/test_accelerator_connector.py

Lines changed: 5 additions & 3 deletions

@@ -437,13 +437,15 @@ def test_ipython_incompatible_backend_error(*_):
     with pytest.raises(MisconfigurationException, match="backend ddp is not compatible"):
         Trainer(accelerator="ddp", gpus=2)
 
-    with pytest.raises(MisconfigurationException, match="backend ddp is not compatible"):
-        Trainer(accelerator="ddp_cpu", num_processes=2)
-
     with pytest.raises(MisconfigurationException, match="backend ddp2 is not compatible"):
         Trainer(accelerator="ddp2", gpus=2)
 
 
+@mock.patch("pytorch_lightning.utilities._IS_INTERACTIVE", return_value=True)
+def test_ipython_compatible_backend(*_):
+    Trainer(accelerator="ddp_cpu", num_processes=2)
+
+
 @pytest.mark.parametrize(
     ["accelerator", "plugin"],
     [('ddp_spawn', 'ddp_sharded'), (None, 'ddp_sharded')],
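Since `ddp_cpu` now maps to the spawn backend, it moves from the "incompatible in IPython" cases to a test asserting it constructs cleanly. The new test pivots on `mock.patch` used as a decorator: because no explicit `new` object is supplied, `patch` builds a `MagicMock` (here with `return_value=True`), installs it at the target, and passes it into the test function, where `*_` swallows it. A standalone illustration of that pattern against the same flag the test patches:

```python
from unittest import mock

import pytorch_lightning.utilities

@mock.patch("pytorch_lightning.utilities._IS_INTERACTIVE", return_value=True)
def check_flag(*_):
    # While patched, the module attribute is a (truthy) MagicMock,
    # so interactive-environment checks take the notebook code path.
    print(pytorch_lightning.utilities._IS_INTERACTIVE)

check_flag()
```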

tests/trainer/test_trainer.py

Lines changed: 0 additions & 12 deletions

@@ -1140,18 +1140,6 @@ def test_num_sanity_val_steps_neg_one(tmpdir, limit_val_batches):
             num_processes=1,
         ),
     ),
-    (
-        dict(accelerator="dp", gpus=None),
-        dict(
-            use_dp=False,
-            use_ddp=False,
-            use_ddp2=False,
-            num_gpus=0,
-            on_gpu=False,
-            use_single_gpu=False,
-            num_processes=1,
-        ),
-    ),
     (
         dict(accelerator="ddp", gpus=None),
         dict(
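The deleted parametrization asserted the boolean convenience flags (`use_dp`, `use_ddp`, `use_ddp2`, `on_gpu`, `use_single_gpu`) that #7501 removed from the Trainer; those flags were redundant views of a pair of enums. A sketch of reading the enum state directly — the `accelerator_connector._distrib_type` / `_device_type` attribute paths are 1.3-era internals and an assumption, not part of this diff:

```python
from pytorch_lightning import Trainer

trainer = Trainer(accelerator="dp", gpus=None)
# The removed use_* / on_* booleans collapse into two enums on the
# connector: one for the distributed mode, one for the device type.
print(trainer.accelerator_connector._distrib_type)
print(trainer.accelerator_connector._device_type)
```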
