
Commit 8886a13

Merge release 2.0.2 (#573)
2 parents 8919406 + 334271d → commit 8886a13

File tree: 2 files changed (+2 −6 lines)

configs/trainer/ddp.yaml (1 addition, 5 deletions)

```diff
@@ -1,11 +1,7 @@
 defaults:
   - default.yaml

-# use "ddp_spawn" instead of "ddp",
-# it's slower but normal "ddp" currently doesn't work ideally with hydra
-# https://github.com/facebookresearch/hydra/issues/2070
-# https://pytorch-lightning.readthedocs.io/en/latest/accelerators/gpu_intermediate.html#distributed-data-parallel-spawn
-strategy: ddp_spawn
+strategy: ddp

 accelerator: gpu
 devices: 4
```
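For reference, after this commit the full `configs/trainer/ddp.yaml` reads as follows (reconstructed from the hunk above, assuming the template's usual two-space YAML indentation):

```yaml
defaults:
  - default.yaml

strategy: ddp

accelerator: gpu
devices: 4
```

The removed comments had recommended `ddp_spawn` as a workaround for a Hydra incompatibility (facebookresearch/hydra#2070); dropping them and switching to plain `ddp` suggests the workaround is no longer needed in this release.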

src/models/mnist_module.py (1 addition, 1 deletion)

```diff
@@ -97,7 +97,7 @@ def on_validation_epoch_end(self):
         self.val_acc_best(acc)  # update best so far val acc
         # log `val_acc_best` as a value through `.compute()` method, instead of as a metric object
         # otherwise metric would be reset by lightning after each epoch
-        self.log("val/acc_best", self.val_acc_best.compute(), prog_bar=True)
+        self.log("val/acc_best", self.val_acc_best.compute(), sync_dist=True, prog_bar=True)

     def test_step(self, batch: Any, batch_idx: int):
         loss, preds, targets = self.model_step(batch)
```
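The only code change adds `sync_dist=True` to the `self.log` call. Under multi-GPU DDP, each rank computes its own copy of the metric; `sync_dist=True` tells Lightning to reduce the logged value across ranks (mean by default) before recording it, so all processes log the same number. A minimal sketch of that reduction, using a plain Python function rather than Lightning's actual distributed implementation:

```python
def sync_dist_mean(per_rank_values):
    """Simulate the default mean reduction applied when sync_dist=True:
    every rank ends up logging the same averaged value."""
    return sum(per_rank_values) / len(per_rank_values)

# Hypothetical example: four DDP ranks each computed a slightly
# different best validation accuracy on their shard of the data.
per_rank_acc = [0.96, 0.98, 0.97, 0.97]
print(sync_dist_mean(per_rank_acc))  # every rank would log ~0.97
```

Without the flag, each process logs only its local value, so `val/acc_best` could differ between ranks, which is presumably why the release adds it alongside the `ddp` strategy switch.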
