docs/source-pytorch/accelerators/accelerator_prepare.rst (1 addition, 1 deletion)

@@ -78,7 +78,7 @@ Synchronize validation and test logging
 ***************************************

 When running in distributed mode, we have to ensure that the validation and test step logging calls are synchronized across processes.
-This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the validation and test step.
+This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the validation and test step. This will automatically average values across all processes.
 This ensures that each GPU worker has the same behaviour when tracking model checkpoints, which is important for later downstream tasks such as testing the best checkpoint across all workers.
 The ``sync_dist`` option can also be used in logging calls during the step methods, but be aware that this can lead to significant communication overhead and slow down your training.
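For context, a minimal sketch of what the updated guidance looks like in practice. The ``LitModel`` class, its layer sizes, and the metric name are illustrative assumptions, not part of this diff:

```python
import torch
import torch.nn.functional as F
import lightning.pytorch as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)  # hypothetical model

    def forward(self, x):
        return self.layer(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # sync_dist=True averages the logged value across all processes,
        # so every GPU worker tracks the same "val_loss" when checkpointing.
        self.log("val_loss", loss, sync_dist=True)
```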
docs/source-pytorch/extensions/logging.rst (1 addition, 1 deletion)

@@ -137,7 +137,7 @@ The :meth:`~lightning.pytorch.core.LightningModule.log` method has a few options
 * ``logger``: Logs to the logger like ``Tensorboard``, or any other custom logger passed to the :class:`~lightning.pytorch.trainer.trainer.Trainer` (Default: ``True``).
 * ``reduce_fx``: Reduction function over step values for end of epoch. Uses :func:`torch.mean` by default and is not applied when a :class:`torchmetrics.Metric` is logged.
 * ``enable_graph``: If True, will not auto detach the graph.
-* ``sync_dist``: If True, reduces the metric across devices. Use with care as this may lead to a significant communication overhead.
+* ``sync_dist``: If True, averages the metric across devices. Use with care as this may lead to a significant communication overhead.
 * ``sync_dist_group``: The DDP group to sync across.
 * ``add_dataloader_idx``: If True, appends the index of the current dataloader to the name (when using multiple dataloaders). If False, user needs to give unique names for each dataloader to not mix the values.
 * ``batch_size``: Current batch size used for accumulating logs logged with ``on_epoch=True``. This will be directly inferred from the loaded batch, but for some data structures you might need to explicitly provide it.
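A short sketch of how several of these ``log`` options combine in step methods. The ``LitClassifier`` module, its dimensions, and the metric names are illustrative assumptions, not taken from the diff:

```python
import torch
import torch.nn.functional as F
import lightning.pytorch as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)  # illustrative model

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # reduce_fx controls the epoch-end reduction of step values ("mean"
        # by default); batch_size is usually inferred from the batch but can
        # be passed explicitly for custom data structures.
        self.log("train_loss", loss, on_step=True, on_epoch=True,
                 reduce_fx="mean", batch_size=x.size(0))
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        acc = (self(x).argmax(dim=-1) == y).float().mean()
        # sync_dist=True averages the metric across devices; it adds
        # communication overhead, so use it sparingly in step methods.
        self.log("val_acc", acc, sync_dist=True)
```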