To set up a local development environment, we recommend using `uv`, which can be installed following their [instructions](https://docs.astral.sh/uv/getting-started/installation/).

-Once `uv` has been installed, begin by cloning the repository:
+Once `uv` has been installed, begin by cloning the forked repository:
docs/source-pytorch/accelerators/accelerator_prepare.rst (1 addition, 1 deletion)
@@ -78,7 +78,7 @@ Synchronize validation and test logging
***************************************

When running in distributed mode, we have to ensure that the validation and test step logging calls are synchronized across processes.
-This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the validation and test step.
+This is done by adding ``sync_dist=True`` to all ``self.log`` calls in the validation and test step. This will automatically average values across all processes.
This ensures that each GPU worker has the same behaviour when tracking model checkpoints, which is important for later downstream tasks such as testing the best checkpoint across all workers.
The ``sync_dist`` option can also be used in logging calls during the step methods, but be aware that this can lead to significant communication overhead and slow down your training.
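As a rough illustration of the behaviour this hunk documents, here is a minimal sketch of a `LightningModule` that logs with `sync_dist=True` in its validation and test steps. The module, metric names, and loss function are placeholders for illustration only and are not part of the changed docs.

```python
import torch
from torch import nn
import lightning.pytorch as pl


class LitClassifier(pl.LightningModule):
    """Toy module illustrating ``sync_dist=True`` in validation/test logging."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.layer(x), y)
        # sync_dist can be used here too, but syncing on every training step
        # adds communication overhead and can slow down training.
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.layer(x), y)
        # sync_dist=True averages the value across all processes, so every
        # GPU worker sees the same "val_loss" when tracking checkpoints.
        self.log("val_loss", loss, sync_dist=True)

    def test_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.layer(x), y)
        self.log("test_loss", loss, sync_dist=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```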