doc/api/training/smp_versions/v1.2.0 — 1 file changed, +3 −3 lines

@@ -140,16 +140,16 @@ This API document assumes you use the following import statements in your training script:
computation. ``bucket_cap_mb`` controls the bucket size in megabytes (MB).

- ``trace_memory_usage`` (default: False): When set to True, the library
  attempts to measure memory usage per module during tracing. If this is
  disabled, memory usage is estimated from the sizes of the tensors the
  module returns.

- ``broadcast_buffers`` (default: True): Flag to be used with ``ddp=True``.
  This parameter is forwarded to the underlying ``DistributedDataParallel``
  wrapper. Please see `broadcast_buffers <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`__.

- ``gradient_as_bucket_view`` (PyTorch 1.7 only; default: False): To be
  used with ``ddp=True``. This parameter is forwarded to the underlying
  ``DistributedDataParallel`` wrapper. Please see `gradient_as_bucket_view <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`__.
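These are keyword arguments to the model wrapper this page documents. Below is
a minimal sketch (not part of the original file) of how they might be passed,
assuming the library's usual ``import smdistributed.modelparallel.torch as smp``
convention referenced in the hunk header above; the model definition and flag
values are illustrative placeholders.

.. code:: python

   import torch.nn as nn
   import smdistributed.modelparallel.torch as smp

   smp.init()  # initialize the library before wrapping the model

   # Placeholder model; any torch.nn.Module works here.
   model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

   # ddp=True enables the DistributedDataParallel wrapper across
   # data-parallel ranks; the DDP-related flags below are forwarded to it.
   model = smp.DistributedModel(
       model,
       trace_memory_usage=True,        # measure per-module memory during tracing
       bucket_cap_mb=25,               # gradient bucket size in MB
       ddp=True,
       broadcast_buffers=True,         # forwarded to DistributedDataParallel
       gradient_as_bucket_view=False,  # PyTorch 1.7 only; forwarded to DDP
   )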