torchtnt/framework/auto_unit.py
+10 −2 (10 additions & 2 deletions)
@@ -434,6 +434,10 @@ class AutoUnit(
         activation_checkpoint_params: params for enabling activation checkpointing
         training: if True, the optimizer and optionally LR scheduler will be created after the class is initialized.
         enable_compiled_autograd: if True, `compiled_autograd` will be used to compile the backward, this is an experimental flag.
+        loss_backward_retain_graph: If ``None`` or ``False``, the graph used to compute
+            the grads will be freed during the loss backward pass. Note that in nearly all cases setting
+            this option to True is not needed and often can be worked around
+            in a much more efficient way.

     Note:
         Certain strategies, like :class:`~torchtnt.utils.prepare_module.FSDPStrategy` also support mixed precision as an argument, so can be configured through that class as well.
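The new `loss_backward_retain_graph` flag mirrors the `retain_graph` argument of `torch.Tensor.backward`. Below is a minimal sketch, assuming the flag is accepted as a keyword argument by `AutoUnit.__init__` as the docstring above suggests; the `MyUnit` subclass, the `Linear` model, and the data shapes are hypothetical and not part of this PR.

```python
# Minimal sketch, assuming loss_backward_retain_graph is an AutoUnit.__init__
# keyword argument. MyUnit, the Linear model, and the batch shape are
# illustrative assumptions, not code from this PR.
from typing import Any, Optional, Tuple

import torch
from torchtnt.framework.auto_unit import AutoUnit
from torchtnt.framework.state import State

Batch = Tuple[torch.Tensor, torch.Tensor]


class MyUnit(AutoUnit[Batch]):
    def __init__(self, module: torch.nn.Module, **kwargs: Any) -> None:
        # Forward loss_backward_retain_graph (and any other AutoUnit options)
        # to the parent constructor.
        super().__init__(module=module, **kwargs)
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def compute_loss(self, state: State, data: Batch) -> Tuple[torch.Tensor, Any]:
        inputs, targets = data
        outputs = self.module(inputs)
        return self.loss_fn(outputs, targets), outputs

    def configure_optimizers_and_lr_scheduler(
        self, module: torch.nn.Module
    ) -> Tuple[torch.optim.Optimizer, Optional[Any]]:
        return torch.optim.SGD(module.parameters(), lr=0.01), None


# The default (None) frees the autograd graph during the backward pass;
# passing True keeps it alive, which is rarely needed and costs memory.
unit = MyUnit(
    module=torch.nn.Linear(8, 2),
    loss_backward_retain_graph=None,
)
```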