torchtnt/framework/auto_unit.py (24 additions, 9 deletions)
@@ -471,6 +471,7 @@ class AutoUnit(
             this option to True is not needed and often can be worked around
             in a much more efficient way.
         enable_prefetch: if True, the data will be prefetched to the device before the next batch is loaded
+        zero_grad_at_train_step_start: if True, the optimizer's gradients will be zeroed at the start of each train step, rather than at the end. Useful if you want to inspect/log the gradients via custom callback.

     Note:
         Certain strategies, like :class:`~torchtnt.utils.prepare_module.FSDPStrategy` also support mixed precision as an argument, so can be configured through that class as well.
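The motivation for the new flag is ordering: if gradients are zeroed at the *end* of a train step, any callback that runs after the step sees only zeros; zeroing at the *start* of the next step leaves the gradients intact for inspection. The following is a minimal, dependency-free sketch of that ordering (the `ToyOptimizer` class and `train_step` function are hypothetical stand-ins, not torchtnt APIs):

```python
class ToyOptimizer:
    """Stand-in for an optimizer that accumulates a gradient."""

    def __init__(self):
        self.grad = 0.0

    def zero_grad(self):
        self.grad = 0.0


def train_step(opt, zero_grad_at_start):
    # With the flag on, stale gradients from the previous step are
    # cleared here, at the start of the step.
    if zero_grad_at_start:
        opt.zero_grad()
    opt.grad = 2.5  # stand-in for backward() populating gradients
    # With the flag off (the default ordering), gradients are zeroed
    # before the step returns, so post-step callbacks see zeros.
    if not zero_grad_at_start:
        opt.zero_grad()


opt_a = ToyOptimizer()
train_step(opt_a, zero_grad_at_start=True)
print(opt_a.grad)  # 2.5 -- a post-step callback could log this

opt_b = ToyOptimizer()
train_step(opt_b, zero_grad_at_start=False)
print(opt_b.grad)  # 0.0 -- already zeroed before any callback runs
```

The cost of zeroing at the start is that gradient memory stays allocated between steps, which is why it is an opt-in flag rather than the default.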