Summary:
Pull Request resolved: #875
# Context
Users cannot disable prefetch in auto unit
# This diff
Adds an `enable_prefetch` flag to the auto unit, which can be used to disable prefetching if needed.
Reviewed By: galrotem
Differential Revision: D59980065
fbshipit-source-id: 2a2f2f802b8d084d495a961839773f89c8d82022
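As a usage sketch (not part of the PR itself), the new flag could be passed when constructing an `AutoPredictUnit`; the module and flag value below are illustrative, and omitting the argument presumably keeps the previous always-prefetch behavior:

```python
# Hypothetical usage of the new flag; everything except `AutoPredictUnit` and
# `enable_prefetch` is illustrative.
import torch
from torchtnt.framework.auto_unit import AutoPredictUnit

module = torch.nn.Linear(16, 4)

# Disable device prefetching for this unit; leave the argument out to keep the
# default (prefetch enabled) behavior.
predict_unit = AutoPredictUnit(module=module, enable_prefetch=False)
```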
AutoPredictUnit is a convenience for users who are running inference and would like to have certain features handled for them, such as:
@@ -325,6 +334,7 @@ def __init__(
             precision=precision,
             torch_compile_params=torch_compile_params,
             detect_anomaly=detect_anomaly,
+            enable_prefetch=enable_prefetch,
         )
         self.module: torch.nn.Module = prepare_module(
             module,
@@ -435,9 +445,10 @@ class AutoUnit(
         training: if True, the optimizer and optionally LR scheduler will be created after the class is initialized.
         enable_compiled_autograd: if True, `compiled_autograd` will be used to compile the backward, this is an experimental flag.
         loss_backward_retain_graph: If ``None`` or ``False``, the graph used to compute
-            the grads will be freed during loss backward pass. Note that in nearly all cases setting
-            this option to True is not needed and often can be worked around
-            in a much more efficient way.
+            the grads will be freed during loss backward pass. Note that in nearly all cases setting
+            this option to True is not needed and often can be worked around
+            in a much more efficient way.
+        enable_prefetch: if True, the data will be prefetched to the device before the next batch is loaded

     Note:
         Certain strategies, like :class:`~torchtnt.utils.prepare_module.FSDPStrategy` also support mixed precision as an argument, so can be configured through that class as well.
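The same flag is documented on `AutoUnit` itself. Below is a minimal sketch of a subclass that opts out of prefetching; the subclass, its loss computation, and the optimizer choice are assumptions for illustration, only the `enable_prefetch` argument comes from this diff:

```python
# Minimal AutoUnit subclass sketch; everything except `enable_prefetch` is illustrative.
from typing import Optional, Tuple

import torch
from torchtnt.framework.auto_unit import AutoUnit
from torchtnt.framework.state import State

Batch = Tuple[torch.Tensor, torch.Tensor]


class MyTrainUnit(AutoUnit[Batch]):
    def __init__(self, module: torch.nn.Module) -> None:
        # Pass the new flag through to AutoUnit to turn off data prefetching.
        super().__init__(module=module, enable_prefetch=False)

    def compute_loss(self, state: State, data: Batch) -> Tuple[torch.Tensor, torch.Tensor]:
        inputs, targets = data
        outputs = self.module(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, targets)
        return loss, outputs

    def configure_optimizers_and_lr_scheduler(
        self, module: torch.nn.Module
    ) -> Tuple[torch.optim.Optimizer, Optional[torch.optim.lr_scheduler.LRScheduler]]:
        return torch.optim.SGD(module.parameters(), lr=0.1), None
```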