Commit b9e7c1f

JKSenthil authored and facebook-github-bot committed
enable storage optimization by default (#921)
Summary:
Pull Request resolved: #921

Let's enable storage optimizations by default in TorchTNT, in preparation for the upcoming DCP guidance post. We don't need to change anything in Mitra, since the default knob option there sets it to False:
https://www.internalfb.com/code/fbsource/[99acb2db7d2b]/fbcode/content_understanding/framework/training/types.py?lines=40-47

Reviewed By: saumishr

Differential Revision: D64205385

fbshipit-source-id: 076f42ecbf04a5dd36bd8be5946991978944ee03
1 parent 1beb1f0 commit b9e7c1f

File tree

1 file changed: +2 −2 lines


torchtnt/framework/callbacks/checkpointer_types.py

Lines changed: 2 additions & 2 deletions
@@ -25,9 +25,9 @@ class KnobOptions:
     # use a more conservative number of concurrent IO operations per rank in Checkpointing
     # the default value of 16 is too bandwidth hungry for most users
     max_per_rank_io_concurrency: Optional[int] = None
-    # This is a no-op and for future use. This would enable storage efficiency optimizations:
+    # This would enable storage efficiency optimizations (model store):
     # e.g. Compression, Batching, Quantization etc.
-    enable_storage_optimization: bool = False
+    enable_storage_optimization: bool = True


 @dataclass
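For context, the effect of this commit is that the `enable_storage_optimization` knob now defaults to True, so callers who want the old behavior must opt out explicitly. A minimal sketch of the dataclass after the change (only the two fields visible in the diff are reproduced; the rest of `KnobOptions` is omitted):

```python
from dataclasses import dataclass
from typing import Optional

# Simplified sketch of torchtnt's KnobOptions after this commit;
# only the fields shown in the diff are reproduced here.
@dataclass
class KnobOptions:
    # conservative cap on concurrent IO operations per rank in checkpointing
    max_per_rank_io_concurrency: Optional[int] = None
    # storage efficiency optimizations (e.g. compression, batching,
    # quantization) are now enabled by default
    enable_storage_optimization: bool = True

# Default construction picks up the new behavior.
knobs = KnobOptions()
assert knobs.enable_storage_optimization is True

# Callers that relied on the old default can opt out explicitly.
legacy_knobs = KnobOptions(enable_storage_optimization=False)
assert legacy_knobs.enable_storage_optimization is False
```

This mirrors the diff above: the field declaration is unchanged except for its default value, so existing call sites that pass the flag explicitly are unaffected.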
