
Commit 6ca5ac9

[docs] Update config doc strings to help users understand how it will be used.
PiperOrigin-RevId: 398154690
1 parent b6fbd2d commit 6ca5ac9


official/core/config_definitions.py

Lines changed: 6 additions & 4 deletions
@@ -14,9 +14,8 @@
 
 """Common configuration settings."""
 
-from typing import Optional, Sequence, Union
-
 import dataclasses
+from typing import Optional, Sequence, Union
 
 from official.modeling.hyperparams import base_config
 from official.modeling.optimization.configs import optimization_config
@@ -41,7 +40,9 @@ class DataConfig(base_config.Config):
     tfds_split: A str indicating which split of the data to load from TFDS. It
       is required when above `tfds_name` is specified.
     global_batch_size: The global batch size across all replicas.
-    is_training: Whether this data is used for training or not.
+    is_training: Whether this data is used for training or not. This flag is
+      useful for consumers of this object to determine whether the data should
+      be repeated or shuffled.
     drop_remainder: Whether the last batch should be dropped in the case it has
       fewer than `global_batch_size` elements.
     shuffle_buffer_size: The buffer size used for shuffling training data.
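The expanded `is_training` text describes how input pipelines are expected to branch on the flag. Below is a minimal sketch of such a consumer, assuming a hypothetical `build_dataset` helper (not part of the library); it only relies on the `DataConfig` fields shown in this hunk.

import tensorflow as tf

from official.core import config_definitions as cfg


def build_dataset(data_config: cfg.DataConfig,
                  dataset: tf.data.Dataset) -> tf.data.Dataset:
  """Hypothetical consumer of DataConfig: shuffle/repeat only for training."""
  if data_config.is_training:
    # Training data is typically shuffled and repeated indefinitely;
    # evaluation data is read once, in order.
    dataset = dataset.shuffle(data_config.shuffle_buffer_size)
    dataset = dataset.repeat()
  # `global_batch_size` is the batch size across all replicas; a real pipeline
  # would divide it by the replica count before batching per replica.
  return dataset.batch(
      data_config.global_batch_size,
      drop_remainder=data_config.drop_remainder)

For example, a config with `is_training=False` would leave the dataset unshuffled and unrepeated, which is what an evaluation loop usually wants.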
@@ -178,7 +179,8 @@ class TrainerConfig(base_config.Config):
     eval_tf_function: whether or not to use tf_function for eval.
     allow_tpu_summary: Whether to allow summary happen inside the XLA program
       runs on TPU through automatic outside compilation.
-    steps_per_loop: number of steps per loop.
+    steps_per_loop: number of steps per loop to report training metrics. This
+      can also be used to reduce host worker communication in a TPU setup.
     summary_interval: number of steps between each summary.
     checkpoint_interval: number of steps between checkpoints.
     max_to_keep: max checkpoints to keep.
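The updated `steps_per_loop` text refers to the host-device loop pattern commonly used with TPUs: many train steps run inside a single `tf.function` call, and the host only reads metrics back once per loop. A minimal sketch of that pattern follows, assuming hypothetical `train_step`, `iterator`, and `metrics` objects; it is not the library's actual training orchestrator.

import tensorflow as tf


def run_training(trainer_config, train_step, iterator, metrics):
  """Hypothetical outer loop illustrating `steps_per_loop`."""

  @tf.function
  def train_loop(iterator, num_steps):
    # Running `num_steps` steps inside one tf.function call keeps the work on
    # the device and avoids a host round-trip after every step.
    for _ in tf.range(num_steps):
      train_step(next(iterator))

  steps_done = 0
  while steps_done < trainer_config.train_steps:
    train_loop(iterator, tf.constant(trainer_config.steps_per_loop))
    steps_done += trainer_config.steps_per_loop
    # Training metrics are read back and reported only here, once per loop.
    for name, metric in metrics.items():
      print(f'{name} at step {steps_done}: {metric.result().numpy()}')
      metric.reset_state()

A larger `steps_per_loop` means fewer host-to-TPU synchronizations but coarser-grained metric reporting, which is the trade-off the new docstring points at.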
