Commit dcf4a6b

Fix description for auto batch size (#4274)

fix description

1 parent 26a681d

File tree

5 files changed: +5 −5 lines

src/otx/tools/templates/detection/detection/configuration.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -187,7 +187,7 @@ learning_parameters:
   auto_adapt_batch_size:
     affects_outcome_of: TRAINING
     default_value: None
-    description: Safe => Prevent GPU out of memory. Full => Find a batch size using most of GPU memory.
+    description: Safe - Prevent GPU from running out of memory. Full - Find a batch size using most of GPU memory.
     editable: true
     enum_name: BatchSizeAdaptType
     header: Decrease batch size if current batch size isn't fit to CUDA memory.
```

src/otx/tools/templates/detection/instance_segmentation/configuration.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -187,7 +187,7 @@ learning_parameters:
   auto_adapt_batch_size:
     affects_outcome_of: TRAINING
     default_value: Safe
-    description: Safe => Prevent GPU out of memory. Full => Find a batch size using most of GPU memory.
+    description: Safe - Prevent GPU from running out of memory. Full - Find a batch size using most of GPU memory.
     editable: true
     enum_name: BatchSizeAdaptType
     header: Decrease batch size if current batch size isn't fit to CUDA memory.
```

src/otx/tools/templates/detection/rotated_detection/configuration.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -187,7 +187,7 @@ learning_parameters:
   auto_adapt_batch_size:
     affects_outcome_of: TRAINING
     default_value: Safe
-    description: Safe => Prevent GPU out of memory. Full => Find a batch size using most of GPU memory.
+    description: Safe - Prevent GPU from running out of memory. Full - Find a batch size using most of GPU memory.
     editable: true
     enum_name: BatchSizeAdaptType
     header: Decrease batch size if current batch size isn't fit to CUDA memory.
```

src/otx/tools/templates/keypoint_detection/configuration.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -203,7 +203,7 @@ learning_parameters:
   auto_adapt_batch_size:
     affects_outcome_of: TRAINING
     default_value: None
-    description: Safe => Prevent GPU out of memory. Full => Find a batch size using most of GPU memory.
+    description: Safe - Prevent GPU from running out of memory. Full - Find a batch size using most of GPU memory.
     editable: true
     enum_name: BatchSizeAdaptType
     header: Decrease batch size if current batch size isn't fit to CUDA memory.
```

src/otx/tools/templates/segmentation/configuration.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -169,7 +169,7 @@ learning_parameters:
   auto_adapt_batch_size:
     affects_outcome_of: TRAINING
     default_value: Safe
-    description: Safe => Prevent GPU out of memory. Full => Find a batch size using most of GPU memory.
+    description: Safe - Prevent GPU from running out of memory. Full - Find a batch size using most of GPU memory.
     editable: true
     enum_name: BatchSizeAdaptType
     header: Decrease batch size if current batch size isn't fit to CUDA memory.
```
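The corrected descriptions distinguish two adaptation modes: Safe shrinks the batch size until it no longer exhausts GPU memory, while Full searches for the largest batch size that fits. Below is a minimal, self-contained sketch of those two strategies. It is not OTX's implementation; `adapt_batch_size` and `max_fit` are hypothetical names, and `max_fit` stands in for a real CUDA memory probe.

```python
# Illustrative sketch of the Safe vs. Full modes of auto_adapt_batch_size.
# `max_fit` is a stand-in for the largest batch size the GPU can hold;
# a real implementation would probe CUDA memory instead of taking a number.

def adapt_batch_size(requested: int, max_fit: int, mode: str = "Safe") -> int:
    """Return a batch size that fits, under the hypothetical strategy.

    Safe - start from the requested size and halve until it fits,
           preventing the GPU from running out of memory.
    Full - search upward for the largest size that fits, using most
           of the available memory.
    """
    if mode == "Safe":
        size = requested
        while size > 1 and size > max_fit:
            size //= 2  # decrease batch size until it fits
        return size
    if mode == "Full":
        size = 1
        while size * 2 <= max_fit:
            size *= 2  # grow while the next doubling still fits
        return size
    raise ValueError(f"unknown mode: {mode}")

print(adapt_batch_size(64, max_fit=20, mode="Safe"))  # 16
print(adapt_batch_size(64, max_fit=20, mode="Full"))  # 16
```

Note that Safe never increases the requested batch size (a request of 8 with ample memory stays 8), while Full ignores the request and grows toward the memory limit, matching the two descriptions in the diff.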
