Commit 6966fa1

Fix typos. (#36551)
Signed-off-by: zhanluxianshen <[email protected]>
Parent: 996f512

File tree

2 files changed: 2 additions & 2 deletions


src/transformers/integrations/integration_utils.py

Lines changed: 1 addition & 1 deletion
@@ -1218,7 +1218,7 @@ def setup(self, args, state, model):
             Whether to use MLflow nested runs. If set to `True` or *1*, will create a nested run inside the current
             run.
         - **MLFLOW_RUN_ID** (`str`, *optional*):
-            Allow to reattach to an existing run which can be usefull when resuming training from a checkpoint. When
+            Allow to reattach to an existing run which can be useful when resuming training from a checkpoint. When
             `MLFLOW_RUN_ID` environment variable is set, `start_run` attempts to resume a run with the specified run ID
             and other parameters are ignored.
         - **MLFLOW_FLATTEN_PARAMS** (`str`, *optional*, defaults to `False`):
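For context on the line being fixed: a minimal sketch of the resume behavior the docstring describes, using plain MLflow. The run here is created by the sketch itself to stand in for an interrupted training run; the metric names are illustrative.

```python
import os
import mlflow

# Stand-in for an interrupted training run that was logged to MLflow.
with mlflow.start_run() as run:
    mlflow.log_metric("loss", 1.0, step=10)
    original_id = run.info.run_id

# As the docstring says: with MLFLOW_RUN_ID set, start_run reattaches to the
# existing run instead of creating a new one, so resumed training keeps
# logging into the same run and other run parameters are ignored.
os.environ["MLFLOW_RUN_ID"] = original_id
with mlflow.start_run() as run:
    assert run.info.run_id == original_id
    mlflow.log_metric("loss", 0.5, step=20)
```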

src/transformers/models/nllb_moe/configuration_nllb_moe.py

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ class NllbMoeConfig(PretrainedConfig):
             experts.
         router_bias (`bool`, *optional*, defaults to `False`):
             Whether or not the classifier of the router should have a bias.
-        moe_token_dropout (`float`, *optional*, defualt ot 0.2):
+        moe_token_dropout (`float`, *optional*, default to 0.2):
             Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert
             outputs.
         output_router_logits (`bool`, *optional*, defaults to `False`):
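And for the second hunk: a minimal sketch of what expert output masking (EOM) via Dropout2d means, assuming a (batch, seq_len, hidden) layout for the expert outputs. This illustrates the idea the docstring names, not the model's exact code.

```python
import torch
from torch import nn

# moe_token_dropout: probability of masking a token's entire expert output.
moe_token_dropout = 0.2
eom = nn.Dropout2d(p=moe_token_dropout)

# Assumed layout: (batch, seq_len, hidden). Dropout2d zeroes whole channels,
# so treating each token position as a channel drops its full hidden vector
# at once (and rescales survivors by 1 / (1 - p) during training).
expert_output = torch.randn(4, 16, 512)
masked = eom(expert_output.unsqueeze(-1)).squeeze(-1)  # same shape as input
```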
