Commit 86ca7ba (parent: 9ce6e5e)

Change ml to nm in specific places . . .

1 file changed (+4 −4 lines)


articles/machine-learning/reference-checkpoint-performance-with-Nebula.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -170,7 +170,7 @@ To enable full Nebula compatibility with PyTorch-based training scripts, modify
 ## List all checkpoints
 ckpts = nm.list_checkpoints()
 ## Get Latest checkpoint path
-latest_ckpt_path = ml.get_latest_checkpoint_path("checkpoint", persisted_storage_path)
+latest_ckpt_path = nm.get_latest_checkpoint_path("checkpoint", persisted_storage_path)
 ```
 
 # [Using DeepSpeed](#tab/DEEPSPEED)
@@ -205,16 +205,16 @@ latest_ckpt_path = ml.get_latest_checkpoint_path("checkpoint", persisted_storage
 config_params["persistent_storage_path"] = "<YOUR STORAGE PATH>"
 config_params["persistent_time_interval"] = 10
 
-nebula_checkpoint_callback = ml.NebulaCallback(
+nebula_checkpoint_callback = nm.NebulaCallback(
     ****, # Original ModelCheckpoint params
     config_params=config_params, # customize the config of init nebula
 )
 ```
 
-Next, add `ml.NebulaCheckpointIO()` as a plugin to your `Trainer`, and modify the `trainer.save_checkpoint()` storage parameters as shown:
+Next, add `nm.NebulaCheckpointIO()` as a plugin to your `Trainer`, and modify the `trainer.save_checkpoint()` storage parameters as shown:
 
 ```python
-trainer = Trainer(plugins=[ml.NebulaCheckpointIO()], # add NebulaCheckpointIO as a plugin
+trainer = Trainer(plugins=[nm.NebulaCheckpointIO()], # add NebulaCheckpointIO as a plugin
     callbacks=[nebula_checkpoint_callback]) # use NebulaCallback as a plugin
 ```
````
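The commit renames the module alias from `ml.` to `nm.` at each remaining call site. One way to verify that no stale `ml.` references survive in a doc like this is a small scan script. The following is a minimal sketch; the helper name and regex are illustrative, not part of Nebula or the docs toolchain:

```python
import re

# Matches attribute access through the old "ml" alias, e.g. "ml.NebulaCallback".
# The leading \b prevents false positives such as "nebulaml.list_checkpoints",
# where "ml" is only the tail of a longer identifier.
OLD_ALIAS = re.compile(r"\bml\.(\w+)")

def find_stale_aliases(text: str) -> list[str]:
    """Return attribute names still accessed via the old "ml" alias."""
    return OLD_ALIAS.findall(text)

line = 'latest_ckpt_path = ml.get_latest_checkpoint_path("checkpoint", persisted_storage_path)'
print(find_stale_aliases(line))  # ['get_latest_checkpoint_path']
print(find_stale_aliases("ckpts = nm.list_checkpoints()"))  # []
```

Running this over the article before and after the commit would show the four flagged call sites dropping to zero.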
0 commit comments
