Commit 5555e27

fixing warning
1 parent 9735fd5 commit 5555e27

File tree

1 file changed: +3 -2 lines changed


articles/machine-learning/reference-checkpoint-performance-for-large-models.md

Lines changed: 3 additions & 2 deletions
@@ -84,7 +84,7 @@ With Nebula you can:
 Nebula provides a fast, easy checkpoint experience, right in your existing training script.
 The steps to quick start Nebula include:
 - [Using ACPT environment](#using-acpt-environment)
-- [Initializing Nebula](#initializing-nebulaml)
+- [Initializing Nebula](#initializing-nebula)
 - [Calling APIs to save and load checkpoints](#calling-apis-to-save-and-load-checkpoints)

 ### Using ACPT environment
@@ -104,7 +104,8 @@ Nebula needs initialization to run in your training script. At the initializatio

 Nebula has been integrated into DeepSpeed and PyTorch Lightning. As a result, initialization becomes simple and easy. These [examples](#examples) show how to integrate Nebula into your training scripts.

-> [!IMPORTANT] Saving checkpoints with Nebula requires some memory to store checkpoints. Please make sure your memory is larger than at least three copies of the checkpoints.
+> [!IMPORTANT]
+> Saving checkpoints with Nebula requires some memory to store checkpoints. Please make sure your memory is larger than at least three copies of the checkpoints.
 >
 > If the memory is not enough to hold checkpoints, you are suggested to set up an environment variable `NEBULA_MEMORY_BUFFER_SIZE` in the command to limit the use of the memory per each node when saving checkpoints. When setting this variable, Nebula will use this memory as buffer to save checkpoints. If the memory usage is not limited, Nebula will use the memory as much as possible to store the checkpoints.
 >
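The passage touched by this commit walks through Nebula's quickstart: initialize Nebula, then call its APIs to save and load checkpoints. A minimal sketch of that flow in a plain PyTorch script is below. The `nebulaml` alias, `nm.init`, and `nm.Checkpoint` follow the API this article documents, but the storage path and checkpoint name are placeholders, and the exact `save` signature should be verified against the article's examples.

```python
import torch
import nebulaml as nm  # Nebula's Python package, as documented in this article

# Initialize Nebula once, before the training loop starts.
# The storage path is a placeholder; point it at durable storage.
nm.init(persistent_storage_path="/tmp/nebula_checkpoints")

model = torch.nn.Linear(10, 2)  # stand-in for a real model

# Save a checkpoint through Nebula instead of torch.save().
# "my_ckpt" is a hypothetical checkpoint name.
checkpoint = nm.Checkpoint()
checkpoint.save("my_ckpt", model)
```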

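The [!IMPORTANT] note reformatted by this commit also documents `NEBULA_MEMORY_BUFFER_SIZE`. The article says to set it in the job command; purely as an illustration, the sketch below sets it in the process environment before Nebula initializes. That timing, the value `1024`, and its unit are all assumptions to check against the Nebula documentation.

```python
import os

# Assumption: Nebula reads NEBULA_MEMORY_BUFFER_SIZE from the environment
# when it initializes, so it must be set before nm.init() runs. The
# documented route is to set the variable in the training job's command.
os.environ["NEBULA_MEMORY_BUFFER_SIZE"] = "1024"  # placeholder value and unit

import nebulaml as nm

nm.init(persistent_storage_path="/tmp/nebula_checkpoints")  # placeholder path
```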