Commit cc50a68: Spelling corrections to MLP tutorials

1 parent 9462399

File tree

6 files changed: +9 −9 lines changed

docs/access/jupyterlab.md

Lines changed: 1 addition & 1 deletion

@@ -199,7 +199,7 @@ Examples of notebooks with `ipcmagic` can be found [here](https://github.com/
 While it is generally recommended to submit long-running machine learning training and inference jobs via `sbatch`, certain use cases can benefit from an interactive Jupyter environment.
-A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-finetuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell
+A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-fine-tuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell
 ```bash
 !python -m torch.distributed.run --standalone --nproc_per_node=4 run_train.py ...
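The hunk above adapts the multi-node nanotron command to a single GH200 node. As a minimal sketch of how that Jupyter line is assembled outside a notebook (the 4-process count and `run_train.py` come from the diff; splitting the invocation into variables is purely illustrative, and in Jupyter the command would be prefixed with `!` instead of printed):

```shell
# Build the single-node torchrun invocation from the diff above.
# --standalone: single-node rendezvous, no external coordinator needed.
# --nproc_per_node=4: one worker per GPU on a 4-GPU GH200 node.
NPROC_PER_NODE=4
TRAIN_SCRIPT="run_train.py"
CMD="python -m torch.distributed.run --standalone --nproc_per_node=${NPROC_PER_NODE} ${TRAIN_SCRIPT}"

# Print the command for inspection before pasting it into a Jupyter cell.
echo "${CMD}"
```

In a notebook, the printed command would be run as `!python -m torch.distributed.run ...`, with any training-script arguments appended after the script name.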

docs/guides/mlp_tutorials/index.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 These tutorials solve simple MLP tasks using the [Container Engine][ref-container-engine] on the ML Platform.
 
 1. [LLM Inference][ref-mlp-llm-inference-tutorial]
-2. [LLM Finetuning][ref-mlp-llm-finetuning-tutorial]
+2. [LLM Fine-tuning][ref-mlp-llm-fine-tuning-tutorial]
 3. [Nanotron Training][ref-mlp-llm-nanotron-tutorial]

docs/guides/mlp_tutorials/llm-finetuning.md renamed to docs/guides/mlp_tutorials/llm-fine-tuning.md

Lines changed: 3 additions & 3 deletions

@@ -1,8 +1,8 @@
-[](){#ref-mlp-llm-finetuning-tutorial}
+[](){#ref-mlp-llm-fine-tuning-tutorial}
 
-# LLM Finetuning Tutorial
+# LLM Fine-tuning Tutorial
 
-This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform finetuning.
+This tutorial will take the model from the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial and show you how to perform fine-tuning.
 This means that we take the model and train it on some new custom data to change its behavior.
 
 To complete the tutorial, we set up some extra libraries that will help us to update the state of the machine learning model.

docs/guides/mlp_tutorials/llm-nanotron-training.md

Lines changed: 2 additions & 2 deletions

@@ -5,9 +5,9 @@
 In this tutorial, we will build a container image to run nanotron training jobs.
 We will train a 109M parameter model with ~100M wikitext tokens as a proof of concept.
 
-### Prequisites
+### Prerequisites
 
-It is also recommended to follow the previous tutorials: [LLM Inference][ref-mlp-llm-inference-tutorial] and [LLM Finetuning][ref-mlp-llm-finetuning-tutorial], as this will build up from it.
+It is also recommended to follow the previous tutorials: [LLM Inference][ref-mlp-llm-inference-tutorial] and [LLM Fine-tuning][ref-mlp-llm-fine-tuning-tutorial], as this will build up from it.
 
 ### Set up Podman
docs/platforms/mlp/index.md

Lines changed: 1 addition & 1 deletion

@@ -91,4 +91,4 @@ Project is per project - each project gets a project folder with project-specifi
 ## Guides and tutorials
 
-Tutorials for finetuning and running inference of LLMs as well as training an LLM with Nanotron can be found in the [MLP Tutorials][ref-guides-mlp-tutorials] page.
+Tutorials for fine-tuning and running inference of LLMs as well as training an LLM with Nanotron can be found in the [MLP Tutorials][ref-guides-mlp-tutorials] page.

mkdocs.yml

Lines changed: 1 addition & 1 deletion

@@ -116,7 +116,7 @@ nav:
   - 'MLP Tutorials':
     - guides/mlp_tutorials/index.md
     - 'LLM Inference': guides/mlp_tutorials/llm-inference.md
-    - 'LLM Finetuning': guides/mlp_tutorials/llm-finetuning.md
+    - 'LLM Fine-tuning': guides/mlp_tutorials/llm-fine-tuning.md
     - 'LLM Training': guides/mlp_tutorials/llm-nanotron-training.md
   - 'Policies':
     - policies/index.md

0 commit comments