This updates the existing MLP tutorials, to be merged with the ML software documentation in a next step. A Megatron-LM tutorial with recommended settings is left for the future.
---------
Co-authored-by: Rocco Meli <[email protected]>
docs/access/jupyterlab.md (8 additions, 7 deletions)
@@ -23,7 +23,7 @@ When resources are granted the page redirects to the JupyterLab session, where y
[](){#ref-jupyter-runtime-environment}
## Runtime environment

- A Jupyter session can be started with either a [uenv][ref-uenv] or a [container][ref-container-engine] as a base image. The JupyterHub Spawner form provides a set of default images such as the [prgenv-gnu][ref-uenv-prgenv-gnu] uenv or the [NGC Pytorch container][ref-software-ml] to choose from in a dropdown menu. When using uenv, the software stack will be mounted at `/user-environment`, and the specified view will be activated. For a container, the Jupyter session will launch inside the container filesystem with only a select set of paths mounted from the host. Once you have found a suitable option, you can start the session with `Launch JupyterLab`.
+ A Jupyter session can be started with either a [uenv][ref-uenv] or a [container][ref-container-engine] as a base image. The JupyterHub Spawner form provides a set of default images such as the [prgenv-gnu][ref-uenv-prgenv-gnu] uenv or the [NGC PyTorch container][ref-software-ml] to choose from in a dropdown menu. When using uenv, the software stack will be mounted at `/user-environment`, and the specified view will be activated. For a container, the Jupyter session will launch inside the container filesystem with only a select set of paths mounted from the host. Once you have found a suitable option, you can start the session with `Launch JupyterLab`.

??? info "Using remote uenv for the first time."
    If the uenv is not present in the local repository, it will be automatically fetched.
@@ -34,8 +34,8 @@ A Jupyter session can be started with either a [uenv][ref-uenv] or a [container]
If the default base images do not meet your requirements, you can specify a custom environment instead. For this purpose, you supply either a custom uenv image/view or a [container engine (CE)][ref-container-engine] TOML file under the section `Advanced options` before launching the session. The supported uenvs are compatible with the Jupyter service out of the box, whereas container images typically require the installation of some additional packages.

- ??? "Example of a custom Pytorch container"
-     A container image based on recent a NGC Pytorch release requires the installation of the following additional packages to be compatible with the Jupyter service:
+ ??? "Example of a custom PyTorch container"
+     A container image based on a recent NGC PyTorch release requires the installation of the following additional packages to be compatible with the Jupyter service:

```Dockerfile
FROM nvcr.io/nvidia/pytorch:25.05-py3
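# Additional packages for compatibility with the Jupyter service.
# The package list below is a minimal sketch based on typical JupyterLab
# requirements, not necessarily the exact list recommended in this
# documentation.
RUN pip install --no-cache-dir jupyterlab jupyterhub ipykernel
```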
@@ -199,14 +199,14 @@ Examples of notebooks with `ipcmagic` can be found [here](https://github.com/
While it is generally recommended to submit long-running machine learning training and inference jobs via `sbatch`, certain use cases can benefit from an interactive Jupyter environment.

- A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-finetuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell
+ A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-fine-tuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell
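As a rough illustration only (the training script name and its arguments are placeholders, not the nanotron tutorial's exact command), such a cell could contain a single `torchrun` invocation along these lines, run either behind a `!` prefix or under a `%%bash` header:

```bash
# Single GH200 node: one standalone launcher spawning one process per GPU
torchrun --standalone --nproc_per_node=4 run_train.py --config-file config.yaml
```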
- When using a virtual environment on top of a base image with Pytorch, always replace `torchrun` with `python -m torch.distributed.run` to pick up the correct Python environment. Otherwise, the system Python environment will be used and virtual environment packages not available. If not using virtual environments such as with a self-contained Pytorch container, `torchrun` is equivalent to `python -m torch.distributed.run`.
+ When using a virtual environment on top of a base image with PyTorch, always replace `torchrun` with `python -m torch.distributed.run` to pick up the correct Python environment. Otherwise, the system Python environment will be used and virtual environment packages will not be available. If not using a virtual environment, such as with a self-contained PyTorch container, `torchrun` is equivalent to `python -m torch.distributed.run`.
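Continuing the sketch above, with a virtual environment layered on top of the base image the same single-node launch would look like this (the virtual environment name and script name are placeholders):

```bash
# Activate the virtual environment first, then use the module form of torchrun
source venv-<base-image-version>/bin/activate
python -m torch.distributed.run --standalone --nproc_per_node=4 run_train.py --config-file config.yaml
```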
!!! note "Notebook structure"
    In none of these scenarios are any significant memory allocations or background computations performed on the main Jupyter process. Instead, the resources are kept available for the processes launched by `accelerate` or `torchrun`, respectively.
@@ -216,19 +216,20 @@ Alternatively to using these launchers, it is also possible to use Slurm to obta
- where `/path/to/edf.toml` should be replaced by the TOML file and `train.py` is a script using `torch.distributed` for distributed training. This can be further customized with extra Slurm options.
+ where `/path/to/edf.toml` should be replaced by the TOML file and `venv-<base-image-version>` by the name of the virtual environment (if used). The script `train.py` uses `torch.distributed` for distributed training. This launch mechanism can be further customized with extra Slurm options.
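As a sketch of what such a launch could look like from a `%%bash` cell (the resource options and the use of the container engine's `--environment` flag are assumptions, and `train.py` remains a placeholder as in the text):

```bash
# One task per GPU on the node; the container is selected via its EDF
srun --ntasks-per-node=4 --gpus-per-node=4 --environment=/path/to/edf.toml bash -c "
    source venv-<base-image-version>/bin/activate
    python train.py
"
```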
!!! warning "Concurrent usage of resources"
    Subtle bugs can occur when running multiple Jupyter notebooks concurrently that each assume access to the full node. Also, some notebooks may hold on to resources such as spawned child processes or allocated memory despite having completed. In this case, resources such as a GPU may still be busy, blocking another notebook from using it. Therefore, it is good practice to keep only one such notebook running that occupies the full node, and to restart its kernel once the notebook has completed. If in doubt, system monitoring with `htop` and [nvdashboard](https://github.com/rapidsai/jupyterlab-nvdashboard) can be helpful for debugging.
!!! warning "Multi-GPU training from a shared Jupyter process"
-     Running multi-GPU training workloads directly from the shared Jupyter process is generally not recommended due to potential inefficiencies and correctness issues (cf. the [Pytorch docs](https://docs.pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel)). However, if you need it to e.g. reproduce existing results, it is possible to do so with utilities like `accelerate`'s `notebook_launcher` or [`transformers`](https://github.com/huggingface/transformers)' `Trainer` class. When using these in containers, you will currently need to unset the environment variables `RANK` and `LOCAL_RANK`, that is have the following in a cell at the top of the notebook:
+     Running multi-GPU training workloads directly from the shared Jupyter process is generally not recommended due to potential inefficiencies and correctness issues (cf. the [PyTorch docs](https://docs.pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel)). However, if you need it, e.g. to reproduce existing results, it is possible to do so with utilities like `accelerate`'s `notebook_launcher` or [`transformers`](https://github.com/huggingface/transformers)' `Trainer` class. When using these in containers, you will currently need to unset the environment variables `RANK` and `LOCAL_RANK` by adding the following in a cell at the top of the notebook:
These tutorials gradually introduce key concepts of the Machine Learning Platform. A particular focus is on the [Container Engine][ref-container-engine] for managing the runtime environment.

+ In a [first tutorial][ref-mlp-llm-inference-tutorial], you will learn how to run inference with an LLM on a single node using a container from the NVIDIA GPU Cloud (NGC). Concepts such as the container environment description, layering a thin virtual environment on top of the container image, and job launching and monitoring will be introduced.
+ Building on the first tutorial, in the [second tutorial][ref-mlp-llm-fine-tuning-tutorial] you will learn how to train (fine-tune) an LLM on multiple GPUs on a single node. For this purpose, you will use HuggingFace's `accelerate` and see best practices for dataset management.
+ In the [third tutorial][ref-mlp-llm-nanotron-tutorial], you will apply the techniques from the previous tutorials to enable distributed (pre-)training of a model in `nanotron` on multiple nodes. In particular, this tutorial makes use of model parallelism and introduces the usage of `torchrun` to manage jobs on individual nodes.
docs/guides/mlp_tutorials/llm-fine-tuning.md (43 additions, 25 deletions)
@@ -1,4 +1,4 @@
- [](){#ref-mlp-llm-finetuning-tutorial}
+ [](){#ref-mlp-llm-fine-tuning-tutorial}

# LLM Fine-tuning Tutorial
@@ -8,45 +8,50 @@ This means that we take the model and train it on some new custom data to change
To complete the tutorial, we set up some extra libraries that will help us to update the state of the machine learning model.
We also write a script that will allow us to unlock more of the performance offered by the cluster, by running our fine-tuning task on two or more nodes.

+ ## Fine-tuning Gemma 7B on the OpenAssistant dataset
### Prerequisites
This tutorial assumes you've already successfully completed the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial.
- For fine-tuning Gemma, we will rely on the NGC PyTorch container and the libraries we've already installed in the Python environment used previously.
+ For fine-tuning Gemma, we will rely on the NGC PyTorch container and the libraries we've already installed in the Python virtual environment used previously.

### Set up TRL

- We will use HuggingFace TRL to fine-tune Gemma-7B on the [OpenAssistant dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
+ We will use HuggingFace TRL (Transformer Reinforcement Learning) to fine-tune Gemma-7B on the [OpenAssistant dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
First, we need to update our Python environment with some extra libraries to support TRL.
To do this, we can launch an interactive shell in the PyTorch container, just like we did in the previous tutorial.
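For orientation only, the shape of this step is roughly the following; the EDF name, the virtual environment name, and the exact package set are assumptions here, so follow the commands given in the inference tutorial rather than this sketch:

```bash
# Request an interactive shell inside the NGC PyTorch container (EDF name is a placeholder)
srun --environment=ngc-pytorch.toml --pty bash

# Inside the container: activate the existing virtual environment and add TRL
source venv-<base-image-version>/bin/activate
pip install trl
```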
This script has quite a bit more content to unpack.
- We use HuggingFace accelerate to launch the fine-tuning process, so we need to make sure that accelerate understands which hardware is available and where.
+ We use HuggingFace `accelerate` to launch the fine-tuning process, so we need to make sure that `accelerate` understands which hardware is available and where.
Setting this up will be useful in the long run because it means we can tell Slurm how much hardware to reserve, and this script will set up all the details for us.

The cluster has four GH200 chips per compute node.
- We can make them accessible to scripts run through srun/sbatch via the option `--gpus-per-node=4`.
+ We can make them accessible to scripts run through `srun`/`sbatch` via the option `--gpus-per-node=4`.
Then, we calculate how many processes accelerate should launch.
We want to map each GPU to a separate process; this should be four processes per node.
We multiply this by the number of nodes to obtain the total number of processes.
Next, we use some bash magic to extract the name of the head node from Slurm environment variables.
- Accelerate expects one main node and launches tasks on the other nodes from this main node.
+ `accelerate` expects one main node and launches tasks on the other nodes from this main node.
Having sourced our Python environment at the top of the script, we can then launch Gemma fine-tuning.
- The first four lines of the launch line are used to configure accelerate.
+ The first four lines of the launch line are used to configure `accelerate`.
Everything after that configures the `trl/examples/scripts/sft.py` Python script, which we use to train Gemma.
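The full script is given in the tutorial itself; as a condensed sketch of how the pieces described above fit together (the Slurm options, `accelerate` flags, and `sft.py` arguments shown here are illustrative assumptions, not the tutorial's exact contents):

```bash
#!/bin/bash
#SBATCH --nodes=2                 # number of nodes to fine-tune on
#SBATCH --ntasks-per-node=1       # one accelerate launcher per node
#SBATCH --gpus-per-node=4         # four GH200 GPUs per compute node
# (selection of the container EDF, e.g. via --environment, is omitted in this sketch)

# Activate the virtual environment layered on top of the container image
source venv-<base-image-version>/bin/activate

# One process per GPU, multiplied by the number of nodes
NUM_PROCESSES=$(( 4 * SLURM_NNODES ))

# The first hostname in the allocation acts as accelerate's main node
MAIN_NODE=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# \$SLURM_NODEID is escaped so each node resolves its own machine rank at run time
srun bash -c "accelerate launch \
    --multi_gpu \
    --num_machines $SLURM_NNODES \
    --num_processes $NUM_PROCESSES \
    --machine_rank \$SLURM_NODEID \
    --main_process_ip $MAIN_NODE \
    --main_process_port 29500 \
    trl/examples/scripts/sft.py \
        --model_name_or_path google/gemma-7b \
        --dataset_name OpenAssistant/oasst_top1_2023-08-25"
```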
+ !!! note "Dataset management and sharing"
+     For datasets, recommended Lustre settings should be used as illustrated in the tutorial on [LLM Inference][ref-mlp-llm-inference-tutorial]. As they have been set there for `HF_HOME`, which `huggingface_hub` uses for its dataset cache, they don't need to be re-applied here.
+
+     To enable your colleagues to also use your datasets, please refer to the [storage guide][ref-guides-storage-sharing].
If we re-run inference, the output will be a bit more detailed and explanatory, similar to output we might expect from a helpful chatbot. One example looks like this: