Merged
20 commits
cc50a68
Spelling corrections to MLP tutorials
lukasgd Jul 9, 2025
01fe8e5
Updated MLP tutorials
lukasgd Jul 22, 2025
d37d03c
Merge branch 'main' into mlp-tutorials-update
lukasgd Jul 22, 2025
a10be36
Update docs/access/jupyterlab.md
lukasgd Jul 28, 2025
6017a7e
Apply suggestions from code review
lukasgd Jul 28, 2025
e96ee0f
Update docs/access/jupyterlab.md
lukasgd Jul 28, 2025
122c3ff
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
d0476fd
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
fb22a00
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
758d019
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
0e0285f
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
4a74f96
Update docs/guides/mlp_tutorials/llm-nanotron-training.md
lukasgd Jul 28, 2025
fb1629f
Update docs/guides/mlp_tutorials/llm-nanotron-training.md
lukasgd Jul 28, 2025
691b11f
Update docs/guides/mlp_tutorials/llm-nanotron-training.md
lukasgd Jul 28, 2025
80f2c19
Update docs/guides/mlp_tutorials/index.md
lukasgd Jul 28, 2025
b66b0eb
Using console instead of bash with hostnames in the shell prompt and …
lukasgd Jul 28, 2025
e0644a3
Merge branch 'main' into mlp-tutorials-update
lukasgd Jul 28, 2025
404a203
Integrating @Madeeks comment
lukasgd Jul 28, 2025
6b56fb4
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
2b7f549
Update docs/guides/mlp_tutorials/llm-inference.md
lukasgd Jul 28, 2025
15 changes: 8 additions & 7 deletions docs/access/jupyterlab.md
@@ -23,7 +23,7 @@ When resources are granted the page redirects to the JupyterLab session, where y
[](){#ref-jupyter-runtime-environment}
## Runtime environment

A Jupyter session can be started with either a [uenv][ref-uenv] or a [container][ref-container-engine] as a base image. The JupyterHub Spawner form provides a set of default images such as the [prgenv-gnu][ref-uenv-prgenv-gnu] uenv or the [NGC Pytorch container][ref-software-ml] to choose from in a dropdown menu. When using uenv, the software stack will be mounted at `/user-environment`, and the specified view will be activated. For a container, the Jupyter session will launch inside the container filesystem with only a select set of paths mounted from the host. Once you have found a suitable option, you can start the session with `Launch JupyterLab`.
A Jupyter session can be started with either a [uenv][ref-uenv] or a [container][ref-container-engine] as a base image. The JupyterHub Spawner form provides a set of default images such as the [prgenv-gnu][ref-uenv-prgenv-gnu] uenv or the [NGC PyTorch container][ref-software-ml] to choose from in a dropdown menu. When using uenv, the software stack will be mounted at `/user-environment`, and the specified view will be activated. For a container, the Jupyter session will launch inside the container filesystem with only a select set of paths mounted from the host. Once you have found a suitable option, you can start the session with `Launch JupyterLab`.

??? info "Using remote uenv for the first time."
If the uenv is not present in the local repository, it will be automatically fetched.
@@ -34,8 +34,8 @@ A Jupyter session can be started with either a [uenv][ref-uenv] or a [container]

If the default base images do not meet your requirements, you can specify a custom environment instead. For this purpose, you supply either a custom uenv image/view or [container engine (CE)][ref-container-engine] TOML file under the section `Advanced options` before launching the session. The supported uenvs are compatible with the Jupyter service out of the box, whereas container images typically require the installation of some additional packages.

??? "Example of a custom Pytorch container"
A container image based on recent a NGC Pytorch release requires the installation of the following additional packages to be compatible with the Jupyter service:
??? "Example of a custom PyTorch container"
A container image based on a recent NGC PyTorch release requires the installation of the following additional packages to be compatible with the Jupyter service:

```Dockerfile
FROM nvcr.io/nvidia/pytorch:25.05-py3
@@ -199,14 +199,14 @@ Examples of notebooks with `ipcmagic` can be found [here](https://github.com/

While it is generally recommended to submit long-running machine learning training and inference jobs via `sbatch`, certain use cases can benefit from an interactive Jupyter environment.

A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-finetuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell
A popular approach to run multi-GPU ML workloads is with [`accelerate`](https://github.com/huggingface/accelerate) and [`torchrun`](https://docs.pytorch.org/docs/stable/elastic/run.html) as demonstrated in the [tutorials][ref-guides-mlp-tutorials]. In particular, the `accelerate launch` script in the [LLM fine-tuning tutorial][ref-mlp-llm-fine-tuning-tutorial] can be directly carried over to a Jupyter cell with a `%%bash` header (to run its contents interpreted by bash). For `torchrun`, one can adapt the command from the multi-node [nanotron tutorial][ref-mlp-llm-nanotron-tutorial] to run on a single GH200 node using the following line in a Jupyter cell

```bash
!python -m torch.distributed.run --standalone --nproc_per_node=4 run_train.py ...
```
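
For the `accelerate launch` case mentioned above, a minimal sketch of such a cell could look as follows (assuming the virtual environment and launch arguments from the fine-tuning tutorial; names and options are illustrative):

```bash
%%bash
# Hypothetical notebook cell: the environment name and script arguments are
# placeholders taken from the fine-tuning tutorial and should be adapted.
source venv-gemma-24.01/bin/activate
accelerate launch --config_file trl/examples/accelerate_configs/multi_gpu.yaml \
    --num_processes 4 trl/examples/scripts/sft.py ...
```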

!!! warning "torchrun with virtual environments"
When using a virtual environment on top of a base image with Pytorch, always replace `torchrun` with `python -m torch.distributed.run` to pick up the correct Python environment. Otherwise, the system Python environment will be used and virtual environment packages not available. If not using virtual environments such as with a self-contained Pytorch container, `torchrun` is equivalent to `python -m torch.distributed.run`.
When using a virtual environment on top of a base image with PyTorch, always replace `torchrun` with `python -m torch.distributed.run` to pick up the correct Python environment. Otherwise, the system Python environment will be used and the virtual environment packages will not be available. If no virtual environment is used, such as with a self-contained PyTorch container, `torchrun` is equivalent to `python -m torch.distributed.run`.

!!! note "Notebook structure"
In none of these scenarios are any significant memory allocations or background computations performed on the main Jupyter process. Instead, the resources are kept available for the processes launched by `accelerate` or `torchrun`, respectively.
@@ -216,19 +216,20 @@ Alternatively to using these launchers, it is also possible to use Slurm to obta
```bash
!srun --overlap -ul --environment /path/to/edf.toml \
--container-workdir $PWD -n 4 bash -c "\
. venv-<base-image-version>/bin/activate
MASTER_ADDR=\$(scontrol show hostnames \$SLURM_JOB_NODELIST | head -n 1) \
MASTER_PORT=29500 \
RANK=\$SLURM_PROCID LOCAL_RANK=\$SLURM_LOCALID WORLD_SIZE=\$SLURM_NPROCS \
python train.py ..."
```

where `/path/to/edf.toml` should be replaced by the TOML file and `train.py` is a script using `torch.distributed` for distributed training. This can be further customized with extra Slurm options.
where `/path/to/edf.toml` should be replaced by the path to your TOML file and `venv-<base-image-version>` by the name of the virtual environment (if used). The script `train.py` uses `torch.distributed` for distributed training. This launch mechanism can be further customized with extra Slurm options.
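
For example, a sketch of the same cell with an explicit CPU count per rank added (values are illustrative and should be adapted to your allocation) could look like this:

```bash
# Illustrative only: 72 of the 288 cores of a GH200 node per task when running 4 tasks.
!srun --overlap -ul --environment /path/to/edf.toml \
    --container-workdir $PWD -n 4 --cpus-per-task=72 bash -c "..."
```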

!!! warning "Concurrent usage of resources"
Subtle bugs can occur when running multiple Jupyter notebooks concurrently that each assume access to the full node. Also, some notebooks may hold on to resources such as spawned child processes or allocated memory despite having completed. In this case, resources such as a GPU may still be busy, blocking another notebook from using it. Therefore, it is good practice to keep only one such notebook running that occupies the full node and to restart its kernel once the notebook has completed. If in doubt, system monitoring with `htop` and [nvdashboard](https://github.com/rapidsai/jupyterlab-nvdashboard) can be helpful for debugging.
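
For a quick check from a notebook cell whether GPU memory is still held by a previous notebook's processes, something like the following can be used (a minimal sketch):

```bash
!nvidia-smi
```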

!!! warning "Multi-GPU training from a shared Jupyter process"
Running multi-GPU training workloads directly from the shared Jupyter process is generally not recommended due to potential inefficiencies and correctness issues (cf. the [Pytorch docs](https://docs.pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel)). However, if you need it to e.g. reproduce existing results, it is possible to do so with utilities like `accelerate`'s `notebook_launcher` or [`transformers`](https://github.com/huggingface/transformers)' `Trainer` class. When using these in containers, you will currently need to unset the environment variables `RANK` and `LOCAL_RANK`, that is have the following in a cell at the top of the notebook:
Running multi-GPU training workloads directly from the shared Jupyter process is generally not recommended due to potential inefficiencies and correctness issues (cf. the [PyTorch docs](https://docs.pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel)). However, if you need it, e.g. to reproduce existing results, it is possible to do so with utilities like `accelerate`'s `notebook_launcher` or [`transformers`](https://github.com/huggingface/transformers)' `Trainer` class. When using these in containers, you will currently need to unset the environment variables `RANK` and `LOCAL_RANK` by adding the following in a cell at the top of the notebook:

```python
import os; os.environ.pop("RANK", None); os.environ.pop("LOCAL_RANK", None)  # the default avoids a KeyError if a variable is unset
```
11 changes: 5 additions & 6 deletions docs/guides/mlp_tutorials/index.md
@@ -1,11 +1,10 @@
[](){#ref-guides-mlp-tutorials}
# MLP Tutorials
# Machine Learning Platform Tutorials

These tutorials solve simple MLP tasks using the [Container Engine][ref-container-engine] on the ML Platform.

1. [LLM Inference][ref-mlp-llm-inference-tutorial]
2. [LLM Fine-tuning][ref-mlp-llm-finetuning-tutorial]
3. [Nanotron Training][ref-mlp-llm-nanotron-tutorial]
These tutorials gradually introduce key concepts of the Machine Learning Platform. A particular focus is on the [Container Engine][ref-container-engine] for managing the runtime environment.

In a [first tutorial][ref-mlp-llm-inference-tutorial], you will learn how to run inference with an LLM on a single node using a container from the NVIDIA GPU Cloud (NGC). Concepts such as the container environment description, layering a thin virtual environment on top of the container image, and job launching and monitoring will be introduced.

Building on the first tutorial, in the [second tutorial][ref-mlp-llm-fine-tuning-tutorial] you will learn how to train (fine-tune) an LLM on multiple GPUs on a single node. For this purpose, you will use HuggingFace's `accelerate` and see best practices for dataset management.

In the [third tutorial][ref-mlp-llm-nanotron-tutorial], you will apply the techniques from the previous tutorials to enable distributed (pre-)training of a model in `nanotron` on multiple nodes. In particular, this tutorial makes use of model parallelism and introduces the use of `torchrun` to manage jobs on individual nodes.
@@ -1,4 +1,4 @@
[](){#ref-mlp-llm-finetuning-tutorial}
[](){#ref-mlp-llm-fine-tuning-tutorial}

# LLM Fine-tuning Tutorial

@@ -8,45 +8,50 @@ This means that we take the model and train it on some new custom data to change
To complete the tutorial, we set up some extra libraries that will help us to update the state of the machine learning model.
We also write a script that will allow us to unlock more of the performance offered by the cluster, by running our fine-tuning task on two or more nodes.

## Fine-tuning Gemma 7B on the OpenAssistant dataset

### Prerequisites

This tutorial assumes you've already successfully completed the [LLM Inference][ref-mlp-llm-inference-tutorial] tutorial.
For fine-tuning Gemma, we will rely on the NGC PyTorch container and the libraries we've already installed in the Python environment used previously.
For fine-tuning Gemma, we will rely on the NGC PyTorch container and the libraries we've already installed in the Python virtual environment used previously.

### Set up TRL

We will use HuggingFace TRL to fine-tune Gemma-7B on the [OpenAssistant dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
We will use HuggingFace TRL (Transformer Reinforcement Learning) to fine-tune Gemma-7B on the [OpenAssistant dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
First, we need to update our Python environment with some extra libraries to support TRL.
To do this, we can launch an interactive shell in the PyTorch container, just like we did in the previous tutorial.
Then, we install `peft`:

```console
$ cd $SCRATCH/gemma-inference
$ srun --environment=gemma-pytorch --container-workdir=$PWD --pty bash
$ source ./gemma-venv/bin/activate
$ python -m pip install peft==0.11.1
[clariden-lnXXX]$ cd $SCRATCH/tutorials/gemma-7b
[clariden-lnXXX]$ srun --environment=./ngc-pytorch-gemma-24.01.toml --pty bash
user@nidYYYYYY$ source venv-gemma-24.01/bin/activate
(venv-gemma-24.01) user@nidYYYYYY$ pip install peft==0.11.1
```

Next, we also need to clone and install the `trl` Git repository so that we have access to the fine-tuning scripts in it.
For this purpose, we will install the package in editable mode in the virtual environment.
This makes it available in Python scripts independently of the current working directory and without creating a redundant copy of the files.

```console
$ git clone https://github.com/huggingface/trl -b v0.7.11
$ pip install -e ./trl # install in editable mode
(venv-gemma-24.01) user@nidYYYYYY$ git clone \
https://github.com/huggingface/trl -b v0.7.11
(venv-gemma-24.01) user@nidYYYYYY$ pip install -e ./trl # (1)!
```

1. Installs trl in editable mode
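
To confirm that the editable install is picked up from the virtual environment, a quick check along these lines can help (illustrative; the printed version should match the cloned tag):

```console
(venv-gemma-24.01) user@nidYYYYYY$ python -c "import trl; print(trl.__version__)"
```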

When this step is complete, you can exit the shell by typing `exit`.

### Fine-tune Gemma-7B

t this point, we can set up a fine-tuning script and start training Gemma-7B.
Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the `trl` and `gemma-venv` directories:
At this point, we can set up a fine-tuning script and start training Gemma-7B.
Use your favorite text editor to create the file `fine-tune-gemma.sh` just outside the `trl` and `venv-gemma-24.01` directories:

```bash title="fine-tune-gemma.sh"
```bash title="$SCRATCH/tutorials/gemma-7b/fine-tune-gemma.sh"
#!/bin/bash

source ./gemma-venv/bin/activate
source venv-gemma-24.01/bin/activate

set -x

@@ -73,38 +78,50 @@ accelerate launch --config_file trl/examples/accelerate_configs/multi_gpu.yaml \
--use_peft \
--lora_r 16 --lora_alpha 32 \
--lora_target_modules q_proj k_proj v_proj o_proj \
--output_dir gemma-finetuned-openassistant
--output_dir gemma-fine-tuned-openassistant
```

This script has quite a bit more content to unpack.
We use HuggingFace accelerate to launch the fine-tuning process, so we need to make sure that accelerate understands which hardware is available and where.
We use HuggingFace `accelerate` to launch the fine-tuning process, so we need to make sure that `accelerate` understands which hardware is available and where.
Setting this up will be useful in the long run because it means we can tell Slurm how much hardware to reserve, and this script will set up all the details for us.

The cluster has four GH200 chips per compute node.
We can make them accessible to scripts run through srun/sbatch via the option `--gpus-per-node=4`.
We can make them accessible to scripts run through `srun`/`sbatch` via the option `--gpus-per-node=4`.
Then, we calculate how many processes accelerate should launch.
We want to map each GPU to a separate process, which means four processes per node.
We multiply this by the number of nodes to obtain the total number of processes.
Next, we use some bash magic to extract the name of the head node from Slurm environment variables.
Accelerate expects one main node and launches tasks on the other nodes from this main node.
`accelerate` expects one main node and launches tasks on the other nodes from this main node.
Having sourced our Python environment at the top of the script, we can then launch Gemma fine-tuning.
The first four lines of the launch line are used to configure accelerate.
The first four lines of the launch line are used to configure `accelerate`.
Everything after that configures the `trl/examples/scripts/sft.py` Python script, which we use to train Gemma.
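
The corresponding part of the script is collapsed in the diff above; conceptually, the values described here are derived roughly as follows (a sketch with illustrative variable names, not the literal script contents):

```bash
# Sketch: derive the accelerate launch parameters from the Slurm environment.
GPUS_PER_NODE=4
NUM_PROCESSES=$(( GPUS_PER_NODE * SLURM_NNODES ))
# The first hostname in the allocation acts as accelerate's main node.
MAIN_NODE=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
```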

!!! note "Dataset management and sharing"
For datasets, the recommended Lustre settings should be used as illustrated in the tutorial on [LLM Inference][ref-mlp-llm-inference-tutorial]. Since they were already applied there to `HF_HOME`, which `huggingface_hub` uses for its dataset cache, they do not need to be re-applied here.

To enable your colleagues to also use your datasets, please refer to the [storage guide][ref-guides-storage-sharing].
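
As a rough illustration of sharing a dataset cache (the authoritative recommendations are in the storage guide; the path and group name here are hypothetical placeholders):

```console
[clariden-lnXXX]$ setfacl -R -m g:<group>:rX $SCRATCH/huggingface
```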

Make this script executable with

```console
[clariden-lnXXX]$ chmod u+x $SCRATCH/tutorials/gemma-7b/fine-tune-gemma.sh
```

Next, we also need to create a short Slurm batch script to launch our fine-tuning script:

```bash title="fine-tune-sft.sbatch"
```bash title="$SCRATCH/tutorials/gemma-7b/submit-fine-tune-gemma.sh"
#!/bin/bash
#SBATCH --job-name=gemma-finetune
#SBATCH --account=<ACCOUNT>
#SBATCH --job-name=fine-tune-gemma
#SBATCH --time=00:30:00
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-task=288
#SBATCH --account=<ACCOUNT>
#SBATCH --output logs/slurm-%x-%j.out

set -x

srun -ul --environment=gemma-pytorch --container-workdir=$PWD bash fine-tune-gemma.sh
srun -ul --environment=./ngc-pytorch-gemma-24.01.toml fine-tune-gemma.sh
```

We set a few Slurm parameters like we already did in the previous tutorial.
@@ -116,7 +133,7 @@ We'll start out by launching it on two nodes.
It should take about 10-15 minutes to fine-tune Gemma:

```console
$ sbatch --nodes=1 fine-tune-sft.sbatch
[clariden-lnXXX]$ sbatch --nodes=1 submit-fine-tune-gemma.sh
```
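
You can then follow the job with standard Slurm tools, for example (assuming the `logs/` directory referenced in the batch script exists; `<jobid>` is the ID printed by `sbatch`):

```console
[clariden-lnXXX]$ squeue --me
[clariden-lnXXX]$ tail -f logs/slurm-fine-tune-gemma-<jobid>.out
```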

### Compare fine-tuned Gemma against default Gemma
Expand All @@ -131,7 +148,7 @@ input_text = "What are the 5 tallest mountains in the Swiss Alps?"
We can run inference using our batch script from the previous tutorial:

```console
$ sbatch ./gemma-inference.sbatch
[clariden-lnXXX]$ sbatch submit-gemma-inference.sh
```

Inspecting the output should yield something like this:
@@ -152,7 +169,8 @@ the 5 tallest mountains in the Swiss Alps:
Next, we can update the model line in our Python inference script to use the model that we just fine-tuned:

```python
model = AutoModelForCausalLM.from_pretrained("gemma-finetuned-openassistant/checkpoint-400", device_map="auto")
model = AutoModelForCausalLM.from_pretrained(
"gemma-fine-tuned-openassistant/checkpoint-400", device_map="auto")
```

If we re-run inference, the output will be a bit more detailed and explanatory, similar to output we might expect from a helpful chatbot. One example looks like this: