
Commit 80e9120

henriquemsimberg and Mikael Simberg authored
Update docs/guides/mlp_tutorials/llm-finetuning.md
Co-authored-by: Mikael Simberg <[email protected]>
1 parent eaa0f8c commit 80e9120

File tree

1 file changed: +5 −6 lines changed

docs/guides/mlp_tutorials/llm-finetuning.md

Lines changed: 5 additions & 6 deletions
````diff
@@ -14,12 +14,11 @@ This tutorial assumes you've already successfully completed the [LLM Inference][
 
 We will use HuggingFace TRL to fine-tune Gemma-7B on the [OpenAssistant dataset](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25). First, we need to update our Python environment with some extra libraries to support TRL. To do this, we can launch an interactive shell in the PyTorch container, just like we did in the previous tutorial. Then, we install `peft`:
 
-```
-[cluster][user@cluster-ln001 gemma-inference]$ cd $SCRATCH/gemma-inference
-[cluster][user@cluster-ln001 gemma-inference]$ srun --environment=gemma-pytorch --container-workdir=$PWD --pty bash
-user@nid001234:/bret/scratch/cscs/user/gemma-inference$ source ./gemma-venv/bin/activate
-(gemma-venv) user@nid001234:/bret/scratch/cscs/user/gemma-inference$ python -m pip install peft==0.11.1
-# ... pip output ...
+```bash
+cd $SCRATCH/gemma-inference
+srun --environment=gemma-pytorch --container-workdir=$PWD --pty bash
+source ./gemma-venv/bin/activate
+python -m pip install peft==0.11.1
 ```
 
 Next, we also need to clone and install the `trl` Git repository so that we have access to the fine-tuning scripts in it. For this purpose, we will install the package in editable mode in the virtual environment. This makes it available in python scripts independent of the current working directory and without creating a redundant copy of the files.
````
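As an aside, the "editable mode" install mentioned in that last paragraph can be demonstrated without the `trl` repository itself. The sketch below builds a tiny throwaway package (all names and paths here are illustrative, not part of the tutorial) and installs it with `pip install -e`, then shows that it stays importable from an unrelated working directory:

```shell
# Illustrative only: a minimal stand-in for "clone a repo, install it editable".
mkdir -p /tmp/editable-demo/demo_pkg
printf 'VERSION = "0.1"\n' > /tmp/editable-demo/demo_pkg/__init__.py
cat > /tmp/editable-demo/setup.py <<'EOF'
from setuptools import setup, find_packages
setup(name="demo-pkg", version="0.1", packages=find_packages())
EOF

# Editable install: site-packages gets a link back to /tmp/editable-demo,
# so no redundant copy of the source files is created.
python -m pip install --quiet -e /tmp/editable-demo

# The package resolves from any working directory, not just the checkout:
cd /tmp && python -c "import demo_pkg; print(demo_pkg.VERSION)"  # → 0.1
```

Edits made to files under `/tmp/editable-demo` would be picked up on the next import without reinstalling, which is exactly why the tutorial installs the cloned `trl` checkout this way.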
