Commit e74ba76

added colab link; some typos?
1 parent dfbe613 commit e74ba76

File tree

1 file changed: 7 additions, 4 deletions


recipes/finetuning/quickstart_peft_finetuning.ipynb

Lines changed: 7 additions & 4 deletions
@@ -6,7 +6,9 @@
    "metadata": {},
    "source": [
     "Copyright (c) Meta Platforms, Inc. and affiliates.\n",
-    "This software may be used and distributed according to the terms of the Llama 2 Community License Agreement."
+    "This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.\n",
+    "\n",
+    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/finetuning/quickstart_peft_finetuning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
    ]
   },
   {
@@ -18,7 +20,7 @@
    "\n",
    "This notebook shows how to train a Meta Llama 3 model on a single GPU (e.g. A10 with 24GB) using int8 quantization and LoRA finetuning.\n",
    "\n",
-   "**_Note:_** To run this notebook on a machine with less than 24GB VRAM (e.g. T4 with 15GB) the context length of the training dataset needs to be adapted.\n",
+   "**_Note:_** To run this notebook on a machine with less than 24GB VRAM (e.g. T4 with 16GB) the context length of the training dataset needs to be adapted.\n",
    "We do this based on the available VRAM during execution.\n",
    "If you run into OOM issues try to further lower the value of train_config.context_length."
    ]
@@ -38,6 +40,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "# uncomment if running from Colab T4\n",
     "# ! pip install llama-recipes ipywidgets\n",
     "\n",
     "# import huggingface_hub\n",
@@ -95,7 +98,7 @@
    "train_config.lr = 3e-4\n",
    "train_config.use_fast_kernels = True\n",
    "train_config.use_fp16 = True\n",
-   "train_config.context_length = 1024 if torch.cuda.get_device_properties(0).total_memory < 16e9 else 2048 # T4 15GB or A10 24GB\n",
+   "train_config.context_length = 1024 if torch.cuda.get_device_properties(0).total_memory < 16e9 else 2048 # T4 16GB or A10 24GB\n",
    "train_config.batching_strategy = \"packing\"\n",
    "train_config.output_dir = \"meta-llama-samsum\"\n",
    "\n",
@@ -464,7 +467,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.9"
+   "version": "3.10.14"
   },
   "vscode": {
    "interpreter": {
