program-data-separation/cpp/lora_example/README.md
This directory contains the C++ code for the LoRA demo.

You'll learn how to:
1. Export LoRA PTE files that share a single foundation weight file.
2. Load and run the LoRA PTE files, and notice that the runtime memory is not doubled as the foundation weights are shared.

Note:

## Size savings.

Size results will vary depending on the model and LoRA config. For this demo, we save ~5GB of disk space by storing weights in a separate, sharable file and ~5GB runtime memory by sharing weights at runtime through the XNNPACK weight cache. Detailed results are below.

### XNNPACK weight sharing.

The XNNPACK backend is a singleton. Weight sharing is implemented via the XNNPACK weight cache. At delegate init time, XNNPACK checks the weight cache for the weights it needs. If they don't exist, XNNPACK fetches the weights from the NamedDataMap (the API that exposes weights in a PTD file), packs them, stores them in the weight cache, and frees the originals. This means we won't keep multiple copies of the same weights around.

## [Quick Start](quick_start.md)

Download a pre-trained dummy adapter to export and run along with a regular Llama-3-2-1B model.

## Fine-tune from scratch with Unsloth and Llama-3-2-1B.

We can use [Unsloth](https://unsloth.ai/), a popular tool for fine-tuning and training LLMs, to create our LoRA adapters. Unsloth provides a [colab notebook](https://docs.unsloth.ai/get-started/fine-tuning-llms-guide/datasets-guide#synthetic-dataset-notebook) that showcases how to generate data using the Meta Synthetic Data Kit.

The training notebook takes a few shortcuts to reduce the latency/compute. You can change these settings for better results:

1. Play around with the chunk sizes and overlap to see what works best for your dataset.
2. The notebook trains on the last three data files generated; increase this for better coverage of your dataset.
3. At the training step, the notebook uses max_steps=60 to speed things up. Setting num_train_epochs=1 (or greater) and max_steps=None for a full run gives better results.

For this demo, we trained on two datasets:

1. executorch/docs/source: an adapter with domain knowledge of ExecuTorch. Using the Meta Synthetic Data Kit, you can generate QA pairs based on the ExecuTorch documentation.
2. Recent Nobel prize winners (2024-2025): an adapter with knowledge beyond the cutoff date of Llama-3-2-1B. This data was taken from [Wikipedia](https://en.wikipedia.org/wiki/List_of_Nobel_laureates).

Unsloth will output the adapter artifacts to the specified directory (in the colab notebook, `lora_model/`). You will see files like this:

```
-rw-r--r-- 1 lfq users 1092 Oct 15 11:01 adapter_config.json
-rw-r--r-- 1 lfq users 45118424 Oct 15 11:01 adapter_model.safetensors
-rw-r--r-- 1 lfq users 3827 Oct 15 11:01 chat_template.jinja
-rw-r--r-- 1 lfq users 5268 Oct 15 11:01 README.md
-rw-r--r-- 1 lfq users 454 Oct 15 11:01 special_tokens_map.json
-rw-r--r-- 1 lfq users 50642 Oct 15 11:01 tokenizer_config.json
-rw-r--r-- 1 lfq users 17209920 Oct 15 11:01 tokenizer.json
```
Or [install from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source).

```
# Clone the ExecuTorch repo from GitHub.
git clone https://github.com/pytorch/executorch.git && cd executorch
# Install ExecuTorch pip package.
./install_executorch.sh --editable
```
NOTE: some features are not available in executorch==1.0.0; use main or a recent nightly.
## Download base model
We're using https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct.
```
pip install huggingface_hub
```
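With `huggingface_hub` installed, one way to fetch the checkpoint is `huggingface-cli download` (a sketch, not copied from this repo's scripts; the target directory matches the `DOWNLOADED_PATH` used below, and the gated Llama repo requires logging in with an access token first):

```bash
# Log in once with a token that has access to the gated Llama repo.
huggingface-cli login

# Download the base model into the directory used as DOWNLOADED_PATH below.
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --local-dir Llama-3.2-1B-Instruct
```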
## Export the adapter models.
Set your paths and the model name.
```
DOWNLOADED_PATH=Llama-3.2-1B-Instruct
ADAPTER_PATH=lora_model
MODEL_NAME=<model_name>
```

Export command. Run this with a different MODEL_NAME for each adapter.

Expect files like:
```
-rw-r--r-- 1 lfq users 45555712 Oct 17 18:05 et.pte
-rw-r--r-- 1 lfq users 5994013600 Oct 17 18:05 foundation.ptd
-rw-r--r-- 1 lfq users 45555712 Oct 17 18:00 nobel.pte
```
The `foundation.ptd` file should be the same regardless of the adapter.
Notice the adapter PTE files are about the size of the adapter_model.safetensors file generated during training. The PTE contains the adapter weights (which are not shared) and the program.
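Because the program and the foundation weights are split, you can verify this directly: if you keep the `.ptd` from two separate export runs (the `run1/` and `run2/` directories below are hypothetical), their checksums should match.

```bash
# Identical checksums confirm the exports produced the same foundation weights,
# so a single copy of foundation.ptd can back every adapter PTE.
md5sum run1/foundation.ptd run2/foundation.ptd
```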
## Install runtime dependencies.
The ExecuTorch repository is configured as a git submodule at `~/executorch-examples/program-data-separation/cpp/executorch`. To initialize it:
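A standard git submodule initialization works here (generic git usage, run from the root of the executorch-examples checkout; not copied from this repo's scripts):

```bash
cd ~/executorch-examples
# Fetch the pinned ExecuTorch submodule (and any nested submodules).
git submodule update --init --recursive
```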

Pass the prompt and enable the chat template when running the model:
```
--prompt="Who were the winners of the Nobel Prize in Physics in 2025?" \
--apply_chat_template
```
Set `apply_chat_template` to true as this was trained as a chatbot.
Sample output:
## Export the model/s.

Change into the program-data-separation directory and create a directory to hold exported artifacts.
```bash
cd ~/executorch-examples/program-data-separation
mkdir models
```

Export models into the `models` directory.
- The first command generates a regular llama_3_2_1B model.
- The second command generates a llama_3_2_1B lora model.

```bash
sh export_lora.sh
```

Expect the files:
- llama_3_2_1B.pte
- llama_3_2_1B.ptd
- llama_3_2_1B_lora.pte
- foundation_weights.ptd
- tokenizer.model

llama_3_2_1B.ptd and foundation_weights.ptd contain the same contents, and you can remove llama_3_2_1B.ptd.
tokenizer.model is copied from the temp directory where we downloaded the HF artifacts. It is used at runtime.

Note:
- PTE: contains the program execution logic.
- PTD: contains the constant tensors used by the PTE. This format is similar to safetensors. It relies on flatbuffers instead of json for serde.

Sample file sizes:
```
-rw-r--r-- 1 lfq users 5994013600 Oct 17 14:31 foundation.ptd
-rw-r--r-- 1 lfq users 27628928 Oct 17 14:31 llama_3_2_1B_lora.pte
-rw-r--r-- 1 lfq users 317248 Oct 17 14:28 llama_3_2_1B.pte
```

Notice the lora - llama file size difference is about 27.3MB. This is the size of the adapter weights, and it changes depending on the LoRA config. This demo uses the config from https://huggingface.co/lucylq/llama3_1B_lora/blob/main/adapter_config.json.
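A quick sanity check of that number from the sample listing above, using plain shell arithmetic:

```bash
# LoRA PTE minus plain PTE: 27628928 - 317248 = 27311680 bytes, roughly 27.3 MB of adapter weights.
echo $(( 27628928 - 317248 ))
```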

You should see some logs showing the Resident Set Size (RSS) at various points of the execution. Some sample logs may look like this:

```
Generating with model <model file path>
RSS after loading model: 6909.328125 MiB
RSS after prompt prefill: 6909.328125 MiB
RSS after finishing text generation: 6909.328125 MiB

Generating with model <model file path>...
RSS after loading model: 7941.667969 MiB
RSS after prompt prefill: 7941.667969 MiB
RSS after finishing text generation: 7941.667969 MiB
```

There is about a 1.4GB memory increase between running the two models.
~1GB comes from embeddings that are not lowered to XNNPACK (and currently are not shared). This can be alleviated by quantizing the embeddings by adding the config `quantization.embedding_quantize=\'4,32\'` to the export command.
~40MB comes from going from the non-lora model to the lora model.

You can see the difference without weight-sharing by removing the flag `-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True` from `build_example.sh`. Expect to see almost double the memory usage, i.e. ~14-15GB instead of ~8GB.
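For reference, that flag is a CMake cache variable passed at configure time; the invocation below is a hypothetical, simplified sketch (the real arguments live in `build_example.sh`, and only the flag name comes from this README):

```bash
# Keep the -D line to share packed weights via the XNNPACK weight cache;
# drop it to reproduce the unshared (~14-15GB) memory behavior.
cmake -S . -B cmake-out \
  -DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True
cmake --build cmake-out -j
```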