program-data-separation/README.md (1 addition, 4 deletions)
@@ -1,6 +1,6 @@
 # Program Data Separation Examples

-This directory provides an example of the Program Data Separation APIs in ExecuTorch. Specifically, it showcases:
+This directory provides an example of the Program Data Separation APIs in ExecuTorch.

 1. Program data separation examples using a linear model with the portable operators and XNNPACK.
 2. LoRA inference example with a LoRA and non-LoRA model sharing foundation weights.

@@ -28,6 +28,3 @@ To enable LoRA, we generate:
 Multiple LoRA-adapted PTE files can share the same foundation weights and adding a model adapted to a new task incurs minimal binary size and runtime memory overhead.

 Please take a look at [program-data-separation/cpp/lora_example](lora_example/) for a demo of the program-data separation APIs with LoRA. This example generates and runs a LoRA and a non-LoRA model that share foundation weights. At runtime, we see that memory usage does not double.
-
-### Requirements
-LoRA is currently supported on executorch main. [Please install ExecuTorch pip package from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source), until executorch==1.0 is released.
program-data-separation/cpp/lora_example/README.md (47 additions, 28 deletions)
@@ -1,16 +1,19 @@
 # ExecuTorch LoRA Demo

-This directory contains the C++ code for the LoRA demo. This demo showcases how to export and run models that share the same architecture without inflating binary file size or runtime memory.
+This directory contains the C++ code for the LoRA demo.

-Specifically, this demo walks through exporting and running a LoRA and non-LoRA llama model without duplication of shared foundation weights on disk or in memory.
+You'll learn how to:
+1. Export two PTE files (a LoRA and a non-LoRA llama model) that share a single foundation weight file.
+2. Load and run the PTE files, and notice that the runtime memory is not doubled, as the foundation weights are shared.

-1. Exporting LoRA and non-LoRA llama models, lowered to XNNPACK, with weights in a separate file.
-2. Loading and running models with weights in a separate file.
-3. Runtime weight sharing via XNNPACK.
+Note:
+- Weight-sharing is supported with the XNNPACK backend.
+- Quantization (outside of embedding quantization) is not supported when weight-sharing.
+- There are many ways to fine-tune LoRA adapters. We will go through a few examples to create a demo.

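On that last note about fine-tuning: the demo itself uses a pre-trained adapter (linked further down in this README), but if you want to produce your own adapter weights, a hedged sketch of one possible route using torchtune is below. torchtune is not part of this demo's scripts, and the recipe and config names are assumptions that may differ across torchtune versions.

```bash
# One possible way to fine-tune a LoRA adapter (assumption: torchtune is used;
# recipe/config names below may differ across versions, check `tune ls`).
pip install torchtune

# Download the base weights (needs a Hugging Face token with Llama access).
tune download meta-llama/Llama-3.2-1B-Instruct --output-dir /tmp/Llama-3.2-1B-Instruct

# Single-device LoRA fine-tune; the resulting adapter can then be exported
# alongside the shared foundation weights as this README describes.
tune run lora_finetune_single_device --config llama3_2/1B_lora_single_device
```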
 ## Size savings.

-Size results will vary depending on the model, quantization and LoRA config. For this demo, we save ~5GB of disk space by storing weights in a separate, sharable file and ~5GB runtime memory by sharing weights at runtime through the XNNPACK weight cache. Detailed results are below.
+Size results will vary depending on the model and LoRA config. For this demo, we save ~5GB of disk space by storing weights in a separate, sharable file and ~5GB of runtime memory by sharing weights at runtime through the XNNPACK weight cache. Detailed results are below.

 ### XNNPACK weight sharing.

@@ -26,24 +29,32 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
 Or [install from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source).
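For reference, a minimal sketch of the source install mentioned above, assuming a fresh clone of executorch main; the script name is taken from current executorch main and may change, so defer to the linked guide if anything diverges.

```bash
# Install the ExecuTorch pip package from source (sketch; follow the linked guide
# if script names differ in your checkout).
git clone https://github.com/pytorch/executorch.git
cd executorch
./install_executorch.sh

# Confirm the package is installed.
pip show executorch
```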
 ## Export the model/s.
 Change into the program-data-separation directory and create a directory to hold exported artifacts.
 ```bash
 cd ~/executorch-examples/program-data-separation
 mkdir models
 ```

-Export models into the `models` directory. The first command will generated undelegated model/data files, and the second will generate XNNPACK-delegated model/data files.
+Export models into the `models` directory.
+- The first command generates a regular llama_3_2_1B model.
+- The second command generates a llama_3_2_1B lora model.
+

 ```bash
 sh export_lora.sh
 ```
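After the script finishes, a quick sanity check that the artifacts landed in `models/`; the file names below are the ones referenced in the next section and may vary with export options.

```bash
# Expect the PTE program files, the shared weight file(s), and the tokenizer.
ls -lh models/
# e.g. foundation.ptd  llama_3_2_1B.pte  llama_3_2_1B_lora.pte  llama_3_2_1B.ptd  tokenizer.model
```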
@@ -55,20 +66,20 @@ Expect the files:
 - tokenizer.model

 llama_3_2_1B.ptd and foundation_weights.ptd contain the same contents, and you can remove llama_3_2_1B.ptd.
-tokenizer.model is copied from the temp directory where we downloaded the HF artifacts. It will be used at runtime.
+tokenizer.model is copied from the temp directory where we downloaded the HF artifacts. It is used at runtime.

 Note:
 - PTE: contains the program execution logic.
-- PTD: contains the constant tensors used by the PTE. This format is similar to safetensors, but relying on flatbuffer instead of json for serde.
+- PTD: contains the constant tensors used by the PTE. The format is similar to safetensors, but it relies on flatbuffers instead of JSON for serde.

 Sample file sizes:
 ```
--rw-r--r-- 1 lfq users 4943000480 Aug 11 15:55 foundation.ptd
--rw-r--r-- 1 lfq users 1078636416 Aug 11 15:55 llama_3_2_1B_lora.pte
--rw-r--r-- 1 lfq users 1051324736 Aug 11 15:53 llama_3_2_1B.pte
+-rw-r--r-- 1 lfq users 5994013600 Oct 17 14:31 foundation.ptd
+-rw-r--r-- 1 lfq users 27628928 Oct 17 14:31 llama_3_2_1B_lora.pte
+-rw-r--r-- 1 lfq users 317248 Oct 17 14:28 llama_3_2_1B.pte
 ```

-Notice the lora - llama file size difference is about 27.3MB. This will change depending on the LoRA config. This demo is using the config from https://huggingface.co/lucylq/llama3_1B_lora/blob/main/adapter_config.json
+Notice the lora - llama file size difference is about 27.3MB. This is the size of the adapter weights, and it changes depending on the LoRA config. This demo uses the config from https://huggingface.co/lucylq/llama3_1B_lora/blob/main/adapter_config.json.
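As a quick check of that 27.3MB figure, the delta between the two PTE files in the sample listing works out as follows:

```bash
# llama_3_2_1B_lora.pte minus llama_3_2_1B.pte, using the byte counts listed above.
echo $(( 27628928 - 317248 ))                                  # 27311680 bytes
awk 'BEGIN { printf "%.1f MB\n", (27628928 - 317248) / 1e6 }'  # 27.3 MB
```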
 You should see some logs showing the Resident Set Size (RSS) at various points of the execution. Some sample logs may look like this:

 ```
-Generating with llama...
-RSS after loading model: 7886.125000 MiB
-RSS after prompt prefill: 7886.125000 MiB
-RSS after finishing text generation: 7886.125000 MiB
+Generating with model <model file path>
+RSS after loading model: 6909.328125 MiB
+RSS after prompt prefill: 6909.328125 MiB
+RSS after finishing text generation: 6909.328125 MiB

 Generating with lora...
-RSS after loading model: 7933.523438 MiB
-RSS after prompt prefill: 7933.523438 MiB
-RSS after finishing text generation: 7933.523438 MiB
+RSS after loading model: 7941.667969 MiB
+RSS after prompt prefill: 7941.667969 MiB
+RSS after finishing text generation: 7941.667969 MiB
 ```
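If you want to pull those figures out of a run programmatically (handy when comparing builds with and without the weight cache), a small sketch follows; it assumes you redirected the demo's output to a file called run.log.

```bash
# Print the "RSS after loading model" lines and the increase between the first
# (llama) and last (lora) load; field 5 is the MiB value in the log format above.
grep "RSS after loading model" run.log
awk '/RSS after loading model/ { rss[++n] = $5 }
     END { printf "RSS increase: %.2f MiB\n", rss[n] - rss[1] }' run.log
```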
-Notice the memory increase of ~47 MiB from running llama model to running lora model. You can see the difference without weight-sharing by removing the flag `-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True` from `build_example.sh`.
+There is a memory increase of roughly 1GB (~1032 MiB in the logs above) between running the two models.
+~1GB comes from the embeddings, which are not lowered to XNNPACK (and currently are not shared). This can be alleviated by quantizing the embeddings, i.e. adding the config `quantization.embedding_quantize=\'4,32\'` to the export command.
+~40MB comes from moving from the non-lora model to the lora model.
+
+You can see the difference without weight-sharing by removing the flag `-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True` from `build_example.sh`. Expect to see almost double the memory usage, i.e. ~14-15GB instead of ~8GB.
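One way to reproduce the no-weight-sharing numbers is to flip the flag in place and rebuild. This is a sketch under two assumptions: that `build_example.sh` forwards the flag to CMake as written above, and that setting the option to False is equivalent to removing it (i.e. it defaults to off).

```bash
# Disable the XNNPACK weight cache in the build script, then rebuild.
# On macOS, use `sed -i '' -e ...` instead of `sed -i ...`.
sed -i 's/-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True/-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=False/' build_example.sh
sh build_example.sh
# Rerun the demo and compare the RSS logs; expect roughly double the memory.
```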