program-data-separation/README.md
@@ -27,5 +27,7 @@ To enable LoRA, we generate:
Multiple LoRA-adapted PTE files can share the same foundation weights, and adding a model adapted to a new task incurs minimal binary size and runtime memory overhead.

Please take a look at [program-data-separation/cpp/lora_example](lora_example/) for a demo of the program-data separation APIs with LoRA. This example generates and runs a LoRA and a non-LoRA model that share foundation weights. At runtime, we see that memory usage does not double.
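
Below is a minimal C++ sketch of what loading two programs against one shared weight file can look like. It assumes a `Module` constructor overload that takes the program path and the `.ptd` path (check this against the ExecuTorch version you have installed); see the example directory above for the demo's actual runner code.

```cpp
// Minimal sketch, not the demo's runner. Assumption: executorch::extension::Module
// accepts a second path argument pointing at the external .ptd data file.
#include <executorch/extension/module/module.h>

using executorch::extension::Module;
using executorch::runtime::Error;

int main() {
  // Both PTE files reference the same foundation weights stored in the PTD.
  Module llama("llama_3_2_1B.pte", "foundation.ptd");
  Module lora("llama_3_2_1B_lora.pte", "foundation.ptd");

  // Loading the methods initializes the XNNPACK delegates; with the weight
  // cache enabled, the shared foundation weights are packed only once.
  if (llama.load_method("forward") != Error::Ok ||
      lora.load_method("forward") != Error::Ok) {
    return 1;
  }
  return 0;
}
```
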
### Requirements
LoRA is currently supported on ExecuTorch main. [Please install the ExecuTorch pip package from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source) until executorch==1.0 is released.
program-data-separation/cpp/lora_example/README.md
@@ -1,7 +1,20 @@
# ExecuTorch LoRA Demo
This directory contains the C++ code for the LoRA demo, which showcases how to export and run models that share the same architecture without inflating binary file size or runtime memory.

Specifically, this demo walks through exporting and running a LoRA and a non-LoRA llama model without duplicating the shared foundation weights on disk or in memory:
1. Exporting LoRA and non-LoRA llama models, lowered to XNNPACK, with weights in a separate file.
2. Loading and running models with weights in a separate file.
3. Runtime weight sharing via XNNPACK.
## Size savings.
Size results will vary depending on the model, quantization, and LoRA config. For this demo, we save ~5GB of disk space by storing weights in a separate, sharable file and ~5GB of runtime memory by sharing weights at runtime through the XNNPACK weight cache. Detailed results are below.
### XNNPACK weight sharing.
The XNNPACK backend is a singleton. Weight sharing is implemented via the XNNPACK weight cache. At delegate init time, XNNPACK checks the weight cache for the weights it needs. If they aren't there, XNNPACK fetches the weights from the NamedDataMap (the API that exposes weights in a PTD file), packs them, stores them in the weight cache, and frees the originals. This means we never keep multiple copies of the same weights in memory.
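
To make that flow concrete, here is an illustrative sketch of the check-pack-free pattern. The names below (the `NamedDataMap` stand-in, `pack_for_xnnpack`, `weight_cache`) are simplified stand-ins, not the actual XNNPACK or ExecuTorch symbols.

```cpp
// Illustrative sketch only; these types and functions are hypothetical stand-ins.
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Buffer = std::vector<uint8_t>;

// Stand-in for the API that exposes the raw weights stored in a .ptd file.
struct NamedDataMap {
  std::map<std::string, Buffer> entries;
  Buffer get_data(const std::string& key) const { return entries.at(key); }
};

// Stand-in for packing weights into XNNPACK's internal layout.
Buffer pack_for_xnnpack(const Buffer& raw) { return raw; }

// Cache owned by the singleton backend: every delegate instance that needs
// weight `key` receives the same packed copy.
std::map<std::string, Buffer> weight_cache;

const Buffer& get_packed_weight(const std::string& key, const NamedDataMap& ndm) {
  auto it = weight_cache.find(key);
  if (it != weight_cache.end()) {
    return it->second;                   // already packed by an earlier delegate init
  }
  Buffer raw = ndm.get_data(key);        // fetch from the NamedDataMap
  Buffer packed = pack_for_xnnpack(raw); // pack into the backend's layout
  // `raw` is destroyed at the end of this scope, so only one packed copy remains.
  return weight_cache.emplace(key, std::move(packed)).first->second;
}
```
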
## Virtual environment setup.
Create and activate a Python virtual environment:
@@ -46,7 +59,19 @@ tokenizer.model is copied from the temp directory where we downloaded the HF art
Note:
- PTE: contains the program execution logic.
- PTD: contains the constant tensors used by the PTE. This format is similar to safetensors, but relies on flatbuffers instead of JSON for serde.

Sample file sizes:
```
-rw-r--r-- 1 lfq users 4943000480 Aug 11 15:55 foundation.ptd
-rw-r--r-- 1 lfq users 1078636416 Aug 11 15:55 llama_3_2_1B_lora.pte
-rw-r--r-- 1 lfq users 1051324736 Aug 11 15:53 llama_3_2_1B.pte
```

Notice that the difference between the lora and llama PTE file sizes is about 27.3MB. This will change depending on the LoRA config; this demo uses the config from https://huggingface.co/lucylq/llama3_1B_lora/blob/main/adapter_config.json

You should see some logs showing the Resident Set Size (RSS) at various points of the execution. Some sample logs may look like this:
```
Generating with llama...
RSS after loading model: 7886.125000 MiB
RSS after prompt prefill: 7886.125000 MiB
RSS after finishing text generation: 7886.125000 MiB

Generating with lora...
RSS after loading model: 7933.523438 MiB
RSS after prompt prefill: 7933.523438 MiB
RSS after finishing text generation: 7933.523438 MiB
```

Notice the memory increase of ~47 MiB from running the llama model to running the lora model. You can see the difference without weight sharing by removing the flag `-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True` from `build_example.sh`.
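
One simple way to obtain numbers like these yourself on Linux is to read `VmRSS` from `/proc/self/status`. The helper below is our own sketch, not the demo's implementation.

```cpp
// Linux-only sketch: report the current resident set size (RSS) in MiB.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

double rss_mib() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.rfind("VmRSS:", 0) == 0) {  // e.g. "VmRSS:   8123648 kB"
      std::istringstream fields(line.substr(6));
      long kib = 0;
      fields >> kib;                     // value is reported in kB (KiB)
      return kib / 1024.0;               // KiB -> MiB
    }
  }
  return 0.0;  // VmRSS not found (e.g. non-Linux platform)
}

int main() {
  std::cout << "RSS now: " << rss_mib() << " MiB\n";
  return 0;
}
```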