
Commit e382a72: address comments
Parent: f2f06c1

4 files changed: +57 lines, -12 lines

program-data-separation/README.md

Lines changed: 2 additions & 0 deletions
@@ -27,5 +27,7 @@ To enable LoRA, we generate:
 
 Multiple LoRA-adapted PTE files can share the same foundation weights, and adding a model adapted to a new task incurs minimal binary size and runtime memory overhead.
 
+Please take a look at [program-data-separation/cpp/lora_example](lora_example/) for a demo of the program-data separation APIs with LoRA. This example generates and runs a LoRA and a non-LoRA model that share foundation weights. At runtime, memory usage does not double.
+
 ### Requirements
 LoRA is currently supported on executorch main. [Please install the ExecuTorch pip package from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source) until executorch==1.0 is released.
Submodule executorch updated 328 files

program-data-separation/cpp/lora_example/README.md

Lines changed: 47 additions & 5 deletions
@@ -1,7 +1,20 @@
-# ExecuTorch Program Data Separation Demo C++.
+# ExecuTorch LoRA Demo
 
-This directory contains the C++ code to run the examples generated in [program-data-separation](../program-data-separation/README.md).
+This directory contains the C++ code for the LoRA demo. It showcases how to export and run models that share the same architecture without inflating binary file size or runtime memory.
 
+Specifically, this demo walks through exporting and running a LoRA and a non-LoRA llama model without duplicating the shared foundation weights on disk or in memory:
+
+1. Exporting LoRA and non-LoRA llama models, lowered to XNNPACK, with weights in a separate file.
+2. Loading and running models with weights in a separate file.
+3. Runtime weight sharing via XNNPACK.
+
+## Size savings.
+
+Size results will vary depending on the model, quantization, and LoRA config. For this demo, we save ~5GB of disk space by storing weights in a separate, sharable file, and ~5GB of runtime memory by sharing weights at runtime through the XNNPACK weight cache. Detailed results are below.
+
+### XNNPACK weight sharing.
+
+The XNNPACK backend is a singleton. Weight sharing is implemented via the XNNPACK weight cache. At delegate init time, XNNPACK checks the weight cache for the weights it needs. If they are not there, XNNPACK fetches them from the NamedDataMap (the API that exposes weights in a PTD file), packs them, stores them in the weight cache, and frees the originals. This means we never keep multiple copies of the same weights around.
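The sketch below illustrates that cache-then-pack flow in minimal C++. It is a conceptual illustration only: `NamedDataMap`, `WeightCache`, and `pack_for_xnnpack` here are hypothetical stand-ins, not the actual ExecuTorch or XNNPACK types.

```cpp
// Conceptual sketch of weight-cache behavior; names are illustrative stand-ins.
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Stand-in for the API that exposes named tensors stored in a .ptd file.
struct NamedDataMap {
  std::map<std::string, Bytes> tensors;
  Bytes get_data(const std::string& name) const { return tensors.at(name); }
};

// Stand-in for XNNPACK's packing step (layout transform for fast kernels).
static Bytes pack_for_xnnpack(const Bytes& raw) { return raw; }

class WeightCache {
 public:
  // Return packed weights for `name`, packing and caching them on first use.
  const Bytes& get_or_pack(const std::string& name, const NamedDataMap& ndm) {
    auto it = cache_.find(name);
    if (it != cache_.end()) {
      return it->second;  // Already packed by a previously loaded model.
    }
    Bytes raw = ndm.get_data(name);        // Fetch from the PTD-backed map.
    Bytes packed = pack_for_xnnpack(raw);  // Pack for execution.
    // `raw` is freed when it goes out of scope; only the packed copy stays.
    return cache_.emplace(name, std::move(packed)).first->second;
  }

 private:
  std::map<std::string, Bytes> cache_;
};

int main() {
  NamedDataMap ptd{{{"w1", Bytes(1024, 0)}}};
  WeightCache cache;
  const Bytes& a = cache.get_or_pack("w1", ptd);  // packs once
  const Bytes& b = cache.get_or_pack("w1", ptd);  // reuses the packed entry
  return &a == &b ? 0 : 1;  // same storage: no duplicate copy of the weights
}
```

Because the cache is keyed by tensor name, a second model that references the same foundation weights reuses the already-packed entries rather than packing its own copies.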
 
 ## Virtual environment setup.
 Create and activate a Python virtual environment:
@@ -46,7 +59,19 @@ tokenizer.model is copied from the temp directory where we downloaded the HF art
 
 Note:
 - PTE: contains the program execution logic.
-- PTD: contains the constant tensors used by the PTE.
+- PTD: contains the constant tensors used by the PTE. The format is similar to safetensors, but relies on flatbuffers instead of JSON for serde.
+
+Sample file sizes:
+```
+-rw-r--r-- 1 lfq users 4943000480 Aug 11 15:55 foundation.ptd
+-rw-r--r-- 1 lfq users 1078636416 Aug 11 15:55 llama_3_2_1B_lora.pte
+-rw-r--r-- 1 lfq users 1051324736 Aug 11 15:53 llama_3_2_1B.pte
+```
+
+Notice that the file size difference between the lora and llama PTEs is about 27.3MB. This will change depending on the LoRA config. This demo uses the config from https://huggingface.co/lucylq/llama3_1B_lora/blob/main/adapter_config.json
+```
+{"r": 64, "lora_alpha": 128, "target_modules": ["q_proj", "v_proj", "o_proj"], "peft_type": "LORA", "base_model_name_or_path": "meta-llama/Llama-3.2-1B-Instruct"}
+```
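As a rough sanity check on that ~27.3MB delta, the sketch below estimates the adapter parameter count implied by this config. The Llama 3.2 1B shapes used (16 layers, hidden size 2048, 8 KV heads with head dim 64) and the bf16 storage are assumptions for illustration; the real PTE delta also includes serialization overhead, so the numbers will not match exactly.

```cpp
// Back-of-envelope LoRA adapter size for the config above (assumed shapes).
#include <cstdint>
#include <cstdio>

int main() {
  const int64_t layers = 16;      // assumed Llama 3.2 1B layer count
  const int64_t hidden = 2048;    // assumed hidden size
  const int64_t kv_dim = 8 * 64;  // assumed num_key_value_heads * head_dim
  const int64_t r = 64;           // "r" from adapter_config.json

  // Each adapted linear W (out x in) gains A (r x in) and B (out x r).
  auto lora_params = [&](int64_t out, int64_t in) { return r * (in + out); };

  const int64_t per_layer = lora_params(hidden, hidden)   // q_proj
                          + lora_params(kv_dim, hidden)   // v_proj
                          + lora_params(hidden, hidden);  // o_proj
  const int64_t total = per_layer * layers;

  std::printf("adapter params: %lld (~%.1f MB in bf16)\n",
              static_cast<long long>(total), total * 2.0 / 1e6);
  // Prints roughly 11M params (~22 MB); the observed 27.3MB PTE delta also
  // includes flatbuffer/serialization overhead and depends on the dtype used.
  return 0;
}
```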
 
 ## Install runtime dependencies.
 The ExecuTorch repository is configured as a git submodule at `~/executorch-examples/program-data-separation/cpp/executorch`. To initialize it:
@@ -55,7 +80,7 @@ cd ~/executorch-examples/
 git submodule sync
 git submodule update --init --recursive
 ```
-Install dev requirements for ExecuTorch
+Install dev requirements for ExecuTorch:
 
 ```bash
 cd ~/executorch-examples/program-data-separation/cpp/executorch
@@ -79,10 +104,27 @@ sh build_example.sh
 ```bash
 cd ~/executorch-examples/program-data-separation/cpp/lora_example
 
-./build/bin/executorch_program_data_separation --lora_model_path=../../llama_3_2_1B_lora.pte --llama_model_path=../../llama_3_2_1B.pte --tokenizer_path=../../tokenizer.model --data_path=../../foundation.ptd
+./build/bin/executorch_program_data_separation --lora_model_path=../../llama_3_2_1B_lora.pte --llama_model_path=../../llama_3_2_1B.pte --tokenizer_path=../../tokenizer.model --foundation_weights_path=../../foundation.ptd
+```
+
+You should see logs showing the Resident Set Size (RSS) at various points of execution. Sample logs may look like the following:
+
+```
+Generating with llama...
+RSS after loading model: 7886.125000 MiB
+RSS after prompt prefill: 7886.125000 MiB
+RSS after finishing text generation: 7886.125000 MiB
+
+Generating with lora...
+RSS after loading model: 7933.523438 MiB
+RSS after prompt prefill: 7933.523438 MiB
+RSS after finishing text generation: 7933.523438 MiB
 ```
+Notice the memory increase of only ~47 MiB when going from the llama model to the lora model. You can see the difference without weight sharing by removing the flag `-DEXECUTORCH_XNNPACK_ENABLE_WEIGHT_CACHE=True` from `build_example.sh`.
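If you want to instrument memory yourself, here is a minimal Linux-only sketch that reads the current RSS from `/proc/self/statm`. It is illustrative only and not necessarily how the demo's runner computes the numbers above.

```cpp
// Minimal Linux-only RSS probe: reads resident pages from /proc/self/statm.
#include <unistd.h>

#include <cstdio>
#include <fstream>

double rss_mib() {
  std::ifstream statm("/proc/self/statm");
  long total_pages = 0, resident_pages = 0;
  statm >> total_pages >> resident_pages;        // second field is RSS, in pages
  const long page_size = sysconf(_SC_PAGESIZE);  // bytes per page
  return resident_pages * static_cast<double>(page_size) / (1024.0 * 1024.0);
}

int main() {
  std::printf("RSS now: %f MiB\n", rss_mib());
  return 0;
}
```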
 
 ## Clean up.
+```bash
 rm -rf build
 cd ~/executorch-examples/program-data-separation
 rm -rf *.pte *.ptd tokenizer.model
+```

program-data-separation/cpp/lora_example/main.cpp

Lines changed: 7 additions & 6 deletions
@@ -28,8 +28,8 @@ DEFINE_string(lora_model_path, "llama_3_2_1B_lora.pte",
               "LoRA model serialized in flatbuffer format.");
 DEFINE_string(llama_model_path, "llama_3_2_1B.pte",
               "Model serialized in flatbuffer format.");
-DEFINE_string(data_path, "foundation.ptd",
-              "Data serialized in flatbuffer format.");
+DEFINE_string(foundation_weights_path, "foundation.ptd",
+              "Foundation weights serialized in flatbuffer format.");
 
 DEFINE_string(tokenizer_path, "tokenizer.model", "Tokenizer stuff.");
 
@@ -77,7 +77,7 @@ int main(int argc, char *argv[]) {
 
   const char *lora_model_path = FLAGS_lora_model_path.c_str();
   const char *llama_model_path = FLAGS_llama_model_path.c_str();
-  const char *data_path = FLAGS_data_path.c_str();
+  const char *foundation_weights_path = FLAGS_foundation_weights_path.c_str();
 
   const char *tokenizer_path = FLAGS_tokenizer_path.c_str();
   const char *prompt = FLAGS_prompt.c_str();
@@ -102,9 +102,10 @@ int main(int argc, char *argv[]) {
   // Create runners.
   std::unique_ptr<llm::TextLLMRunner> llama_runner =
       llm::create_text_llm_runner(llama_model_path, std::move(tokenizer1),
-                                   data_path, temperature);
-  std::unique_ptr<llm::TextLLMRunner> lora_runner = llm::create_text_llm_runner(
-      lora_model_path, std::move(tokenizer2), data_path, temperature);
+                                   foundation_weights_path, temperature);
+  std::unique_ptr<llm::TextLLMRunner> lora_runner =
+      llm::create_text_llm_runner(lora_model_path, std::move(tokenizer2),
+                                  foundation_weights_path, temperature);
 
   // Generate.
   llm::GenerationConfig config{.seq_len = seq_len, .temperature = temperature};
