
Commit 4a3ff1f

Refactor program-data separation example (#51)
* Refactor program-data separation example * refactor * refactor
1 parent 45ec092 commit 4a3ff1f

File tree

7 files changed

+106
-76
lines changed


program-data-separation/README.md

Lines changed: 15 additions & 74 deletions
@@ -1,6 +1,10 @@
  # Program Data Separation Examples

- This directory provides an example of the Program Data Separation APIs in ExecuTorch.
+ This directory provides an example of the Program Data Separation APIs in ExecuTorch. Specifically, it showcases:
+ 1. A program-data separation example using a linear model with portable operators and XNNPACK.
+ 2. A LoRA inference example with a LoRA and a non-LoRA model sharing foundation weights.
+
+
  ## Program Data Separation

  The program-data separation APIs allow users to generate a separate data file when exporting and lowering a model, i.e. a PTE file containing the model execution program, and one (or more) [PTD](https://github.com/pytorch/executorch/blob/main/extension/flat_tensor/README.md) files containing only weights.
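The program/data split described above can be illustrated with a plain-Python analogy. This is an analogy only: a real PTD file uses the flatbuffer-based flat_tensor format, not `.npz`, and `run_linear` is a made-up stand-in for a PTE program.

```python
import os
import tempfile

import numpy as np

# Analogy only: the "data file" here is an .npz archive standing in for
# linear.ptd; the function below stands in for the PTE program.
tmpdir = tempfile.mkdtemp()
weights_path = os.path.join(tmpdir, "linear_weights.npz")
np.savez(weights_path, weight=np.full((3, 3), 2.0), bias=np.ones(3))

def run_linear(x, weights_file):
    # Execution logic only; the constant tensors are loaded from the
    # external data file at run time.
    data = np.load(weights_file)
    return data["weight"] @ x + data["bias"]

print(run_linear(np.ones(3), weights_path))  # [7. 7. 7.]
```

Because the weights live in their own file, a second "program" could load the same `weights_path`, which is the deduplication use-case listed below.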

@@ -9,82 +13,19 @@ PTD files are used to store data outside of the PTE file. Some use-cases:
  - Deduplication: sharing model weights between multiple executable PTE files. This can significantly reduce binary file size and runtime memory usage.
  - Flexible deployment: allow async updates between program and data, especially if they are updated on different cadences.

- ## LoRA
- A major use-case that program-data separation enables is inference with multiple LoRA adapters. LoRA is a fine-tuning technique introduced in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685). LoRA fine-tuning produces lightweight 'adapter' weights that can be applied to an existing model to adapt it to a new task. LoRA adapters are typically small in comparison to LLM foundation weights, generally on the order of KB-MB, depending on the finetuning setup and model size.
-
- With program-data separation, users can generate a PTE file containing the program and LoRA weights, and save the original foundation weights to a separate PTD file. Provided they are based on the same underlying model, multiple LoRA-adapted PTE files can share the same foundation weights. This means adding a model adapted to a new task incurs minimal binary size and runtime memory overhead: the cost of the LoRA adapter weights.
-
- An example of this usage is coming soon.
-
- ## Virtual environment setup
- Create and activate a Python virtual environment:
- ```bash
- python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
- ```
- Or alternatively, [install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
- ```bash
- conda create -yn executorch-ptd python=3.10.0 && conda activate executorch-ptd
- ```
-
- Install dependencies:
-
- [Please install ExecuTorch pip package from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source), until executorch==0.7.0 is released.
-
- ```
- pip install executorch==0.7.0
- ```
-
- ## Export a model with program-data separation
- To export a non-delegated linear model into the current directory:
- ```bash
- python export.py --outdir .
- ```
- Expect the files 'linear.pte' and 'linear.ptd'.
-
- To export a linear model delegated to XNNPACK into the current directory:
- ```bash
- python export.py --outdir . --xnnpack
- ```
- Expect the files 'linear_xnnpack.pte' and 'linear_xnnpack.ptd'.
-
- Note:
- - PTE: contains the program execution logic.
- - PTD: contains the constant tensors used by the PTE.
-
  For more information on the PTD data format, please see the [flat_tensor](https://github.com/pytorch/executorch/blob/main/extension/flat_tensor/README.md) directory.

- ## Runtime (cpp)
- The cpp/ directory contains the executorch submodule along with a main.cpp file that demonstrates how to load the PTE and PTD files and execute the program.
-
- First, export your PTE and PTD files using the instructions above.
-
- **Build instructions**
-
- Change to the cpp directory.
- ```
- cd cpp
- ```
-
- Create the build directory if it doesn't exist.
- ```
- mkdir -p build
- cd build
- ```
+ ## Linear example
+ For a demo of the program-data separation APIs using a linear model, please see [program-data-separation/cpp/linear_example](linear_example/). This example generates and runs a program-data-separated linear model, with weights and bias in a separate .ptd file.

- Configure CMake.
- ```
- cmake -DCMAKE_BUILD_TYPE=Release ..
- ```
+ ## LoRA example
+ A major use-case that program-data separation enables is inference with multiple LoRA adapters. LoRA is a fine-tuning technique introduced in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685). LoRA fine-tuning produces lightweight 'adapter' weights that can be applied to an existing model to adapt it to a new task. LoRA adapters are typically small in comparison to LLM foundation weights, on the order of KB-MB depending on the finetuning setup and model size.

- Build the project.
- ```
- cmake --build . -j$(nproc)
- echo "Build complete! Executable located at: ./bin/executorch_program_data_separation"
- ```
+ To enable LoRA, we generate:
+ - PTE file(s): containing the program and LoRA adapter weights.
+ - PTD file: containing the foundation weights.

- Run the executable.
- ```
- ./bin/executorch_program_data_separation --model-path ../../linear.pte --data-path ../../linear.ptd
+ Multiple LoRA-adapted PTE files can share the same foundation weights, so adding a model adapted to a new task incurs minimal binary size and runtime memory overhead.

- ./bin/executorch_program_data_separation --model-path ../../linear_xnnpack.pte --data-path ../../linear_xnnpack.ptd
- ```
+ ### Requirements
+ LoRA is currently supported on executorch main. [Please install the ExecuTorch pip package from source](https://docs.pytorch.org/executorch/stable/using-executorch-building-from-source.html#install-executorch-pip-package-from-source) until executorch==1.0 is released.
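The adapter-size claim above can be checked with a short NumPy sketch of a single LoRA-adapted linear layer. This is illustrative only (not ExecuTorch code); the layer dimensions and rank are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 4096, 4096, 8  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # foundation weight (shared, .ptd-style)
A = rng.standard_normal((rank, d_in)).astype(np.float32)   # LoRA down-projection
B = np.zeros((d_out, rank), dtype=np.float32)              # LoRA up-projection (zero-initialized)

x = rng.standard_normal(d_in).astype(np.float32)
y = W @ x + B @ (A @ x)  # adapted forward pass: y = (W + B A) x

# The adapter holds rank * (d_in + d_out) parameters vs. d_in * d_out for W.
ratio = W.size / (A.size + B.size)
print(f"foundation/adapter parameter ratio: {ratio:.0f}x")  # 256x for these sizes
```

At rank 8 the adapter is 1/256th the size of the foundation weight, which is why shipping an extra LoRA-adapted PTE costs little once the foundation weights are shared.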

program-data-separation/cpp/CMakeLists.txt

Lines changed: 1 addition & 1 deletion
  # Add ExecuTorch subdirectory
  add_subdirectory("executorch")

- set(DEMO_SOURCES main.cpp)
+ set(DEMO_SOURCES linear_example/main.cpp)

  # Create executable
  add_executable(executorch_program_data_separation ${DEMO_SOURCES})
Submodule executorch updated 1095 files
Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
+ # ExecuTorch Program Data Separation Demo (C++)
+
+ This directory contains the C++ code to run the examples generated in [program-data-separation](../program-data-separation/README.md).
+
+
+ ## Virtual environment setup
+ Create and activate a Python virtual environment:
+ ```bash
+ python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip
+ ```
+ Or alternatively, [install conda on your machine](https://conda.io/projects/conda/en/latest/user-guide/install/index.html)
+ ```bash
+ conda create -yn executorch-ptd python=3.10.0 && conda activate executorch-ptd
+ ```
+
+ Install dependencies:
+ ```bash
+ pip install executorch==0.7.0
+ ```
+
+ ## Export the models
+
+ Change into the program-data-separation directory and create a directory to hold the exported artifacts.
+ ```bash
+ cd ~/executorch-examples/program-data-separation
+ mkdir models
+ ```
+
+ Export models into the `models` directory. The first command will generate undelegated model/data files, and the second will generate XNNPACK-delegated model/data files.
+ ```bash
+ python export_linear.py --outdir models/
+ python export_linear.py --outdir models/ --xnnpack
+ ```
+ Expect the files `linear.pte`/`linear.ptd` and `linear_xnnpack.pte`/`linear_xnnpack.ptd`.
+
+ Note:
+ - PTE: contains the program execution logic.
+ - PTD: contains the constant tensors used by the PTE.
+
+ See [program-data-separation](../../program-data-separation/README.md) for instructions.
+
+ ## Install runtime dependencies
+ The ExecuTorch repository is configured as a git submodule at `~/executorch-examples/program-data-separation/cpp/executorch`. To initialize it:
+ ```bash
+ cd ~/executorch-examples/
+ git submodule sync
+ git submodule update --init --recursive
+ ```
+ Install the dev requirements for ExecuTorch:
+
+ ```bash
+ cd ~/executorch-examples/program-data-separation/cpp/executorch
+ pip install -r requirements-dev.txt
+ ```
+
+ ## Build the runtime
+ Build the executable:
+ ```bash
+ cd ~/executorch-examples/program-data-separation/cpp/linear_example
+ chmod +x build_example.sh
+ ./build_example.sh
+ ```
+
+ ## Run the executable
+ ```bash
+ ./build/bin/executorch_program_data_separation --model-path ../../models/linear.pte --data-path ../../models/linear.ptd
+
+ ./build/bin/executorch_program_data_separation --model-path ../../models/linear_xnnpack.pte --data-path ../../models/linear_xnnpack.ptd
+ ```
+
+ ## Clean up
+ ```bash
+ rm -rf build
+ cd ~/executorch-examples/program-data-separation
+ rm -rf models
+ ```
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+ #!/bin/bash
+ set -e
+
+ # Clean and recreate the build directory
+ rm -rf build
+ mkdir -p build
+ cd build
+
+ # Configure CMake
+ cmake -DCMAKE_BUILD_TYPE=Release ../..
+
+ # Build the project
+ cmake --build . -j$(nproc)
+
+ echo "Build complete! Executable located at: ./build/bin/executorch_program_data_separation"
