
Commit 6dfad68

Chester Hu authored and pytorchbot committed
Document update (#5692)
Summary:
Pull Request resolved: #5692

1. All caps for xnnpack.
2. Provide command to rename tokenizer file.
3. Other format fixes.

Reviewed By: kirklandsign

Differential Revision: D63477936

fbshipit-source-id: 9dd63d132f0b811fa9bb6ca7b616aa56fb503830
(cherry picked from commit ff6607e)
1 parent 8d16c52 · commit 6dfad68

File tree: 2 files changed (+43 / -11 lines)


examples/demo-apps/android/LlamaDemo/docs/delegates/qualcomm_README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -71,8 +71,8 @@ cmake --build cmake-out -j16 --target install --config Release
 
 
 
-### Setup Llama Runner
-Next we need to build and compile the Llama runner. This is similar to the requirements for running Llama with XNNPack.
+### Setup Llama Runner
+Next we need to build and compile the Llama runner. This is similar to the requirements for running Llama with XNNPACK.
 ```
 sh examples/models/llama2/install_requirements.sh
 
@@ -130,9 +130,9 @@ You may also wonder what the "--metadata" flag is doing. This flag helps export
 
 Convert tokenizer for Llama 2
 ```
-python -m extension.llm.tokenizer.tokenizer -t <tokenizer.model> -o tokenizer.bin
+python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin
 ```
-Convert tokenizer for Llama 3 - Rename tokenizer.model to tokenizer.bin.
+Rename tokenizer for Llama 3 with command: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support tokenizer in original format directly.
 
 
 ### Export with Spinquant (Llama 3 8B only)
````
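The two tokenizer steps this hunk documents can be run back to back; a minimal sketch, assuming `tokenizer.model` sits in the current directory:

```
# Llama 2: convert tokenizer.model to the tokenizer.bin file the runner loads
python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin

# Llama 3: no conversion step, the file only needs the .bin name
mv tokenizer.model tokenizer.bin
```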

examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md

Lines changed: 39 additions & 7 deletions
````diff
@@ -1,8 +1,10 @@
-# Building ExecuTorch Android Demo App for Llama running XNNPack
+# Building ExecuTorch Android Demo App for Llama/Llava running XNNPACK
 
-This tutorial covers the end to end workflow for building an android demo app using CPU on device via XNNPack framework.
+**[UPDATE - 09/25]** We have added support for running [Llama 3.2 models](#for-llama-32-1b-and-3b-models) on the XNNPACK backend. We currently support inference on their original data type (BFloat16). We have also added instructions to run [Llama Guard 1B models](#for-llama-guard-1b-models) on-device.
+
+This tutorial covers the end to end workflow for building an android demo app using CPU on device via XNNPACK framework.
 More specifically, it covers:
-1. Export and quantization of Llama and Llava models against the XNNPack backend.
+1. Export and quantization of Llama and Llava models against the XNNPACK backend.
 2. Building and linking libraries that are required to inference on-device for Android platform.
 3. Building the Android demo app itself.
 
````
````diff
@@ -56,8 +58,38 @@ Optional: Use the --pybind flag to install with pybindings.
 ## Prepare Models
 In this demo app, we support text-only inference with up-to-date Llama models and image reasoning inference with LLaVA 1.5.
 
-### For Llama model
-* You can download original model weights for Llama through Meta official [website](https://llama.meta.com/), or via Huggingface ([Llama 3.1 8B Instruction](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct))
+### For Llama 3.2 1B and 3B models
+We have supported BFloat16 as a data type on the XNNPACK backend for Llama 3.2 1B/3B models.
+* You can request and download model weights for Llama through Meta official [website](https://llama.meta.com/).
+* For chat use-cases, download the instruct models instead of pretrained.
+* Run `examples/models/llama2/install_requirements.sh` to install dependencies.
+* The 1B model in BFloat16 format can run on mobile devices with 8GB RAM. The 3B model will require 12GB+ RAM.
+* Export Llama model and generate .pte file as below:
+
+```
+python -m examples.models.llama2.export_llama --checkpoint <checkpoint.pth> --params <params.json> -kv -X -d bf16 --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' --output_name="llama3_2.pte"
+```
+
+* Rename tokenizer for Llama 3.2 with command: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support tokenizer in original format directly.
+
+For more detail using Llama 3.2 lightweight models including prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2#-llama-3.2-lightweight-models-(1b/3b)-).
+
+
+### For Llama Guard 1B models
+To safeguard your application, you can use our Llama Guard models for prompt classification or response classification as mentioned [here](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/).
+* Llama Guard 3-1B is a fine-tuned Llama-3.2-1B pretrained model for content safety classification. It is aligned to safeguard against the [MLCommons standardized hazards taxonomy](https://arxiv.org/abs/2404.12241).
+* You can download the latest Llama Guard 1B INT4 model, which is already exported for ExecuTorch, using instructions from [here](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3). This model is pruned and quantized to 4-bit weights using 8da4w mode and reduced the size to <450MB to optimize deployment on edge devices.
+* You can use the same tokenizer from Llama 3.2.
+* To try this model, choose Model Type as LLAMA_GUARD_3 in the demo app below and try prompt classification for a given user prompt.
+* We prepared this model using the following command
+
+```
+python -m examples.models.llama2.export_llama --checkpoint <pruned llama guard 1b checkpoint.pth> --params <params.json> -d fp32 -kv --use_sdpa_with_kv_cache --quantization_mode 8da4w --group_size 256 --xnnpack --max_seq_length 8193 --embedding-quantize 4,32 --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' --output_prune_map <llama_guard pruned layers map.json> --output_name="llama_guard_3_1b_pruned_xnnpack.pte"
+```
+
+
+### For Llama 3.1 and Llama 2 models
+* You can download original model weights for Llama through Meta official [website](https://llama.meta.com/).
 * For Llama 2 models, Edit params.json file. Replace "vocab_size": -1 with "vocab_size": 32000. This is a short-term workaround
 * Run `examples/models/llama2/install_requirements.sh` to install dependencies.
 * Export Llama model and generate .pte file
````
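The new Llama 3.2 steps in this hunk chain together as below; a minimal sketch, assuming the downloaded weights are named `consolidated.00.pth` and sit next to `params.json` and `tokenizer.model` in the working directory (these file names are assumptions, substitute your own):

```
# Install the export dependencies
sh examples/models/llama2/install_requirements.sh

# Export the 1B/3B checkpoint to a BFloat16 .pte file
# (consolidated.00.pth and params.json are assumed file names)
python -m examples.models.llama2.export_llama \
  --checkpoint consolidated.00.pth \
  --params params.json \
  -kv -X -d bf16 \
  --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' \
  --output_name="llama3_2.pte"

# Llama 3.2 tokenizer needs no conversion, only the .bin name
mv tokenizer.model tokenizer.bin
```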
````diff
@@ -70,9 +102,9 @@ You may wonder what the '--metadata' flag is doing. This flag helps export t
 
 * Convert tokenizer for Llama 2
 ```
-python -m extension.llm.tokenizer.tokenizer -t <tokenizer.model> -o tokenizer.bin
+python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin
 ```
-* Convert tokenizer for Llama 3 - Rename `tokenizer.model` to `tokenizer.bin`.
+* Rename tokenizer for Llama 3.1 with command: `mv tokenizer.model tokenizer.bin`. We are updating the demo app to support tokenizer in original format directly.
 
 ### For LLaVA model
 * For the Llava 1.5 model, you can get it from Huggingface [here](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
````
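The Llama 2 `vocab_size` workaround mentioned in the context lines of the previous hunk can also be scripted; a sketch assuming GNU sed and a `params.json` in the working directory:

```
# Short-term workaround from the docs: replace the -1 placeholder
# with Llama 2's actual vocabulary size
sed -i 's/"vocab_size": -1/"vocab_size": 32000/' params.json
```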
