Commit c1dba0f

Qualcomm AI Engine Direct - Support simple_eval in calibration, perpl… (#12958)
### Summary
- Enable perplexity evaluation on device with `llama.py`.
- Evaluate perplexity after convert_pt2e (CPU QDQ).
- Enable quantization to use simple_eval as the calibration dataset.
- Enable a UT that checks perplexity for Qwen, which should be more reliable than checking the string output.

Will have a follow-up PR to address:
- External CI enablement for Qwen on x86 (if it does not take too long).
- Hide logits scale/offset in model metadata.

#### Script
```bash
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s $DEVICE -m SM8750 --prompt "What is 1+1?" --temperature 0 --model_mode kv --max_seq_len 1024 --ptq 16a8w --decoder_model qwen2_5 --eval_perplexity --tasks wikitext
```

### Test plan
```bash
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleLLMScript.test_static_qwen2_5 --model SM8650 --build_folder build-android/ --executorch_root . -s $DEVICE
```

Author: @shewu-quic, @winskuo-quic
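As background for the wikitext numbers this commit reports at each phase, here is a minimal, generic sketch of how perplexity is typically derived from a causal LM's token-level loss. This is illustration only, not code from this PR, which presumably goes through lm-eval's simple_eval path:

```python
# Generic illustration: perplexity = exp(mean negative log-likelihood).
import torch
import torch.nn.functional as F


def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> float:
    # logits: [seq_len, vocab_size]; targets: [seq_len] next-token ids
    nll = F.cross_entropy(logits, targets, reduction="mean")
    return torch.exp(nll).item()
```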
1 parent 6485e4f commit c1dba0f

17 files changed (+900, -295 lines)

backends/qualcomm/tests/test_qnn_delegate.py

Lines changed: 15 additions & 13 deletions
```diff
@@ -4313,7 +4313,7 @@ def test_llama_stories_110m(self):
         if not self.compile_only and not self.enable_x86_64:
             self.assertGreaterEqual(msg["inference_speed"], 220)  # Lanai
 
-    def test_qwen2_5(self):
+    def test_static_qwen2_5(self):
         if not self.required_envs():
             self.skipTest("missing required envs")
 
@@ -4338,11 +4338,14 @@ def test_qwen2_5(self):
             "--decoder_model",
             "qwen2_5",
             "--model_mode",
-            "hybrid",
-            "--prefill_ar_len",
-            "32",
+            "kv",
             "--max_seq_len",
-            "128",
+            "1024",
+            "--eval_perplexity",
+            "--tasks",
+            "wikitext",
+            "--limit",
+            "1",
         ]
         if self.compile_only:
             cmds.extend(["--compile_only"])
@@ -4355,8 +4358,6 @@ def test_qwen2_5(self):
         if self.pre_gen_pte:
             cmds.extend(["--pre_gen_pte", self.pre_gen_pte])
 
-        # Accuracy is bad for now. Just check user's prompt is returned.
-        golden_start_with = "My favourite condiment is "
         p = subprocess.Popen(cmds, stdout=subprocess.DEVNULL)
         with Listener((self.ip, self.port)) as listener:
             conn = listener.accept()
@@ -4365,12 +4366,13 @@ def test_qwen2_5(self):
             if "Error" in msg:
                 self.fail(msg["Error"])
             else:
-                model_out = msg["result"][0]
-                self.assertTrue(
-                    model_out.startswith(golden_start_with),
-                    f"Expected Output: {golden_start_with}. Actual Output: {model_out}",
-                )
-                self.assertGreaterEqual(msg["inference_speed"], 95)  # Lanai
+                inference_speed_ref = {"SM8650": 110, "SM8750": 130}
+                self.assertLessEqual(msg["wiki_ppl"], 25)
+                self.assertLessEqual(msg["pte_size"], 800000000)  # 800mb
+                if self.model in inference_speed_ref:
+                    self.assertGreaterEqual(
+                        msg["inference_speed"], inference_speed_ref[self.model]
+                    )
 
 
 class TestExampleOssScript(TestQNN):
```
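To make the new checks concrete, below is a hypothetical result payload of the kind the updated `test_static_qwen2_5` consumes. The key names come from the diff above; the values and the `model` variable are invented for illustration:

```python
# Hypothetical payload sent back by the on-device runner (values invented);
# the test bounds wiki_ppl and pte_size and enforces a per-SoC speed floor.
msg = {
    "wiki_ppl": 18.4,          # wikitext perplexity measured on device
    "pte_size": 650_000_000,   # exported .pte size in bytes (budget: 800 MB)
    "inference_speed": 135.0,  # decode tokens per second
}

inference_speed_ref = {"SM8650": 110, "SM8750": 130}
model = "SM8750"  # stands in for the test's --model argument
assert msg["wiki_ppl"] <= 25
assert msg["pte_size"] <= 800_000_000
if model in inference_speed_ref:
    assert msg["inference_speed"] >= inference_speed_ref[model]
```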

examples/qualcomm/oss_scripts/llama/README.md

Lines changed: 28 additions & 0 deletions
````diff
@@ -114,11 +114,14 @@ We have two distinct mechanisms for updating the key-value (KV) cache, which can
 </table>
 
 ### Additional Configs when running the script
+
+#### Compile Only
 If you would like to compile the model only, we have provided the flag `--compile_only`. Taking LLAMA3.2 as an example:
 ```bash
 python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1" --compile_only
 ```
 
+#### Pre Generated PTE
 On the other hand, if you already have a pre-compiled .pte model, you can perform inference by providing the flag `--pre_gen_pte` and specifying the folder that contains the .pte model. Taking LLAMA3.2 as an example:
 ```bash
 python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --ptq 16a4w --checkpoint consolidated.00.pth --params params.json --tokenizer_model tokenizer.model --llama_model llama3_2 --model_mode hybrid --prefill_ar_len 32 --max_seq_len 128 --prompt "what is 1+1" --pre_gen_pte ${FOLDER_TO_PRE_GEN_PTE}
@@ -149,3 +152,28 @@ python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL
 
 You can enable MaskedSoftmax feature by providing the flag `--enable_masked_softmax`. It is designed to optimize the LLMs accuracy and performance executed on HTP backend. MaskedSoftmax is used to replace the Softmax(Add(In, Mask)) structure in attention block in LLMs during backend optimization. For more details, please refer to QNN documents.
 Note that it is only supported starting from QNN 2.35.
+
+#### Perplexity Evaluation
+This script supports perplexity evaluation and can assess perplexity scores across 3 phases: prepare_pt2e (CPU FP), convert_pt2e (CPU QDQ), and QNN on device.
+
+To evaluate perplexity across all 3 phases, provide the `--eval_perplexity` flag and specify the evaluation task. Note that when this flag is provided, `--prompt ${PROMPT}` is ignored.
+
+For example, using the Qwen model and 1 wikitext sample as the evaluation task, users can assess the perplexity score of all 3 phases in a single run with the following configuration:
+```bash
+python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --prompt "What is 1+1?" --temperature 0 --model_mode kv --max_seq_len 1024 --ptq 16a8w --decoder_model qwen2_5 --eval_perplexity --tasks wikitext --limit 1
+```
+
+In the example above, 1 wikitext sample is used to evaluate all 3 phases. However, a user may want to use one sample for quantization calibration and multiple samples for perplexity evaluation. In that case, split the process into two runs: in the 1st run, compile the model using one sample; in the 2nd run, provide a different configuration for QNN device execution.
+Example:
+```bash
+# 1st run to compile with --limit 1
+python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --prompt "What is 1+1?" --temperature 0 --model_mode kv --max_seq_len 1024 --ptq 16a8w --decoder_model qwen2_5 --eval_perplexity --tasks wikitext --limit 1 --compile_only
+```
+```bash
+# 2nd run to perform QNN device execution with --limit 3
+python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --prompt "What is 1+1?" --temperature 0 --model_mode kv --max_seq_len 1024 --ptq 16a8w --decoder_model qwen2_5 --eval_perplexity --tasks wikitext --limit 3 --pre_gen_pte ${PATH_TO_ARTIFACT_IN_1ST_RUN} --quant_attrs_path ${PATH_TO_ARTIFACT_IN_1ST_RUN}/kv_llama_qnn_quant_attrs.json
+```
+
+#### Tasks quantization calibration
+If `--tasks ${TASK}` is not provided, the program will use `--prompt ${PROMPT}` as the dataset for quantization calibration.
+Regardless of whether `--eval_perplexity` is provided, as long as `--tasks ${TASK}` is specified, the specified tasks will be used for model quantization calibration instead of the prompt.
````
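The calibration rule added in the README's last paragraph, sketched as code; the function name and shape are assumed for illustration and are not taken from `llama.py`:

```python
# Assumed illustration of the calibration-data rule described above:
# --tasks takes precedence over --prompt, independent of --eval_perplexity.
def pick_calibration_data(tasks, task_samples, prompt):
    if tasks:                # e.g. ["wikitext"], optionally bounded by --limit
        return task_samples  # task samples drive quantization calibration
    return [prompt]          # otherwise fall back to the user's prompt
```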
Lines changed: 20 additions & 0 deletions
```diff
@@ -0,0 +1,20 @@
+# Copyright (c) Qualcomm Innovation Center, Inc.
+# All rights reserved
+#
+# This source code is licensed under the BSD-style license found in the
+# LICENSE file in the root directory of this source tree.
+
+HUGGING_FACE_REPO_IDS = {"qwen2_5": "Qwen/Qwen2.5-0.5B"}
+
+EVAL_MODE = {
+    "kv": 0,
+    "hybrid": 1,
+    "lookahead": 2,
+}
+
+DECODER_MODEL_VERSION = {
+    "stories260k": "llama2",
+    "stories110m": "llama2",
+    "llama3_2": "llama3",
+    "qwen2_5": "qwen2_5",
+}
```
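A hypothetical usage of these new constants; the import path is assumed for illustration since the new file's name is not shown in this view:

```python
# The module path below is a placeholder; point it at wherever this file lives.
from decoder_constants import DECODER_MODEL_VERSION, EVAL_MODE, HUGGING_FACE_REPO_IDS

decoder_model = "qwen2_5"
repo_id = HUGGING_FACE_REPO_IDS[decoder_model]        # "Qwen/Qwen2.5-0.5B"
eval_mode = EVAL_MODE["kv"]                           # 0, matching --model_mode kv
model_version = DECODER_MODEL_VERSION[decoder_model]  # "qwen2_5"
```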
