
Commit 194cabb

digantdesai authored and facebook-github-bot committed
Always use two XNNPACK Partitioners (#5573)
Summary: Pull Request resolved: #5573

This changes the default behavior. It helps prefill by ~20% and hurts decode by ~7%. As a next step, I will dig further into the decode perf regression, and see whether there is anything more to gain on prefill by tuning the xnnpack thread dispatcher for gemm, gemv, mul, add, sigmoid, and sub.

**On my local (unreliable) S23:**

* Vanilla:

```
dm1q:/data/local/tmp/llama $ ./llama_main_release \
    --model_path ./llama_gs32_vanilla.pte \
    --tokenizer_path ./tokenizer.bin \
    --seq_len=128 \
    --prompt="${prompt}"
[...]
I 00:00:22.188618 executorch:stats.h:84] Prompt Tokens: 44 Generated Tokens: 83
I 00:00:22.188621 executorch:stats.h:90] Model Load Time: 12.922000 (seconds)
I 00:00:22.188624 executorch:stats.h:100] Total inference time: 9.252000 (seconds) Rate: 8.971033 (tokens/second)
I 00:00:22.188627 executorch:stats.h:108] Prompt evaluation: 1.740000 (seconds) Rate: 25.287356 (tokens/second)
I 00:00:22.188630 executorch:stats.h:119] Generated 83 tokens: 7.512000 (seconds) Rate: 11.048988 (tokens/second)
I 00:00:22.188632 executorch:stats.h:127] Time to first generated token: 1.740000 (seconds)
I 00:00:22.188634 executorch:stats.h:134] Sampling time over 127 tokens: 0.015000 (seconds)
[...]
```

* Two partitioners (2part):

```
dm1q:/data/local/tmp/llama $ ./llama_main_release \
    --model_path ./llama_gs32_2part.pte \  # New PTE
    --tokenizer_path ./tokenizer.bin \
    --seq_len=128 \
    --prompt="${prompt}"
[...]
I 00:00:22.205058 executorch:stats.h:84] Prompt Tokens: 44 Generated Tokens: 83
I 00:00:22.205061 executorch:stats.h:90] Model Load Time: 12.876000 (seconds)
I 00:00:22.205063 executorch:stats.h:100] Total inference time: 9.323000 (seconds) Rate: 8.902714 (tokens/second)
I 00:00:22.205067 executorch:stats.h:108] Prompt evaluation: 1.549000 (seconds) Rate: 28.405423 (tokens/second)
I 00:00:22.205070 executorch:stats.h:119] Generated 83 tokens: 7.774000 (seconds) Rate: 10.676614 (tokens/second)
I 00:00:22.205073 executorch:stats.h:127] Time to first generated token: 1.549000 (seconds)
I 00:00:22.205075 executorch:stats.h:134] Sampling time over 127 tokens: 0.029000 (seconds)
[...]
```

**Similar results on AiBench OnePlus12:**

* Vanilla, AiBench links: [gs=32](https://www.internalfb.com/intern/aibench/details/114258284562772), [gs=256](https://www.internalfb.com/intern/aibench/details/438103192423336)

```
# gs=32
I 00:00:21.792659 executorch:stats.h:84] Prompt Tokens: 5 Generated Tokens: 118
I 00:00:21.792721 executorch:stats.h:90] Model Load Time: 11.666000 (seconds)
I 00:00:21.792754 executorch:stats.h:100] Total inference time: 10.109000 (seconds) Rate: 11.672767 (tokens/second)
I 00:00:21.792778 executorch:stats.h:108] Prompt evaluation: 0.365000 (seconds) Rate: 13.698630 (tokens/second)
I 00:00:21.792799 executorch:stats.h:119] Generated 118 tokens: 9.744000 (seconds) Rate: 12.110016 (tokens/second)
I 00:00:21.792818 executorch:stats.h:127] Time to first generated token: 0.365000 (seconds)
I 00:00:21.792837 executorch:stats.h:134] Sampling time over 123 tokens: 0.008000 (seconds)
```

* Two partitioners, AiBench links: [gs=32](https://www.internalfb.com/intern/aibench/details/852029802754424), [gs=256](https://www.internalfb.com/intern/aibench/details/491722732991273)

```
# gs=32
I 00:00:22.584271 executorch:stats.h:84] Prompt Tokens: 5 Generated Tokens: 118
I 00:00:22.584336 executorch:stats.h:90] Model Load Time: 11.610000 (seconds)
I 00:00:22.584367 executorch:stats.h:100] Total inference time: 10.960000 (seconds) Rate: 10.766423 (tokens/second)
I 00:00:22.584389 executorch:stats.h:108] Prompt evaluation: 0.286000 (seconds) Rate: 17.482517 (tokens/second)
I 00:00:22.584409 executorch:stats.h:119] Generated 118 tokens: 10.674000 (seconds) Rate: 11.054900 (tokens/second)
I 00:00:22.584428 executorch:stats.h:127] Time to first generated token: 0.286000 (seconds)
I 00:00:22.584446 executorch:stats.h:134] Sampling time over 123 tokens: 0.013000 (seconds)
```

Differential Revision: D63271101
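The comment in the diff says "Order matters here, dynamic quantization should be applied first." The intuition can be sketched with hypothetical stand-in partitioners (not the real executorch API): each partitioner claims graph nodes in turn, and a node already claimed is unavailable to later partitioners, so the dynamic-quant-only partitioner must run before the greedy one to own the dq-linear nodes.

```python
# Minimal sketch with hypothetical stand-ins (NOT the real executorch
# partitioner API) of why partitioner order matters: partitioners claim
# nodes in registration order, first claim wins.

class DynamicQuantOnlyPartitioner:
    """Claims only dynamically quantized linear nodes."""

    def claim(self, nodes, taken):
        return {n for n in nodes if n.startswith("dq_linear") and n not in taken}


class GreedyPartitioner:
    """Claims anything it supports that is still unclaimed."""

    SUPPORTED = ("dq_linear", "mul", "add", "sigmoid")

    def claim(self, nodes, taken):
        return {
            n for n in nodes
            if n.split(":")[0] in self.SUPPORTED and n not in taken
        }


def run(partitioners, nodes):
    taken, plan = set(), []
    for p in partitioners:
        claimed = p.claim(nodes, taken)
        taken |= claimed
        plan.append((p.__class__.__name__, sorted(claimed)))
    return plan


# A toy graph: one dq-linear, two greedy-eligible ops, one unsupported op.
graph = ["dq_linear:0", "mul:1", "add:2", "softmax:3"]
# Dynamic-quant partitioner first, as in this change:
plan = run([DynamicQuantOnlyPartitioner(), GreedyPartitioner()], graph)
```

With this ordering, the dq-linear node goes to the dynamic-quant partition, the greedy partitioner picks up `mul` and `add`, and the unsupported `softmax` is left to the default backend.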
1 parent 72245c3 commit 194cabb

File tree

2 files changed: +35 −8 lines changed


examples/models/llama2/export_llama_lib.py

Lines changed: 17 additions & 2 deletions
```diff
@@ -297,7 +297,12 @@ def build_args_parser() -> argparse.ArgumentParser:

     parser.add_argument("-2", "--fairseq2", action="store_true")
     parser.add_argument("-v", "--verbose", action="store_true")
-    parser.add_argument("-X", "--xnnpack", action="store_true")
+    parser.add_argument(
+        "-X",
+        "--xnnpack",
+        action="store_true",
+        help="Delegate to xnnpack backend, by default uses two XNNPACK partitioners",
+    )
     parser.add_argument("-V", "--vulkan", action="store_true")
     parser.add_argument("--mps", action="store_true")
     parser.add_argument("--coreml", action="store_true")
@@ -529,7 +534,13 @@ def _export_llama(modelname, args) -> LLMEdgeManager:  # noqa: C901
         modelname = f"xnnpack_dq_{modelname}"

     if args.xnnpack:
-        partitioners.append(get_xnnpack_partitioner())
+        # Order matters here, dynamic quantization should be applied first.
+        partitioners.append(
+            get_xnnpack_partitioner(dynamic_quant_only_partitioner=True)
+        )
+        partitioners.append(
+            get_xnnpack_partitioner(dynamic_quant_only_partitioner=False)
+        )
         modelname = f"xnnpack_{modelname}"

     if args.vulkan:
@@ -576,6 +587,10 @@ def _export_llama(modelname, args) -> LLMEdgeManager:  # noqa: C901
         shares=args.num_sharding,
     )

+    logging.info("Lowering model using following partitioner(s): ")
+    for partitioner in partitioners:
+        logging.info(f"--> {partitioner.__class__.__name__}")
+
     if args.generate_etrecord:
         if not builder_exported_to_edge.edge_manager:
             raise ValueError("Unable to generate etrecord due to missing edge manager.")
```
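The export-side wiring above can be exercised in isolation. Below is a sketch with a stubbed `get_xnnpack_partitioner` and stub partitioner classes (the real ones require an executorch install): the flag now appends two partitioners in the documented order, and the new logging loop reports which ones will run.

```python
# Sketch of the new export-time wiring. The two classes and the factory are
# stubs standing in for the real executorch implementations.
import logging


class XnnpackDynamicallyQuantizedPartitioner:
    pass


class XnnpackPartitioner:
    pass


def get_xnnpack_partitioner(dynamic_quant_only_partitioner: bool = True):
    # Stub mirroring the new partitioner_lib.py behavior.
    if dynamic_quant_only_partitioner:
        return XnnpackDynamicallyQuantizedPartitioner()
    return XnnpackPartitioner()


partitioners = []
xnnpack = True  # stands in for args.xnnpack
if xnnpack:
    # Order matters here, dynamic quantization should be applied first.
    partitioners.append(get_xnnpack_partitioner(dynamic_quant_only_partitioner=True))
    partitioners.append(get_xnnpack_partitioner(dynamic_quant_only_partitioner=False))

# Same reporting loop as the diff adds to _export_llama.
logging.info("Lowering model using following partitioner(s): ")
names = [p.__class__.__name__ for p in partitioners]
for name in names:
    logging.info(f"--> {name}")
```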

extension/llm/export/partitioner_lib.py

Lines changed: 18 additions & 6 deletions
```diff
@@ -7,16 +7,28 @@
 from typing import Optional


-def get_xnnpack_partitioner():
+def get_xnnpack_partitioner(dynamic_quant_only_partitioner: bool = True):
+    """
+    Returns the XNNPACK partitioner.
+
+    @arg dynamic_quant_only_partitioner:
+        This is enabled by default to keep BC.
+        If dynamic_quant_only_partitioner is True, then only dynamically quantized
+        linear layers will be partitioned.
+        Else, anything which can be will be partitioned greedily.
+    """
     from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
         XnnpackDynamicallyQuantizedPartitioner,
+        XnnpackPartitioner,
     )

-    # Following changes due to.
-    # 1. We need dynamically quantized partitioner for both pt2e_quantize options
-    # as well as "qmode 8da4w" which is also dynamic quantizes linear layers.
-    # 2. XNNPACK partitioner seems to result in seg fault for non dqlinear ops.
-    return XnnpackDynamicallyQuantizedPartitioner()
+    if dynamic_quant_only_partitioner:
+        # Following changes due to.
+        # 1. We need dynamically quantized partitioner for both pt2e_quantize options
+        # as well as "qmode 8da4w" which is also dynamic quantizes linear layers.
+        # 2. XNNPACK partitioner seems to result in seg fault for non dqlinear ops.
+        return XnnpackDynamicallyQuantizedPartitioner()
+    return XnnpackPartitioner()


 def get_vulkan_partitioner(
```
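The docstring's backward-compatibility point can be shown with a short sketch (stub classes stand in for the real executorch partitioners): because the new `dynamic_quant_only_partitioner` flag defaults to `True`, existing call sites of `get_xnnpack_partitioner()` keep the old dynamic-quant-only behavior, and the greedy partitioner is strictly opt-in.

```python
# BC sketch: the stub classes and factory mirror the new partitioner_lib.py
# signature; they are NOT the real executorch implementations.


class XnnpackDynamicallyQuantizedPartitioner:
    pass


class XnnpackPartitioner:
    pass


def get_xnnpack_partitioner(dynamic_quant_only_partitioner: bool = True):
    if dynamic_quant_only_partitioner:
        return XnnpackDynamicallyQuantizedPartitioner()
    return XnnpackPartitioner()


# Old call site, unchanged: same result as before this diff.
legacy = get_xnnpack_partitioner()

# The greedy partitioner must be requested explicitly.
greedy = get_xnnpack_partitioner(dynamic_quant_only_partitioner=False)
```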
