
Conversation

mcr229 (Contributor) commented Aug 8, 2025

Currently, we only leverage KleidiAI kernels for dynamically quantized activations with 4-bit blockwise weights on linear layers. This has yielded a lot of success in our LLM prefill performance.

However, KleidiAI kernels have also been integrated into XNNPACK for other schemes, specifically 4-bit channelwise weights and 8-bit channelwise weights. We should attempt to use these kernels for those linear schemes as well. This should benefit some of our example models, such as:

  • ViT
  • MobileBert
  • W2L
  • Emformer

And, in general, any other model that can use 8-bit channelwise quantization. (We don't support 4-bit channelwise quantization at the moment.) A minimal sketch of the quantization flow that targets these kernels follows.
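For reference, here is a minimal, hypothetical sketch of the PT2E flow that produces dynamically quantized, 8-bit channelwise-weight linears and lowers them to XNNPACK. The toy model, shapes, and names are placeholders, and the exact API surface (quantizer location, export entry point) is assumed from current ExecuTorch conventions rather than taken from this PR:

```python
# Sketch (assumed APIs, not this PR's test script): dynamic activation
# quantization + 8-bit channelwise weights via PT2E, lowered to XNNPACK.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch.exir import to_edge_transform_and_lower
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e

model = torch.nn.Sequential(torch.nn.Linear(768, 3072)).eval()  # stand-in linear
example_inputs = (torch.randn(1, 768),)

# Dynamic activation quantization + symmetric per-channel (channelwise)
# int8 weights -- the QD8/QC8W linear scheme discussed above.
quantizer = XNNPACKQuantizer().set_global(
    get_symmetric_quantization_config(is_per_channel=True, is_dynamic=True)
)

exported = torch.export.export_for_training(model, example_inputs).module()
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # run once so observers see example data
quantized = convert_pt2e(prepared)

# Lower to the XNNPACK backend; the partitioned linears should hit the
# QD8_QC8W GEMMs discussed in this PR.
program = to_edge_transform_and_lower(
    torch.export.export(quantized, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()
```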

Performance

Android S24 (6 threads, 10 runs)

On the Android S24, we see a nice perf uplift from KleidiAI's activation packing and QD8_QC8W GEMM kernels. Specifically, on the ViT model we see a ~8% end-to-end improvement (58.61ms --> 53.6948ms). You can see the difference in GEMM performance in the operator profiling below (a sketch of how such per-operator timings can be pulled follows the bullets).
Consider event 834, a fully connected layer:

  • Without Kleidi we do QD8 (no activation packing), and the p50 timing is around 0.75ms.
  • With Kleidi we do QP8 (activation packing), and the p50 timing is around 0.6048ms. This is a ~20% uplift on GEMMs!
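Per-operator p50 timings like these can be pulled from a profiled run with the ExecuTorch devtools Inspector. The sketch below is an assumption about tooling (this PR doesn't state how the profiles were collected), and the file paths are placeholders:

```python
# Hypothetical sketch: reading per-operator stats (e.g. the fully
# connected layer reported as event 834) from an ETDump produced by a
# profiled run. Paths are placeholders.
from executorch.devtools import Inspector

inspector = Inspector(etdump_path="etdump.etdp", etrecord="etrecord.bin")
inspector.print_data_tabular()  # per-event stats table, including p50 latency

# Or walk the events programmatically:
for block in inspector.event_blocks:
    for event in block.events:
        if event.perf_data is not None:
            print(event.name, event.perf_data.p50)
```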

Profiles: (attachments omitted)

MacBook (6 threads, 10 runs)

On the MacBook, we see a different story. With KleidiAI, perf dips (49.32ms --> 56.53ms), which is around a ~14% regression.

Let's take a look at the fully connected layers again, specifically event 834:

  • Without Kleidi the p50 timing is 0.4935ms
  • With Kleidi the p50 timing is 0.648ms

This is a ~24% dip in GEMM performance! (The arithmetic behind both percentages is sketched below.)
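Since the "uplift" and "dip" figures use slightly different conventions, here is a small sketch that reproduces the quoted numbers (the conventions are inferred from the numbers, not stated in the original profiling output):

```python
# Worked arithmetic behind the quoted GEMM percentages (assumed convention):
# uplift is time saved relative to the baseline; dip is throughput lost
# relative to the slower time.
qd8_ms, qp8_ms = 0.75, 0.6048            # S24 event 834 p50, without/with Kleidi
print((qd8_ms - qp8_ms) / qd8_ms)        # ~0.194 -> the "~20% uplift"

no_kleidi_ms, kleidi_ms = 0.4935, 0.648  # MacBook event 834 p50
print(1 - no_kleidi_ms / kleidi_ms)      # ~0.238 -> the "~24% dip"
```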

Profiles: (attachments omitted)

[ghstack-poisoned]

mcr229 requested a review from digantdesai as a code owner August 8, 2025 18:54
pytorch-bot commented Aug 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13232

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 1c33efb with merge base a84b3c9:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Aug 8, 2025
github-actions bot commented Aug 8, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

mcr229 added 3 commits August 8, 2025 12:27
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
mcr229 added 3 commits August 11, 2025 16:49
[ghstack-poisoned]
[ghstack-poisoned]
[ghstack-poisoned]
digantdesai (Contributor) left a comment


Sounds good. Perf numbers? IIRC you said this didn't result in perf uplift? Stamping if I am remembering it wrong.

mcr229 (Contributor, Author) commented Aug 12, 2025

> Sounds good. Perf numbers? IIRC you said this didn't result in perf uplift? Stamping if I am remembering it wrong.

Collecting them now. I realized I was running with debug mode on, so the perf numbers weren't representative.

meta-cla bot commented Aug 16, 2025

Hi @mcr229!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

github-actions bot commented
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

github-actions bot added the stale label Oct 16, 2025