
@pytorchbot (Collaborator) commented:

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #9531 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/200/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/200/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/200/orig
@diff-train-skip-merge

Pull Request resolved: #9531

## Context


Currently, both the texture and the buffer variants of the `q_8w_linear` shader use the same global and local work group sizes.

Specifically, the global work group is set to `{out.numel(), 1, 1}` and the local work group is set to `{64, 1, 1}`.

However, I believe this results in very poor memory re-use for the texture shader. In this configuration:

* Within a work group, each invocation requests a different row of A, so 64 rows of A are requested in total
* All invocations within a work group request the same row of B
* One work group therefore loads 65 unique rows (64 from A, 1 from B)

Compare this to a local work group size of `{8, 8, 1}`

* Across the work group, 8 rows are loaded from A and 8 rows are loaded from B
* One work group loads only 16 unique rows in total (8 from A, 8 from B)

Evidently, the latter configuration has better memory re-use, since far fewer unique rows are loaded for the same number of invocations.
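Put in concrete terms: both settings use 64 invocations per work group, so the unique-row traffic drops from 65 rows per work group (about 1.02 rows per invocation) to 16 rows per work group (0.25 rows per invocation), roughly a 4x reduction in unique rows fetched for the same amount of work.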

## Changes

Modify the `q_8w_linear` shader to use a `{8, 8, 1}` local work group if possible. If `M` is small, then instead use `{4, 16, 1}` or `{2, 32, 1}` to reduce the number of inactive invocations.
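As a rough illustration, the size-selection heuristic could look like the sketch below. This is a hypothetical standalone helper, not the actual ExecuTorch code, and it assumes that the x axis of the work group walks the `M` (output rows) dimension:

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch of the local work group selection described above;
// not the actual ExecuTorch implementation. Assumes 64 invocations per
// work group and that the x axis covers the M (output rows) dimension.
std::array<uint32_t, 3> pick_q8w_linear_local_wg(uint32_t M) {
  if (M >= 8) {
    return {8, 8, 1};   // default square tile: 8 rows of A + 8 rows of B
  }
  if (M >= 4) {
    return {4, 16, 1};  // small M: shrink the M axis, widen the other axis
  }
  return {2, 32, 1};    // very small M: keep most invocations active
}
```

Note that every variant keeps the product of the extents at 64 invocations, matching the original `{64, 1, 1}` local size, so only the shape of the tile changes, not the amount of work per group.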
ghstack-source-id: 274260277
@exported-using-ghexport

Differential Revision: [D71706489](https://our.internmc.facebook.com/intern/diff/D71706489/)
@pytorchbot requested a review from SS-JIA as a code owner on March 26, 2025 22:17
@pytorch-bot (bot) commented on Mar 26, 2025:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9664

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Pending

As of commit 58036f8 with merge base 7159650:

NEW FAILURE: one new job failed (details at the HUD link above).

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Mar 26, 2025.
@github-actions bot commented:

This PR needs a `release notes:` label.

If your changes are user facing and intended to be a part of release notes, please use a label starting with `release notes:`.

If not, please add the `topic: not user facing` label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@kirklandsign merged commit 9fc101f into main on Mar 26, 2025; 80 of 82 checks passed.
@kirklandsign deleted the gh/SS-JIA/200/orig branch on March 26, 2025 23:21.