[Executorch][llm] Make mask tensor float only for sdpa #12131
Conversation
Now that we support quantized SDPA, the query tensor can be quantized while the attention mask must be float (the only type allowed). So this check no longer makes sense. Differential Revision: [D77516821](https://our.internmc.facebook.com/intern/diff/D77516821/) [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12131
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Cancelled Job as of commit 15b254c with merge base cf0bfd2.
NEW FAILURES - The following jobs have failed:
CANCELLED JOB - The following job was cancelled. Please retry:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D77516821
This PR needs a `release notes:` label.
Merged commit 24bf2d3 into gh/kimishpatel/195/base
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #12131 by @kimishpatel
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/kimishpatel/195/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/kimishpatel/195/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/kimishpatel/194/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/kimishpatel/195/orig
@diff-train-skip-merge
Co-authored-by: Kimish Patel <[email protected]>
Stack from ghstack (oldest at bottom):
Now that we support quantized SDPA, the query tensor can be quantized while the attention mask remains float (the only type allowed).
So this check no longer makes sense.
Differential Revision: D77516821
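
For illustration, here is a minimal sketch of the kind of dtype check this change relaxes. The types and function names below are hypothetical, not the actual ExecuTorch source: the point is that the old check tied the mask dtype to the query dtype, while the new check only requires the mask to be float, so a quantized (int8/Char) query with a float mask now passes.

```cpp
// Hypothetical sketch of the relaxed SDPA mask-dtype check (not the
// actual ExecuTorch code; names are illustrative only).
#include <cassert>

enum class ScalarType { Float, Char /* int8, used by quantized SDPA */ };

struct Tensor {
  ScalarType dtype;
};

// Before: the attention mask had to share the query's dtype, which
// rejected the quantized-query + float-mask combination.
bool check_mask_before(const Tensor& query, const Tensor& mask) {
  return mask.dtype == query.dtype;
}

// After: the mask must simply be float, regardless of whether the
// query is quantized (Char) or float.
bool check_mask_after(const Tensor& /*query*/, const Tensor& mask) {
  return mask.dtype == ScalarType::Float;
}

int main() {
  Tensor quantized_query{ScalarType::Char};
  Tensor float_mask{ScalarType::Float};
  // Quantized query + float mask: rejected by the old check,
  // accepted by the new one.
  assert(!check_mask_before(quantized_query, float_mask));
  assert(check_mask_after(quantized_query, float_mask));
  return 0;
}
```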