Conversation

@pytorchbot
Collaborator

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #15577 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/348/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/348/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/347/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/348/orig
Differential Revision: D86226135
@diff-train-skip-merge

ssjia added 2 commits November 4, 2025 11:52
Title says it all! Use the new combos codegen API, which makes it easier to express the generation of storage type combinations.

Differential Revision: [D86226138](https://our.internmc.facebook.com/intern/diff/D86226138/)

ghstack-source-id: 320850476
Pull Request resolved: #15576
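The idea behind a "combos" codegen API can be illustrated with a minimal, hypothetical sketch (this is not the actual ExecuTorch codegen API, which is not shown here): a dict of parameter lists is expanded into every shader variant combination, rather than listing each variant by hand.

```python
from itertools import product

def generate_combos(param_space):
    """Expand a dict of parameter lists into all combinations.

    A hypothetical stand-in for what a 'combos' codegen API does
    when generating storage type / dtype shader variants.
    """
    keys = list(param_space)
    return [dict(zip(keys, values)) for values in product(*param_space.values())]

# Hypothetical variant space: storage type and dtype for a shader.
combos = generate_combos({
    "STORAGE": ["texture3d", "buffer"],
    "DTYPE": ["float", "half"],
})
```

With two storage types and two dtypes this yields four variants, one per (STORAGE, DTYPE) pair.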
Title says it all!

The purpose of this diff is twofold:

1. Test SDPA both as a fused operator (sdpa_with_kv_cache) and as the decomposed update_cache and custom_sdpa ops, in order to detect possible regressions in support for older models.
2. Make it easier to debug issues with SDPA by exposing a mode that tests only the attention weight computation.
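The "attention weights only" mode mentioned above isolates the first half of SDPA. As a plain NumPy sketch (hypothetical; not the actual test harness), the weights are softmax(QK^T / sqrt(d)), computed before any multiplication by V:

```python
import numpy as np

def attention_weights(q, k):
    """Compute only the attention weight matrix softmax(Q K^T / sqrt(d)).

    Stops before multiplying by V — the part a weights-only
    debug mode would isolate.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))   # 4 query positions, head dim 8
k = rng.standard_normal((6, 8))   # 6 key positions (e.g. cache length)
w = attention_weights(q, k)
```

Comparing this intermediate against a reference implementation narrows a mismatch down to either the score/softmax stage or the value-weighting stage.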
Title says it all! Update the SDPA op to use buffer storage for cache tensors if the projected tensors use buffer storage.

Also included is a small change to ensure that cache tensors use the same storage type as input tensors.
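The storage-matching rule described above can be sketched as a tiny hypothetical helper (the real ExecuTorch Vulkan storage-type enums and selection logic are not shown in this PR excerpt): cache tensors simply adopt the storage type of the projected input tensors.

```python
# Hypothetical storage-type tags; the actual backend uses its own enums.
BUFFER = "buffer"
TEXTURE3D = "texture3d"

def cache_storage_for(projected_storage: str) -> str:
    """Cache tensors use the same storage type as the projected tensors.

    Sketch of the rule: buffer-backed projections get buffer-backed
    caches; otherwise the cache stays texture-backed.
    """
    return BUFFER if projected_storage == BUFFER else TEXTURE3D
```

Keeping the storage types aligned avoids implicit copies or conversions between the projection outputs and the KV cache.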

Differential Revision: [D86226135](https://our.internmc.facebook.com/intern/diff/D86226135/)

ghstack-source-id: 320850473
Pull Request resolved: #15577
@pytorchbot pytorchbot requested a review from SS-JIA as a code owner November 5, 2025 16:34
@pytorch-bot

pytorch-bot bot commented Nov 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15598

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 31 Pending

As of commit cf49a75 with merge base bde6b11:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 5, 2025
Base automatically changed from gh/SS-JIA/347/orig to main November 5, 2025 17:00
@SS-JIA SS-JIA merged commit 67af512 into main Nov 5, 2025
139 of 152 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/348/orig branch November 5, 2025 17:01
abhinaykukkadapu pushed a commit to abhinaykukkadapu/executorch that referenced this pull request Nov 6, 2025
…ytorch#15598)



3 participants