
Conversation

@pytorchbot (Collaborator) commented Jul 14, 2025

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #12201 by @ahmtox
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/ahmtox/29/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/ahmtox/29/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/ahmtox/28/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/ahmtox/29/orig
@diff-train-skip-merge

cc @SS-JIA @manuelcandales @cbilgin

morelos added 3 commits July 13, 2025 21:36
Pull Request resolved: #12199

# Context

A few operators have recently been created, namely:
- quantize_per_tensor
- quantize_per_token
- dequantize_per_tensor
- dequantize_per_token
- choose_qparams.tensor
- choose_qparams_per_token_asymmetric

These operators currently have no namespace associated with them. Since we are trying to align with the ATen implementation, which places them in the `quantized_decomposed` namespace, this diff adds that namespace. Furthermore, our operators need to match the inputs of the ATen versions, so we now pass dtypes as well.
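For reference, a minimal sketch of what the aligned, namespaced calls look like on the PyTorch side (the tensor and quantization parameters here are illustrative; the op schemas come from `torch.ao.quantization.fx._decomposed`):

```python
import torch
import torch.ao.quantization.fx._decomposed  # noqa: F401 -- registers the quantized_decomposed ops

x = torch.randn(4, 8)

# The ops live under the quantized_decomposed namespace, and dtype is passed
# explicitly so the call matches the ATen signature:
#   (input, scale, zero_point, quant_min, quant_max, dtype)
q = torch.ops.quantized_decomposed.quantize_per_tensor(
    x, 0.1, 0, -128, 127, torch.int8
)
dq = torch.ops.quantized_decomposed.dequantize_per_tensor(
    q, 0.1, 0, -128, 127, torch.int8
)
```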

# Changes

The primary change is adding the `quantized_decomposed` namespace to all of the operators named above. We also change the testing framework to pass the dummy dtypes expected by the ATen implementation. Finally, we change the `choose_qparams` logic to properly pass `eps`, since it is a meaningful parameter that cannot simply be defaulted, even though the existing `op_quantize` CPU reference in ExecuTorch does not make distinct use of it (see the sketch below).
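A hedged illustration of why `eps` matters: in the ATen reference, the computed scale is floored by `eps` so it can never collapse to zero for degenerate inputs (the input and quantization range below are illustrative):

```python
import torch
import torch.ao.quantization.fx._decomposed  # noqa: F401 -- registers the quantized_decomposed ops

x = torch.zeros(4, 8)  # degenerate input: min_val == max_val == 0

# choose_qparams.tensor takes eps explicitly; conceptually:
#   scale = max((max_val - min_val) / (quant_max - quant_min), eps)
eps = torch.finfo(torch.float32).eps
scale, zero_point = torch.ops.quantized_decomposed.choose_qparams.tensor(
    x, -128, 127, eps, torch.int8
)
# Without the eps floor, scale would be 0 here and quantization would divide by zero.
```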
ghstack-source-id: 295972783
@exported-using-ghexport

Differential Revision: [D77746144](https://our.internmc.facebook.com/intern/diff/D77746144/)
Pull Request resolved: #12200

# Context

Certain quantization operators need their scales and zero points to be stored as buffers. Since the existing op_registry does not allow specifying the memory or storage layout of individual input parameters, we instead specify that the optimal storage type for these operators is buffer, so that a conversion pass is added which ensures the inputs are buffers as well.

# Changes

This moves the `quantized_decomposed` operators into their own registration, while also specifying that buffer storage is preferred, as sketched below.
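A rough sketch of what a dedicated registration preferring buffer storage could look like in `backends/vulkan/op_registry.py`. The helper names here (`update_features`, `OpFeatures`, `VkStorageType`) are assumptions based on the registry's general style, not verbatim from this diff:

```python
from executorch.backends.vulkan.op_registry import OpFeatures, update_features  # assumed helpers
from executorch.backends.vulkan.serialization.vulkan_graph_schema import VkStorageType  # assumed enum
from executorch.exir.dialects._ops import ops as exir_ops


@update_features(
    [
        exir_ops.edge.quantized_decomposed.quantize_per_tensor.default,
        exir_ops.edge.quantized_decomposed.quantize_per_token.default,
        exir_ops.edge.quantized_decomposed.dequantize_per_tensor.default,
        exir_ops.edge.quantized_decomposed.dequantize_per_token.default,
        exir_ops.edge.quantized_decomposed.choose_qparams.tensor,
        exir_ops.edge.quantized_decomposed.choose_qparams_per_token_asymmetric.default,
    ]
)
def register_quantization_op(features: OpFeatures):
    # Marking buffer as the optimal storage type means a conversion pass will
    # force scale/zero_point inputs into buffers before these ops execute.
    features.buffer_impl = True
    features.optimal_storage = VkStorageType.BUFFER
    return features
```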
ghstack-source-id: 295972779
@exported-using-ghexport

Differential Revision: [D77746131](https://our.internmc.facebook.com/intern/diff/D77746131/)
Pull Request resolved: #12201

# Context

We need this conversion so that certain operators can handle floating-point values that must be 64-bit. This applies predominantly to `choose_qparams.tensor`, which expects a 64-bit output.

# Changes

This simply adds an additional conversion from float64 to Vulkan fp32 (sketched below).
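Conceptually this is a one-line widening of the torch-dtype to Vulkan-dtype mapping. A minimal sketch, with `VkDataType` and `to_vk_datatype` as illustrative stand-ins for the real serialization symbols:

```python
import torch
from enum import Enum


class VkDataType(Enum):
    # Illustrative stand-in for the Vulkan graph's dtype enum.
    FLOAT32 = 0


def to_vk_datatype(dtype: torch.dtype) -> VkDataType:
    # float64 values (e.g. the 64-bit scale produced by choose_qparams.tensor)
    # have no 64-bit float slot in the Vulkan graph, so they are narrowed to fp32.
    if dtype in (torch.float32, torch.float64):
        return VkDataType.FLOAT32
    raise NotImplementedError(f"No Vulkan dtype mapping for {dtype}")
```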
ghstack-source-id: 295972781
@exported-using-ghexport

Differential Revision: [D77746137](https://our.internmc.facebook.com/intern/diff/D77746137/)
@pytorchbot pytorchbot requested a review from SS-JIA as a code owner July 14, 2025 14:58
@pytorch-bot pytorch-bot bot added the `module: vulkan` label (Issues related to the Vulkan delegate and code under backends/vulkan/) Jul 14, 2025
@pytorch-bot pytorch-bot bot commented Jul 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12430

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the `CLA Signed` label (This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.) Jul 14, 2025
Base automatically changed from gh/ahmtox/28/orig to main July 14, 2025 15:14
@SS-JIA SS-JIA merged commit bd48732 into main Jul 14, 2025
94 of 95 checks passed
@SS-JIA SS-JIA deleted the gh/ahmtox/29/orig branch July 14, 2025 15:15