[ExecuTorch] Quantized fast Hadamard transform #5284
Conversation
Stack from ghstack (oldest at bottom):

Demonstrate that we can calculate a quantized fast Hadamard transform with integer math only, except for adjusting the scale of the result. (Not sure if there is a reason to actually commit this -- do we have a use case for quantized FHT on CPU?)

Differential Revision: [D60866280](https://our.internmc.facebook.com/intern/diff/D60866280/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments; please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D60866280/)!
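For illustration only, not the kernel added in this PR: a minimal C++ sketch of the idea in the description, assuming symmetric per-tensor quantization with a zero point of 0. The butterflies of an unnormalized Walsh–Hadamard transform are exact in integer arithmetic once the values are widened to int32; the single floating-point step is folding the 1/sqrt(N) normalization into the output scale. The helper name `fht_i32` and the example scale/values are made up for this sketch.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// In-place unnormalized fast Walsh-Hadamard transform over int32
// accumulators; data.size() must be a power of two.
void fht_i32(std::vector<int32_t>& data) {
  const size_t n = data.size();
  assert(n != 0 && (n & (n - 1)) == 0 && "length must be a power of two");
  for (size_t h = 1; h < n; h *= 2) {
    for (size_t i = 0; i < n; i += 2 * h) {
      for (size_t j = i; j < i + h; ++j) {
        const int32_t a = data[j];
        const int32_t b = data[j + h];
        data[j] = a + b;      // exact: no rounding in integer math
        data[j + h] = a - b;
      }
    }
  }
}

int main() {
  // Hypothetical int8-quantized input: x = in_scale * q (zero point 0).
  const float in_scale = 0.05f;
  std::vector<int8_t> q = {10, -3, 7, 0, -25, 4, 1, 9};

  // Widen to int32 so the N-fold growth of the butterflies cannot overflow.
  std::vector<int32_t> acc(q.begin(), q.end());
  fht_i32(acc);

  // The only floating-point step: the normalized transform divides by
  // sqrt(N), so the output scale is in_scale / sqrt(N); the integer
  // values in `acc` are left untouched.
  const float out_scale = in_scale / std::sqrt(static_cast<float>(acc.size()));

  for (int32_t v : acc) {
    (void)(out_scale * v);  // dequantized output value
  }
  return 0;
}
```

Widening int8 inputs to int32 before the transform is a headroom choice: each output is a signed sum of N inputs, so magnitudes grow by at most a factor of N, which stays well within int32 for any practical transform length.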
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5284
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 69451c8 with merge base 6328d41.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D60866280
This pull request has been merged in 327a5b6.
Pull Request resolved: pytorch/executorch#5284 (ghstack-source-id: 242230778)