Update addmm int16 for Ethos-U85 #14714
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14714
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 6e5764c with merge base 53ccfd0.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "release notes: none"
Summary:
Adjust op_bmm to allow int16 types with an int48 output buffer.
Note: I am rescaling back to the original int16 dtype output. This is obviously dangerous if done without a properly calibrated quantization parameter, but this is our base assumption.
Differential Revision: D83627934
Force-pushed from 3691a40 to 56b10c4 (Compare)
Summary:
Adjust op_bmm to allow int16 types with an int48 output buffer.
Note: I am rescaling back to the original int16 dtype output. This is obviously dangerous if done without a properly calibrated quantization parameter, but this is our base assumption.
Reviewed By: digantdesai
Differential Revision: D83627934
Force-pushed from 56b10c4 to cbc8093 (Compare)
Force-pushed from cbc8093 to ce36955 (Compare)
Summary:
Adjust op_bmm to allow int16 types with an int48 output buffer.
Note: I am rescaling outputs back to the original int16 dtype output. This is obviously dangerous if done without a properly calibrated quantization parameter, but this is our base assumption.
bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks
Reviewed By: digantdesai
Differential Revision: D83627934
Force-pushed from ce36955 to 6e5764c (Compare)
Summary:
Adjust op_bmm to allow int16 types with an int48 output buffer.
Note: I am rescaling back to the original int16 dtype output. This is obviously dangerous if done without a properly calibrated quantization parameter, but this is our base assumption.
Differential Revision: D83627934
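To make the rescale step concrete, here is a minimal NumPy sketch of the idea: an int16 matmul accumulated in a type wide enough to hold int48 results, then requantized back to int16. This is not the actual op_bmm/addmm lowering for Ethos-U85; the function name and the scale values are hypothetical stand-ins for the calibrated quantization parameters mentioned above.

```python
# Minimal illustrative sketch, NOT the Ethos-U/TOSA implementation.
# Shows why an int16 matmul needs a wide (int48-capable) accumulator and
# how the result is rescaled back to int16. All scales are hypothetical.
import numpy as np

def int16_bmm_with_rescale(a_q, b_q, in_scale_a, in_scale_b, out_scale):
    # Accumulate in int64 (an int48 result fits inside int64) to avoid
    # overflow: for int16 inputs, |a| * |b| * K can exceed the int32 range.
    acc = np.matmul(a_q.astype(np.int64), b_q.astype(np.int64))

    # Requantize: real value = acc * in_scale_a * in_scale_b, then map it
    # onto the int16 output grid defined by out_scale.
    rescale = (in_scale_a * in_scale_b) / out_scale
    out = np.rint(acc * rescale)

    # Saturate to the int16 range; with badly calibrated scales this is
    # exactly where accuracy would silently be lost.
    return np.clip(out, -32768, 32767).astype(np.int16)

# Hypothetical usage: batched int16 tensors of shape (2, 3, 4) @ (2, 4, 5).
rng = np.random.default_rng(0)
a = rng.integers(-1000, 1000, size=(2, 3, 4), dtype=np.int16)
b = rng.integers(-1000, 1000, size=(2, 4, 5), dtype=np.int16)
y = int16_bmm_with_rescale(a, b, in_scale_a=2**-10, in_scale_b=2**-10, out_scale=2**-6)
```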