Summary:
Adjust op_bmm to allow int16 inputs with an int48 output buffer
Note: I am rescaling the int48 accumulator back to the original int16 output dtype. This is obviously dangerous without a properly calibrated quantization parameter, but a calibrated parameter is our base assumption.
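The accumulate-then-rescale flow can be sketched as follows (a minimal numerical illustration, not the actual ExecuTorch/TOSA implementation: NumPy's int64 stands in for the int48 accumulator, and the function name and scale values are hypothetical):

```python
import numpy as np

def bmm_int16_rescale(a, b, in_scale_a, in_scale_b, out_scale):
    # Accumulate the int16 x int16 batched matmul in a wide buffer.
    # TOSA specifies an int48 accumulator; int64 stands in for it here.
    acc = np.matmul(a.astype(np.int64), b.astype(np.int64))
    # Rescale the wide accumulator back to the int16 output dtype.
    # Only safe when out_scale comes from proper calibration.
    scaled = np.round(acc * (in_scale_a * in_scale_b / out_scale))
    return np.clip(scaled, -32768, 32767).astype(np.int16)

# Hypothetical calibrated scales for illustration only.
a = np.array([[[100, -200], [300, 400]]], dtype=np.int16)
b = np.array([[[5, 6], [7, 8]]], dtype=np.int16)
out = bmm_int16_rescale(a, b, in_scale_a=0.01, in_scale_b=0.01,
                        out_scale=0.001)
```

A poorly calibrated `out_scale` makes the final clip saturate and silently destroys precision, which is why the calibration assumption matters.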
Differential Revision: D83627934
reason="missing int16 addmm ops support; fails at TOSA reference model with Unsupported operation type or rank. See: https://github.com/pytorch/executorch/issues/13979"