Commit 1c318c6

committed
Update on "Arm backend: Add 16A8W FCNode support with BMM dependency fix"
Add 16A8W quantization support for FCNode operations, with a BMM dependency fix, in the ExecutorTorch Arm backend. This follows the pattern established for linear, mul, sigmoid, tanh, slice, view/transpose, and cat operations, extending int16 support to FCNode operations.

Changes:
- Add INT16 dtype validation support in op_bmm.py
- Add test_addmm_tensor_16a8w_tosa_INT test function
- Enable test_addmm.py in the test targets configuration
- Fix the BMM dependency for FCNode operations

The 16A8W configuration uses 16-bit activations with 8-bit weights, enabling higher precision for activations while maintaining weight efficiency.

Differential Revision: [D80512504](https://our.internmc.facebook.com/intern/diff/D80512504/)

cc digantdesai freddan80 per zingo oscarandersson8218

[ghstack-poisoned]
2 parents d64c479 + 9611fd1 commit 1c318c6
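The INT16 dtype validation the commit adds to op_bmm.py can be sketched as a standalone check. This is a hypothetical illustration of the 16A8W pattern (16-bit activations, 8-bit weights), not the actual ExecutorTorch visitor code; the function name and structure here are assumptions.

```python
import torch

# Hypothetical sketch of the 16A8W dtype check pattern the commit describes:
# BMM activations may now be int8 or int16, while weights stay int8.
SUPPORTED_ACTIVATION_DTYPES = {torch.int8, torch.int16}


def validate_bmm_dtypes(activation: torch.Tensor, weight: torch.Tensor) -> None:
    """Reject dtype combinations outside int8/int16 activations with int8 weights."""
    if activation.dtype not in SUPPORTED_ACTIVATION_DTYPES:
        raise ValueError(f"Unsupported activation dtype: {activation.dtype}")
    if weight.dtype != torch.int8:
        raise ValueError(f"Unsupported weight dtype: {weight.dtype}")


# 16A8W: int16 activations paired with int8 weights pass validation.
validate_bmm_dtypes(
    torch.zeros(2, 3, 4, dtype=torch.int16),
    torch.zeros(2, 4, 5, dtype=torch.int8),
)
```

With only int8 in the supported set (the state before this change), the int16 call above would raise, which is why FCNode's 16A8W path needed the BMM dependency fixed first.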

0 files changed

+0
-0
lines changed

    0 commit comments