Update on "[Executorch][optimized] Fix op_div impl to use portable for fallback path"
Earlier we just copy-pasted from the portable implementation. This diff refactors portable so it can be reused from the optimized lib. As a result, we get all the size-reduction benefits from the build and size optimizations landed in portable.
Differential Revision: [D65606665](https://our.internmc.facebook.com/intern/diff/D65606665/)
[ghstack-poisoned]
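For context, the pattern described above — an optimized kernel that delegates to a shared portable reference implementation instead of a copy-pasted duplicate — can be sketched in plain Python. All names here (`div_optimized`, `div_portable`, and the fast-path precondition) are hypothetical stand-ins; ExecuTorch's actual kernels are C++.

```python
def div_portable(a, b):
    # Reference implementation: element-wise division with no layout
    # or dtype assumptions (stand-in for the portable kernel that the
    # diff refactors so the optimized lib can call it).
    return [x / y for x, y in zip(a, b)]


def div_optimized(a, b):
    # Fast path only when the (hypothetical) preconditions hold; here,
    # equal-length all-float inputs stand in for "contiguous, same
    # dtype, same shape".
    if len(a) == len(b) and all(isinstance(x, float) for x in a + b):
        # Pretend-vectorized path.
        return [x / y for x, y in zip(a, b)]
    # Fallback path: reuse the shared portable implementation, so any
    # size optimization landed in portable benefits both libraries.
    return div_portable(a, b)
```

The point of the refactor is that the fallback branch calls the same function the portable library ships, rather than a second copy of its body.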
"quantized_conv.per_tensor(Tensor input, Tensor weight, Tensor bias, int[] stride, SymInt[] padding, int[] dilation, int groups, int input_zero_point, int weight_zero_point, float bias_scale, float out_scale, int out_zero_point, int out_multiplier, int out_shift, bool channel_last=False) -> (Tensor Z)"
71
+
)
72
+
lib.define(
73
+
"quantized_conv.per_tensor_out(Tensor input, Tensor weight, Tensor bias, int[] stride, SymInt[] padding, int[] dilation, int groups, int input_zero_point, int weight_zero_point, float bias_scale, float out_scale, int out_zero_point, int out_multiplier, int out_shift, bool channel_last=False, *, Tensor(a!) out) -> Tensor(a!)"
74
+
)
69
75
70
76
lib.define(
71
77
"quantized_matmul(Tensor X, int X_zero_point, Tensor Y, int Y_zero_point, Tensor? bias, int out_multiplier, int out_shift, int out_zero_point, bool transposed=False) -> (Tensor Z)"