2/n Enable 16-bit activations and 8-bit weights in Cadence Quantizer for linear #15901
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15901
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 1 Unrelated Failure as of commit 40d03e1 with merge base d2c011e.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following job failed but was already failing on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull Request Overview
This PR adds support for 16-bit activations with 8-bit weights in the Cadence Quantizer for linear operations, continuing from a previous differential (D84284794). The key changes enable mixed precision quantization where activations use int16 while weights use int8, addressing the limitation where both had to use the same dtype.
- Adds int16 activation + int8 weight support by delegating to generic implementations from Jarvis/min_runtime/operators (a numeric sketch of the mixed-precision math follows this list)
- Introduces new build dependencies to support the int16 templating
- Adds test coverage for int16 activation scenarios
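To make the dtype mix concrete, here is a minimal, self-contained sketch of one output element of a quantized linear layer with int16 activations and int8 weights. The parameter names echo the operator signature visible in the review diffs below (out_multiplier, out_shift, out_zero_point), but the requantization scheme shown is a common fixed-point recipe, not necessarily the exact Cadence kernel.

```cpp
// Hypothetical sketch: one output element of a quantized linear layer with
// int16 activations and int8 weights. Accumulation happens in int64 so the
// widened products cannot overflow; the result is then requantized.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int16_t quantized_linear_one_output(
    const std::vector<int16_t>& x_q, int32_t x_zero_point,  // int16 activations
    const std::vector<int8_t>& w_q, int32_t w_zero_point,   // int8 weights
    int32_t bias_q,
    int64_t out_multiplier, int64_t out_shift, int32_t out_zero_point) {
  int64_t acc = bias_q;
  for (std::size_t i = 0; i < x_q.size(); ++i) {
    acc += static_cast<int64_t>(x_q[i] - x_zero_point) *
           static_cast<int64_t>(w_q[i] - w_zero_point);
  }
  // Requantize: fixed-point multiply, shift right, add the output zero point.
  int64_t scaled = ((acc * out_multiplier) >> out_shift) + out_zero_point;
  // Clamp to the int16 output range.
  scaled = std::clamp<int64_t>(scaled, INT16_MIN, INT16_MAX);
  return static_cast<int16_t>(scaled);
}

int main() {
  std::vector<int16_t> x_q = {1000, -2000, 3000};
  std::vector<int8_t> w_q = {5, -3, 7};
  int16_t y = quantized_linear_one_output(
      x_q, /*x_zero_point=*/0, w_q, /*w_zero_point=*/0, /*bias_q=*/0,
      /*out_multiplier=*/1, /*out_shift=*/8, /*out_zero_point=*/0);
  std::printf("y_q = %d\n", y);  // 32000 >> 8 = 125
}
```

A production kernel would fold the weight zero point into the bias and use a rounding right shift; the point here is only the dtype relationships between activations, weights, and the accumulator.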
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| backends/cadence/hifi/operators/op_quantized_linear_out.cpp | Adds conditional branches to handle int16 activations with int8 weights by calling generic implementations, and removes __ET_UNUSED markers from now-used parameters |
| backends/cadence/hifi/operators/targets.bzl | Adds dependencies on Jarvis min_runtime operators to support int16 templating and removes quantized_linear_out from the main operator list |
| backends/cadence/hifi/operators/tests/test_op_quantized_linear_out.cpp | Adds new test file with int16 activation test case for quantized_linear_out |
Review context:

    using std::string_view;

    class HiFiQuantizedLinearTest : public OperatorTest {
     public:
Copilot AI · Nov 19, 2025
[nitpick] The public: access specifier is redundant as there are no public members declared. Consider removing this line to improve code clarity.
Suggested change: remove the `public:` line.
Review context:

    out_zero_point,
    offset,
    out
    );
Copilot AI · Nov 19, 2025
Missing return statement after handling the int16 activation case. The function will continue to execute the subsequent else if branches even after successfully handling the Short/Char case, potentially leading to incorrect behavior or error messages. Add a return; statement after line 236.
Suggested change:

    );
    return;
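The same observation applies to the per-tensor variant reviewed further down. As a compact illustration of the fall-through being described (hypothetical control flow, not the actual kernel code):

```cpp
// Hypothetical illustration of the fall-through: the first block handles the
// int16 case, but without an early return execution continues into the later
// dtype checks and lands on the error path anyway.
#include <cstdio>

void dispatch(bool int16_act_int8_weight, bool int8_act_int8_weight) {
  if (int16_act_int8_weight) {
    std::puts("handled int16 activations / int8 weights");
    // Missing `return;` here -- execution keeps going below.
  }
  if (int8_act_int8_weight) {
    std::puts("handled int8 activations / int8 weights");
    return;
  }
  std::puts("error: unsupported dtype combination");
}

int main() {
  dispatch(/*int16_act_int8_weight=*/true, /*int8_act_int8_weight=*/false);
  // Prints the handled message *and* the error message because of the
  // missing return.
}
```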
Review context:

    out_zero_point,
    offset,
    out
    );
Copilot AI · Nov 19, 2025
Missing return statement after handling the int16 activation case. The function will continue to execute the subsequent else if branches even after successfully handling the Short/Char case, potentially leading to incorrect behavior or error messages. Add a return; statement after line 295.
Suggested change:

    );
    return;
Review context:

    int64_t out_multiplier,
    int64_t out_shift,
    int64_t out_zero_point,
    __ET_UNUSED const optional<Tensor>& offset,
Copilot AI · Nov 19, 2025
The __ET_UNUSED marker should be removed from both the ctx and offset parameters since they are now being used in the int16 activation case (lines 284 and 293). This creates an inconsistency with quantized_linear_out (lines 211 and 220), where these markers have already been removed.
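For readers unfamiliar with the marker: an unused-parameter annotation exists only to silence compiler warnings for parameters the body never reads, so it becomes misleading once the parameter is used. The sketch below uses a stand-in macro, since the exact definition of __ET_UNUSED is not shown in this PR.

```cpp
// Stand-in for an unused-parameter marker (an assumption about the general
// shape of such macros, not ExecuTorch's actual __ET_UNUSED definition).
#if defined(__GNUC__) || defined(__clang__)
#define EXAMPLE_UNUSED __attribute__((unused))
#else
#define EXAMPLE_UNUSED
#endif

// Before: `offset` is never read, so the marker suppresses
// -Wunused-parameter without changing behavior.
int linear_before(int value, EXAMPLE_UNUSED int offset) {
  return value;
}

// After: the new int16 branch reads `offset`, so the marker is dropped to
// keep the signature honest (matching quantized_linear_out).
int linear_after(int value, int offset) {
  return value + offset;
}

int main() {
  return (linear_before(1, 2) + linear_after(3, 4) == 8) ? 0 : 1;
}
```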
Review context:

    define_operator(op)

    # quantized_linear_out and quantized_linear_per_tensor_out needs additional dependency for int16 support
    define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
Copilot AI · Nov 19, 2025
Remove the trailing comma after the closing bracket on line 126. While Python allows trailing commas in lists, this is inconsistent with the style used elsewhere in the codebase and may cause issues with some linters or build tools.
Suggested change:

    define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers"])
Review context:

    # quantized_linear_out and quantized_linear_per_tensor_out needs additional dependency for int16 support
    define_operator("quantized_linear_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
    define_operator("quantized_linear_per_tensor_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers",])
Copilot AI · Nov 19, 2025
Remove the trailing comma after the closing bracket on line 127. While Python allows trailing commas in lists, this is inconsistent with the style used elsewhere in the codebase and may cause issues with some linters or build tools.
Suggested change:

    define_operator("quantized_linear_per_tensor_out", deps=["fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:quantize_linear_out", "fbcode//on_device_ai/Assistant/Jarvis/min_runtime/operators:headers"])
Force-pushed from 9e4a0d9 to 0bfea2e (Compare).
Force-pushed from 0bfea2e to 40d03e1 (Compare).
Summary:
Context
We continue from D84284794 to add support for 16-bit activations. Note that although the kernels already support 16-bit activations, they do so only when the weights are also 16-bit. To support 16-bit activations with 8-bit weights, we need to change the way we template some functions.
Current Behavior
Right now, we're composing two macros together: the `ET_FORALL_JARVIS_QUANTIZED_TYPES_WITH_INT16` macro:
https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/operators.h?lines=22-25
and the function macro (`quantized_linear` chosen as an example):
https://www.internalfb.com/code/fbsource/[9e8c6d8466107f58aa3de1b9e4ec71c49d670a8f]/fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/generic/quantized_linear_out.cpp?lines=30-41
Together, they expand into a switch statement that calls the `quantized_linear` function with the correct template parameter. However, note that this assumes the input activations and weights have the same dtype, which is not the case here.
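A self-contained sketch of that pattern, with stand-in macro and kernel names (the real ones live in Jarvis/min_runtime/operators and are only linked above): a FORALL-style macro expands one case per dtype, and each case instantiates the kernel with a single template parameter, so activations and weights are forced to share a dtype.

```cpp
// Stand-ins for the internal FORALL/dispatch macros described above. The
// point of the sketch: a single template parameter means both operands get
// cast to the same ctype.
#include <cstddef>
#include <cstdint>
#include <cstdio>

enum class ScalarType { Char, Short };  // int8, int16

template <typename T>
void quantized_linear(const T* input, const T* weight, std::size_t n) {
  long long acc = 0;  // both operands are the *same* type T
  for (std::size_t i = 0; i < n; ++i) {
    acc += static_cast<long long>(input[i]) * static_cast<long long>(weight[i]);
  }
  std::printf("acc = %lld\n", acc);
}

// FORALL-style macro: one (ctype, ScalarType) pair per supported dtype.
#define FORALL_QUANTIZED_TYPES_WITH_INT16(_) \
  _(int8_t, Char)                            \
  _(int16_t, Short)

// Dispatch: a switch over the runtime dtype, one kernel instantiation per case.
void quantized_linear_out(
    ScalarType dtype, const void* input, const void* weight, std::size_t n) {
  switch (dtype) {
#define CASE(ctype, stype)                     \
  case ScalarType::stype:                      \
    quantized_linear<ctype>(                   \
        static_cast<const ctype*>(input),      \
        static_cast<const ctype*>(weight), n); \
    break;
    FORALL_QUANTIZED_TYPES_WITH_INT16(CASE)
#undef CASE
  }
}

int main() {
  int16_t x[3] = {100, 200, 300};
  int16_t w[3] = {1, 2, 3};  // must also be int16 under this scheme
  quantized_linear_out(ScalarType::Short, x, w, 3);  // prints acc = 1400
}
```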
This Diff
We fix the generic implementation by allowing two template parameters, one for the weight dtype and one for the input activation dtype.
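Continuing with the same stand-in names (again, a sketch rather than the internal code), the kernel below takes separate template parameters for the activations and the weights, which lets the dispatch instantiate the int16-activation / int8-weight combination directly:

```cpp
// Sketch of the fix: two template parameters, one per operand dtype, plus a
// dispatch on the (activation dtype, weight dtype) pair. Names are stand-ins
// for the internal Jarvis/min_runtime generic kernels.
#include <cstddef>
#include <cstdint>
#include <cstdio>

enum class ScalarType { Char, Short };  // int8, int16

template <typename InT, typename WeightT>
void quantized_linear(const InT* input, const WeightT* weight, std::size_t n) {
  long long acc = 0;  // operand dtypes may now differ
  for (std::size_t i = 0; i < n; ++i) {
    acc += static_cast<long long>(input[i]) * static_cast<long long>(weight[i]);
  }
  std::printf("acc = %lld\n", acc);
}

void quantized_linear_out(
    ScalarType in_dtype, ScalarType weight_dtype,
    const void* input, const void* weight, std::size_t n) {
  if (in_dtype == ScalarType::Char && weight_dtype == ScalarType::Char) {
    quantized_linear<int8_t, int8_t>(
        static_cast<const int8_t*>(input),
        static_cast<const int8_t*>(weight), n);
    return;
  }
  if (in_dtype == ScalarType::Short && weight_dtype == ScalarType::Char) {
    // The mixed-precision case this diff enables.
    quantized_linear<int16_t, int8_t>(
        static_cast<const int16_t*>(input),
        static_cast<const int8_t*>(weight), n);
    return;  // early return, as the review comments above also request
  }
  std::fprintf(stderr, "unsupported dtype combination\n");
}

int main() {
  int16_t x[3] = {100, 200, 300};  // int16 activations
  int8_t w[3] = {1, 2, 3};         // int8 weights
  quantized_linear_out(ScalarType::Short, ScalarType::Char, x, w, 3);  // acc = 1400
}
```

The real kernels also carry zero points, output multipliers, and shifts; only the dtype dispatch is sketched here.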
Reviewed By: hsharma35
Differential Revision: D86538176