Cortex_m backend: Add IO quantizers + tests of non rescaling ops #15590
Conversation
A number of ops only handle shape/metadata without changing the dynamic range. In these cases, no rescaling needs to be performed and the int8 portable_ops kernel can be used directly. A new test is added to ensure this behaviour, as well as a test showing that operators which do change the dynamic range (SUB) are not supported.

To support quantization of graphs with no-rescale ops at the beginning/end of the graph, two new quantizers, InputQuantizer and OutputQuantizer, are introduced. By explicitly stating the dtype of the input/output, no-rescale ops inherit dtypes from them as with any other op.

Signed-off-by: Adrian Lundell <[email protected]> Change-Id: I8f79b86b633f9ad8d9f183c914754b0ee2f7a87c
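A minimal sketch of why shape-only ops need no rescaling (the `quantize`/`dequantize` helpers and parameters below are illustrative, not the ExecuTorch implementation): a shape/metadata op such as reshape commutes with dequantization, so the int8 kernel can run directly on the quantized data and the input's quantization parameters remain valid for the output.

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine int8 quantization: q = round(x / scale) + zero_point
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

scale, zp = 0.05, 3
x = np.linspace(-2.0, 2.0, 16, dtype=np.float32).reshape(2, 8)
q = quantize(x, scale, zp)

# Shape-only op: run the int8 kernel directly on the quantized tensor.
out_int8 = q.reshape(4, 4)

# Dequantizing after the reshape matches reshaping the dequantized data,
# so no requantization step is needed for this op.
ref = dequantize(q, scale, zp).reshape(4, 4)
assert np.array_equal(dequantize(out_int8, scale, zp), ref)
```

By contrast, an op like sub changes the dynamic range: subtracting two int8 tensors that share quantization parameters does not in general dequantize to the float difference (the result can overflow the int8 range), which is why such ops need their own rescaling kernel.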
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15590
Note: links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (5 unrelated failures.) As of commit 094b370 with merge base 993254c:

FLAKY - the following job failed, but was likely due to flakiness present on trunk.
BROKEN TRUNK - the following jobs failed, but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Is this the idea in general? // Current state: // Annotate (This PR): // What later passes will do:
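The annotation idea discussed above can be sketched roughly as follows. This is a hypothetical toy pass (the graph representation, `NO_RESCALE_OPS` set, and `annotate` function are illustrative, not the actual ExecuTorch API): inputs get an explicit dtype from something like the InputQuantizer, and no-rescale ops simply inherit the dtype of their argument instead of receiving their own quantization annotation.

```python
# Ops that only touch shape/metadata and so preserve the dynamic range.
# (Illustrative set, not the backend's actual list.)
NO_RESCALE_OPS = {"reshape", "permute", "squeeze"}

def annotate(graph, input_dtype="int8"):
    """Propagate dtypes from annotated inputs through no-rescale ops.

    `graph` is a list of {"name", "op", "args"} dicts in topological order.
    Returns a name -> dtype mapping; None means the op needs its own
    quantization annotation (e.g. it changes the dynamic range).
    """
    dtypes = {}
    for node in graph:
        if node["op"] == "input":
            # Explicit dtype, as set by an input quantizer.
            dtypes[node["name"]] = input_dtype
        elif node["op"] in NO_RESCALE_OPS:
            # Inherit the dtype of the input; no requantization needed.
            dtypes[node["name"]] = dtypes[node["args"][0]]
        else:
            # Range-changing op (e.g. sub): must be annotated separately.
            dtypes[node["name"]] = None
    return dtypes

graph = [
    {"name": "x", "op": "input", "args": []},
    {"name": "r", "op": "reshape", "args": ["x"]},
    {"name": "s", "op": "sub", "args": ["r", "r"]},
]
d = annotate(graph)
assert d["r"] == "int8" and d["s"] is None
```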
Fix a merge issue causing the build to fail + update tests after merging of pytorch#15590 Signed-off-by: Adrian Lundell <[email protected]>
cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai