NXP backend: Fix shared quantization bugs. #13844
Conversation
Dr. CI: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13844. As of commit 30316cd with merge base 2a06efb: ❌ 5 new failures, 1 cancelled job.
@pytorchbot label "module: nxp" "release notes: nxp"
robert-kalmar left a comment:
The failures are unrelated to this PR.
Summary
Fix two bugs related to quantization parameters that are shared between multiple tensors/nodes (see the sketch after this list):
- Turn off bias tensor reuse in the Convolution converter.
- Fix `_has_shared_q_params_if_quantized` in the Node converter.
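Both fixes guard the same invariant: quantization parameters attached to one tensor cannot be silently assumed to hold for another. The Python sketch below is a simplified, hypothetical illustration, not the actual ExecuTorch/NXP implementation; the names `QParams`, `has_shared_q_params`, and `conv_bias_scale` are made up for this example. It shows the two ideas behind the fixes: a node may only be treated as having shared quantization parameters when every quantized input and output carries an identical (scale, zero_point) pair, and in common per-tensor schemes a convolution bias is quantized with scale = input_scale * weight_scale, so a bias buffer can only be reused when that product matches for both convolutions.

```python
# Hypothetical, simplified illustration -- not the actual ExecuTorch/NXP code.
from dataclasses import dataclass


@dataclass(frozen=True)
class QParams:
    """Per-tensor quantization parameters of one tensor."""
    scale: float
    zero_point: int


def has_shared_q_params(input_qparams: list[QParams],
                        output_qparams: list[QParams]) -> bool:
    """Return True only if every quantized input and output of a node carries
    identical (scale, zero_point). A single mismatch must make the check fail,
    otherwise a pass-through conversion would silently change the numerics."""
    all_params = input_qparams + output_qparams
    if not all_params:
        return True  # nothing is quantized, so nothing can conflict
    first = all_params[0]
    return all(p == first for p in all_params)


def conv_bias_scale(input_scale: float, weight_scale: float) -> float:
    """In common per-tensor quantization schemes the convolution bias is stored
    as int32 with scale = input_scale * weight_scale, so two convolutions can
    only share one bias buffer if both products match exactly."""
    return input_scale * weight_scale


# Same q-params on both sides: sharing is safe.
print(has_shared_q_params([QParams(0.02, 128)], [QParams(0.02, 128)]))  # True
# Different scales: must not be treated as shared.
print(has_shared_q_params([QParams(0.02, 128)], [QParams(0.05, 128)]))  # False
# Different bias scales: the bias tensor cannot be reused between the two convs.
print(conv_bias_scale(0.02, 0.001) == conv_bias_scale(0.02, 0.004))     # False
```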
Test plan
No direct unit tests are provided; correct functionality is covered by the existing tests that exercise quantized nodes.

Co-authored-by: Roman Janik <[email protected]>
cc @robert-kalmar @roman-janik-nxp @StrycekSimon @jirioc