Removed support for non-per-tensor quantized relu #14788
base: main
Conversation
Summary: Continued support for custom cadence ops. Reviewed By: hsharma35, eigen-k. Differential Revision: D83709868
Summary: The default for padding was incorrect; add a default with the correct dtype. Differential Revision: D83873533
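For illustration only, here is a minimal sketch of what a correctly-typed padding default could look like. The helper name `make_padding_default` and its signature are hypothetical, not the actual ExecuTorch change; the point is only that the padding constant must carry the input's dtype.

```python
import torch

def make_padding_default(input_tensor: torch.Tensor, pad_value: int = 0) -> torch.Tensor:
    # The padding constant must carry the same dtype as the quantized input
    # (e.g. int8); otherwise downstream kernels see a dtype mismatch.
    return torch.tensor(pad_value, dtype=input_tensor.dtype)
```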
Summary: Fix to call only the per-tensor variants for quantized conv and quantized relu, since those are the only ones we are supporting. Differential Revision: D83873738
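As a rough, hypothetical sketch of the idea (not the actual ExecuTorch/Cadence pass), a generic FX rewrite that retargets nodes onto caller-supplied per-tensor overloads might look like this; the function name and the mapping contents are assumptions for illustration.

```python
import torch
from torch.fx import GraphModule, symbolic_trace

def retarget_to_per_tensor(gm: GraphModule, default_to_per_tensor: dict) -> GraphModule:
    # Walk the FX graph and swap any call to a "default" quantized op for the
    # caller-supplied per-tensor overload; all other nodes are left untouched.
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target in default_to_per_tensor:
            node.target = default_to_per_tensor[node.target]
    gm.graph.lint()
    gm.recompile()
    return gm

if __name__ == "__main__":
    class M(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x)

    gm = symbolic_trace(M())
    # Stand-in mapping for the demo; the real pass would map the default
    # quantized conv/relu ops to their per-tensor overloads.
    gm = retarget_to_per_tensor(gm, {torch.relu: torch.nn.functional.relu})
    print(gm.graph)
```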
Summary: The original pass didn't fetch the user-provided zero point when one existed; it just assumed a hard-coded zero point. Fixed now. Differential Revision: D83873937
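A minimal, hypothetical sketch of the fix's intent, assuming the zero point arrives positionally in the node's args; the helper name, argument layout, and default value are illustrative only.

```python
def get_zero_point(args: tuple, zero_point_index: int, default_zero_point: int = 0) -> int:
    # Prefer the zero point the user actually passed; fall back to the
    # hard-coded default only when no value was provided.
    if len(args) > zero_point_index and args[zero_point_index] is not None:
        return int(args[zero_point_index])
    return default_zero_point
```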
Summary: We are not supporting quantized relu default, so remove it from ref_implementations. Differential Revision: D83874866
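For context, a simplified per-tensor quantized ReLU reference could look like the sketch below. It assumes the output reuses the input's scale and zero point (no requantization), which is a simplification rather than the exact ref_implementations entry.

```python
import numpy as np

def quantized_relu_per_tensor_ref(x: np.ndarray, in_zero_point: int) -> np.ndarray:
    # In the quantized domain the real value 0.0 maps to in_zero_point, so
    # ReLU clamps everything below that point up to it.
    return np.maximum(x, np.array(in_zero_point, dtype=x.dtype))
```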
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14788
Note: Links to docs will display an error until the docs builds have been completed.
❌ As of commit 4927981 with merge base b021fd0: 2 new failures, 1 cancelled job (please retry the cancelled job).
This comment was automatically generated by Dr. CI and updates every 15 minutes.