Fixed assumption on out_shift for quantized linear #14789
Conversation
Summary: Continued support for custom Cadence ops. Reviewed By: hsharma35, eigen-k. Differential Revision: D83709868
Summary: The default for padding was incorrect; adding a default with the correct dtype. Differential Revision: D83873533
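A rough sketch of the idea behind the padding fix. The helper name and the convention of padding a quantized tensor with its zero point are assumptions for illustration, not the actual pass code:

```python
# Hypothetical sketch: build a default padding value whose dtype matches the
# quantized input, instead of relying on an implicitly-float default.
import torch

def default_padding_value(input_tensor: torch.Tensor, zero_point: int) -> torch.Tensor:
    # For a quantized input, "zero" padding should be the zero point,
    # expressed in the input's integer dtype (e.g. int8), not float32.
    return torch.tensor(zero_point, dtype=input_tensor.dtype)

# Example: an int8-quantized activation with zero point 3.
x = torch.zeros(1, 4, 8, 8, dtype=torch.int8)
pad_value = default_padding_value(x, zero_point=3)
assert pad_value.dtype == torch.int8
```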
…ariants Summary: Fix to just call the per-tensor variants for quantized conv and quantized relu, since those are the only ones we support. Differential Revision: D83873738
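Illustrative sketch only of routing the default overloads to their per-tensor variants; the op identifiers and mapping below are placeholders, not the real Cadence op names or pass implementation:

```python
# Placeholder mapping from "default" quantized ops to their per-tensor variants.
PER_TENSOR_VARIANTS = {
    "quantized_conv.default": "quantized_conv.per_tensor",
    "quantized_relu.default": "quantized_relu.per_tensor",
}

def resolve_op(op_name: str) -> str:
    # Only per-tensor variants are supported, so rewrite the default
    # overloads to point at them; anything else passes through unchanged.
    return PER_TENSOR_VARIANTS.get(op_name, op_name)
```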
Summary: The original pass didn't fetch the user-provided zero point when one existed; it just assumed a hard-coded zero point. Fixed now. Differential Revision: D83873937
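A minimal sketch of the zero-point fix, assuming the zero point arrives as a positional node argument; the argument layout, index, and default value here are assumptions, not the actual pass:

```python
# Hypothetical helper: read the zero point the user actually provided on the
# node instead of always assuming a hard-coded value.
DEFAULT_ZERO_POINT = 0

def get_zero_point(node_args: tuple, zero_point_index: int) -> int:
    # Previously the pass effectively returned DEFAULT_ZERO_POINT unconditionally;
    # now it honors a user-provided value when one is present.
    if len(node_args) > zero_point_index and node_args[zero_point_index] is not None:
        return int(node_args[zero_point_index])
    return DEFAULT_ZERO_POINT
```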
Summary: Not supporting quantized relu default, so removing it from ref_implementations. Differential Revision: D83874866
Summary: out_shift should be int32. Differential Revision: D83875670
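Minimal sketch of why out_shift ends up as an int32 tensor, assuming the usual fixed-point requantization scheme where a float scale is decomposed into an integer multiplier and a right shift. The decomposition helper is an illustration, not the actual Cadence kernel or pass code:

```python
# scale ~= out_multiplier * 2**(-31) * 2**(-out_shift)
import math
import torch

def decompose_scale(scale: float) -> tuple[torch.Tensor, torch.Tensor]:
    # frexp gives scale = mantissa * 2**exponent with mantissa in [0.5, 1).
    mantissa, exponent = math.frexp(scale)
    out_multiplier = torch.tensor(round(mantissa * (1 << 31)), dtype=torch.int32)
    # The fix: out_shift is an int32 tensor, not the previously assumed dtype.
    out_shift = torch.tensor(-exponent, dtype=torch.int32)
    return out_multiplier, out_shift

out_multiplier, out_shift = decompose_scale(0.0123)
assert out_shift.dtype == torch.int32
```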